OpenAI & Google Just Made Their Best Models Free


Transcript of a YouTube video from the Matt Wolfe channel.

**Well, it's been another insanely busy week in the world of AI, and I don't want to waste your time, so let's get into this week's AI news breakdown.** Starting with news that actually came out last week: I record these videos on Thursdays, and this news came out on Friday of last week, when OpenAI released **o3-mini**. We did talk about it in last Friday's video because we knew it was going to come out that day, but now that we actually have access to it, I figured let's talk about it real quick. This new **o3-mini** model outperforms pretty much every other model out there in math, except for **o1 Pro**, which is not actually listed on this chart. On PhD-level science questions, the **o3-mini-high** version beats everything else that's out there, except for, of course, **o1 Pro**. It's good at coding, good at software engineering, and it's pretty much the most powerful model on the market other than **o1 Pro**, which is only in the $200-a-month tier. This new **o3-mini**, however, is available in every tier and available in the API as well. Pro users get unlimited access to **o3-mini**, and Plus and Team users get triple the rate limits versus **o1-mini**. Free users can try **o3-mini** in ChatGPT by selecting the **Reason** button under the message composer. So even free ChatGPT users are getting access to this newest state-of-the-art model from OpenAI. You can even combine **o3-mini** with their search feature, even on free plans. OpenAI said, "Try search plus reasoning together in ChatGPT." Free users can use OpenAI **o3-mini** with search by selecting the **Search** and **Reason** buttons together. So if you're on the free plan and you want to use the new **o3-mini** model, you select the **Reason** button; if you want to combine it with search, you select both **Search** and **Reason**.

When it was originally released for free members, it didn't actually show the **chain of thought**, but as of February 6th, even that's been updated for both free and paid users. OpenAI said, "Updated chain of thought in OpenAI o3-mini for free and paid users, and in o3-mini-high for paid users." Now, the **chain of thought** that it's showing here isn't actually the true **chain of thought** that's happening. It's not like what you see in **DeepSeek R1**, where you see literally everything the model is thinking before it gives you the response. This gives you a summarized version of what it's thinking before it gives you a response. McKay Wrigley even argues that this is actually worse than giving us nothing at all. He says, "o3-mini is exceptionally great, but I do worry that summarized chain of thought is actually worse than nothing at all. True chain-of-thought exposure acts as a prompt debugger; it helps us steer the model. Summarized chain of thought escapes this and potentially adds errors, making it harder to debug." So with something like **DeepSeek R1**, where you can see literally everything it's thinking, if it gives you an incorrect answer you can go back through the **chain of thought** and figure out where it screwed up. With the summarized chains of thought that **o3-mini** gives us, you can't really do that.

But in my opinion, the even bigger news to come out of OpenAI this week wasn't that they gave us **o3-mini** on Friday; it was that over the weekend, they gave us **Deep Research**.
Unfortunately, **Deep Research** is only available to Pro users on the $200-a-month plan, which I know puts it out of reach for a lot of people. But I have used it, and it is really, really good. It is kind of interesting that they named it **Deep Research**, because Google already has a **Deep Research** feature in **Gemini**. It's exactly the same name, which is definitely going to confuse people, but it does work really well. I asked **Deep Research** to help me with a YouTube strategy. It actually gave me some follow-up questions so that it could better understand what I was trying to accomplish: my current strategy on long-form versus short-form videos, my current video length and format, how I decide on tutorials, what my competitors are doing, and what my monetization focuses are—things like that. I answered its questions, and then it gave me an absolute beast of a write-up on how I should manage my YouTube channel. It is really, really in-depth and honestly created an amazing, killer strategy. I'm literally following through on this strategy with my YouTube channel now. It wrote up this giant essay here, and I actually pasted it back into ChatGPT. This is the entire write-up that it gave me. I pasted it into **GPT-4o** and asked it to give me a step-by-step checklist, and you can see here that it simplified everything and gave me a checklist of what to do for my channel. It even gave me a four-week breakdown to dial it all in.

So **Deep Research** has been a game-changer for me. I know it's on the $200-a-month plan, but had I hired a YouTube consultant to look at my channel, analyze everything I was doing, and give me a detailed 10-page report with a step-by-step checklist of what I need to do on the channel, they would have charged me way more than $200. So I feel like I got the value out of that alone. But I also don't want you to feel like I'm trying to sell you on the $200-a-month plan. For most people, it's probably still not worth it; I've just personally found a lot of value in it.

There's a recent benchmark that came out titled **Humanity's Last Exam**, and you can see how some of the existing models performed on it. **GPT-4o** got 3.3% accuracy, OpenAI's **o1** got 9.1%, **DeepSeek R1** got 9.4%, the new OpenAI **o3-mini-high** got 13.0%, and OpenAI with **Deep Research** got 26.6% accuracy, if you have a Pro account. If you combine **o1 Pro** with **Deep Research**, it is hands down the most powerful AI large language model setup I have ever tried. It is absolutely insane, because it does the research for you using **Deep Research** (it goes off on the web and searches out sources as part of the research) and then uses **o1 Pro's** reasoning to really think through everything it came back with. That's how I got that insanely detailed report on what I should do with my YouTube channel. It wasn't only using what was in its training data; it literally did the research, did the chain-of-thought reasoning, and then spit back out that entire report. That's what makes it so powerful: when you start combining all of these things, they add up to an insanely powerful experience where the output is just mind-blowing. Even if you're in the EU, you now get access to **Deep Research** too. **Deep Research** has rolled out to 100% of Pro users, including in the UK, EU, Norway, Iceland, Liechtenstein, and Switzerland.
One interesting thing Sam Altman said not long after this came out: "My very approximate vibe is that it can do a single-digit percentage of all economically valuable tasks in the world, which is a wild milestone." Yes, only a single digit, meaning somewhere between 1 and 9%, but that single-digit percentage still likely adds up to billions of dollars' worth of value that **Deep Research** is capable of producing. Not only that, but Sam teased that there's still something else coming. He said, "Note this is not the one more thing for o3-mini; a few more days for that." He said that on the same day **Deep Research** came out. Essentially: o3-mini came out, then, "oh, here's **Deep Research**, which makes all of this stuff even better," and there's still one more thing to show us that they're not telling us about yet.

But OpenAI wasn't even done with announcements this week. They had a handful of smaller ones, like the fact that ChatGPT search is now available to everyone over at chatgpt.com, no sign-up required. So if you don't want to use Google search anymore and you'd rather use ChatGPT for your searches, you can just go to chatgpt.com and do web searches combined with AI without even logging in. That makes it a true competitor to what Perplexity is doing. They also increased the memory limit in ChatGPT for Plus, Pro, and Team users by 25%. So yeah, it's been a big week for OpenAI.

Since OpenAI had so much going on this week, they took to Reddit to do an AMA where Sam Altman, Mark Chen, Kevin Weil, Srinivas Narayanan, Michelle Pokrass, and Hongyu Ren (I'm sure I butchered at least one of those names) all joined in. A few comments they made: they are still planning on doing a **4o image generator**, an image generator that's different from **DALL-E**. They mentioned there are updates coming to **Advanced Voice Mode**, and that they're not giving the next flagship model a different name; it'll just be **GPT-5**. They talked about how they're planning on increasing context length, and they're working on the ability to attach files to the reasoning models like **o1** and **o3**. But the comment that's probably gotten the most press, that most people have been talking about, was when Sam Altman said, "I personally think we've been on the wrong side of history here and need to figure out a different open-source strategy." This was in response to somebody asking, "Would you consider releasing some model weights and publishing some research?" He goes on to say, "Not everyone at OpenAI shares this view, and it's also not our current highest priority." Essentially, Sam Altman believes they've been on the wrong side of history with open source and that maybe they should have been open-sourcing more of this stuff along the way instead of keeping it all closed off.

But besides OpenAI, Google had a huge week as well, releasing a bunch of new models, including **Gemini 2.0**. The new **Gemini 2.0** models look pretty strong across the benchmarks, although those charts only compare them to previous **Gemini** models and not to the whole range of AI models that are available. With this release they actually shipped three new models: **Gemini 2.0 Flash**, which is now generally available; **Gemini 2.0 Flash-Lite**, a more efficient version of **Gemini 2.0 Flash**; and **Gemini 2.0 Pro**, their best state-of-the-art model that they're making available right now.
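If you're a developer and want to poke at these new models right away, here is a minimal sketch of what a call to the generally available Flash model looks like. Treat it as an illustration rather than official sample code: it assumes the `google-genai` Python SDK, an API key from Google AI Studio exported as `GEMINI_API_KEY`, and the `gemini-2.0-flash` model ID.

```python
# Minimal sketch: one text-generation call against Gemini 2.0 Flash.
# Assumes `pip install google-genai` and a GEMINI_API_KEY environment variable.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",  # the generally available Flash model mentioned above
    contents="Give me a three-bullet summary of this week's AI news.",
)

print(response.text)
```

Swapping in the Flash-Lite or Pro model IDs (whatever identifiers show up for them in AI Studio) is how you'd compare the rest of the lineup for your own use case.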
They also have their **Gemini 2.0 Flash Thinking** model, which does some of that extra thinking at inference time, like we're seeing from **o1**, **o3**, and **DeepSeek**. The two **Gemini Flash** models both have a 1 million token context window, while **Pro** has a 2 million token context window. Pretty soon, **2.0 Flash** and **Pro** are going to be able to output audio and images as well. We recently had Logan Kilpatrick from Google on **The Next Wave** podcast; his episode comes out next week, and he goes into some detail about what's actually coming with these **Gemini** models, and it's pretty exciting.

But the biggest deal around these new **Gemini** models is not necessarily how powerful they are; it's how inexpensive they are to use. If you're a developer and you want to build with the **Gemini API**, **Gemini 2.0 Flash** costs 10 cents per million tokens. To put that into context, the **GPT-4o API** costs $10 per million tokens, their **o1** model costs $60 per million tokens, **Claude 3.5 Sonnet** is $15 per million tokens, and even **Hugging Face's** smallest model is still $4 per million tokens. To be fair, those figures for the other models are output prices, and the 10 cents for **Gemini 2.0 Flash** is its input price; comparing outputs directly, **Gemini 2.0 Flash** is 40 cents per million output tokens versus $15 per million output tokens. That's still quite a price break. So if you're a developer and you want to build with a large language model API as inexpensively as possible, **Gemini 2.0** is definitely your route right now.

Now, when it comes to actually comparing these models against other models, there are really two places to look, especially if you're confused by all the benchmarks the labs share. The first is the **LM Arena**, where people are given a blind test: they enter a prompt, they get two outputs, and they pick which of the two outputs they like better, and that's how the ranking is generated. Based on that blind testing, **Gemini 2.0 Flash Thinking** is the number one ranked model overall right now, just based on users giving it an input, not knowing they're getting **Gemini** back, and then voting **Gemini** as the best response. **Gemini 2.0 Pro**, the new one that came out on February 5th, came in second place, followed by **GPT-4o**, **DeepSeek R1**, and then **Gemini 2.0 Flash**. **Gemini** holds three of the top five spots right now, while the new model from OpenAI, **o3-mini**, falls all the way down at around tenth place.

The other place I like to look at models is **OpenRouter**, which I actually learned about from Logan Kilpatrick when he was on our podcast the other day. This looks at which models are actually getting the most use, so it isn't based on voting; it's based on what's actually being used right now. OpenRouter routes API requests to all of these models, so it can see which ones developers are really calling. On the day of this recording, which is Thursday, February 6th, **Claude Sonnet** holds the top two spots in the all-categories section, but Google's **Gemini** models hold the third and fourth spots. When it comes to usage right now, **Claude** and **Gemini** are being used more than the OpenAI APIs, at least today. If we look at **Top This Week**, it's a very similar story: **Claude**, **Claude**, **Gemini**, **Gemini**, followed by OpenAI.
**Top This Month**: **Claude**, **Claude**, **Gemini**, **Gemini**. And if we look at the trending view, to see which models people are switching over to and using more and more recently, look at the number one right here: **Gemini 2.0 Flash**. That's the most trending model right now, across all categories. If we look at programming, we've got **Claude**, **Claude**, **Flash**. If we look at technology, we've got **Claude**, followed by **Flash**. And if we look at translation, **Gemini Flash**, the previous generation of the model, is number one. It's kind of a cool resource for keeping tabs on which AI models are actually getting the most use at the moment.

But Google had some other news this week for developers who use their API: you can now use the **Imagen 3** image generator through the API. We've looked at **Imagen 3** quite a bit in previous videos; it is a really, really solid model. In fact, if we jump back over to the arena and click on the leaderboard, the text-to-image leaderboard shows that **Imagen 3**, the model from Google, is ranked as the top model. These are ranked the same way: you're given two images for a prompt, you pick which one you like best, it doesn't tell you which model made which until after you've picked, and that's how this stuff gets ranked. **Imagen 3** is number one, followed by **Recraft**, followed by **Ideogram**, and so on down the line, with **Stable Diffusion** falling in last. If you're a developer and you want to use this model in your workflow, you now have access to it. If you're not a developer and you just want to play with **Imagen 3**, the best way to do it is over in **Google Labs** with the **ImageFX** tool at labs.google/fx. **ImageFX** is actually using the **Imagen 3** model, and it's totally free to use and play around with right now.

Oh, and since I was just talking about **Gemini** for all that time, I forgot to mention you can use all the **Gemini** models for free as well. If you go over to **AI Studio** at aistudio.google.com, over on the right you have the option to select from various models, and this is totally free right now. You've got **Gemini 2.0 Flash**, **Flash-Lite**, **Pro**, the experimental **Flash Thinking** model, plus all of their previous models and their open **Gemma** models, all available for you to play with and prompt. You get the million-plus token context window as well, all totally free to use over at **AI Studio**.

Another cool resource for you: you know that feeling when you're trying to get help from a company and you end up stuck in an endless loop of "let me transfer you to the right person" or "we'll get back to you in 24 to 48 hours," and even when you finally do get help, they still need to do manual things like check your order status or schedule a meeting with you? It's like watching somebody use Internet Explorer, but in 2025—painfully unnecessary. That's why for this video I partnered with **Chatbase**. They're revolutionizing the customer experience with AI agents that don't just chat; they actually do things for you. We're talking AI that can instantly book meetings through Calendly, create support tickets in Zendesk, or even check real-time data from your own systems. What makes this really cool is that these AI agents can be trained on your own business data. They're not just giving generic responses; they're providing personalized help that actually makes sense.
It can do things on behalf of your business for your customers, like upgrading their subscription, adding members to a dashboard, or checking the limits of their plan, all based on your custom workflows. Plus, it works across all your channels, from your website to WhatsApp to Slack, so your customers can get help wherever they are. And the best part? You don't need to be a coding wizard to set this up. **Chatbase** has made it super simple to set up and manage these AI agents, no matter what your technical level is. If you want to see how **Chatbase** can transform your customer experience from "please wait" to "it's done," check out the link in the description. Trust me, your customers are going to thank you for this one, and thank you so much to **Chatbase** for sponsoring this video.

There is a little bit of darker news out of Google this week. Google removed some terms from its AI principles; they removed the pledge not to use AI for weapons and surveillance. In fact, I believe that when Google acquired **DeepMind**, one of the terms **DeepMind** put in place around that acquisition was that Google had to agree not to use AI for weapons and surveillance. That was part of the agreement with **DeepMind**, so it is very interesting that they've flipped on this stance. Demis Hassabis, the CEO of **DeepMind**, seems to be on board with the change, because he said, "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape." Demis Hassabis and Mustafa Suleyman, the original founders of **DeepMind**, both put that no-weapons rule in place at the beginning. Mustafa is out; he's over at Microsoft now, and it seems like Demis has changed his thinking on it.

Alright, so OpenAI and Google were the big stories of the week, but there was a handful of smaller but still interesting AI news this week, so now I'm going to rapid-fire a whole bunch of other little things that happened in the world of AI. Starting with the fact that **Mistral AI**, a sort of competitor to OpenAI out of France, launched a new version of **Le Chat**. Now, they've had **Le Chat** for a while; it's a free chatbot that you can find over at chat.mistral.ai, and it can do a lot of the same things you would get out of ChatGPT—things like search the web, generate images, run a code interpreter, and it even has a canvas mode where it'll put any code or writing inside a canvas, very similar to ChatGPT. They now have a Pro plan, which I believe is $15 a month, that gives you even more access and higher daily message limits, but even the free version is still pretty dang impressive. The most impressive part about **Mistral's** chatbot is how fast it is. People have been reporting outputs of around 1,000 tokens per second when they ask it a question, which is mind-blowingly fast. In fact, I came across this video from Val on X, who is an intern over at **Mistral AI**, and, well, just check this out. They give it the prompt, "Generate me a kawaii calculator in canvas," and we can see that it generated everything in near real-time. That calculator that popped up happened in real time. I didn't speed up this video; they didn't speed up their video.
They gave it the prompt to generate the kawaii calculator, and it generated the code, showed an example of it, and then they started giving it extra prompts like, "Now make it nature-themed," and within seconds it created a nature-themed calculator, and it's all practically instant. That's how fast it is. And Val says, "No, this video is not sped up—genuinely mind-blowing," and it's available to all users right now, so it's available for free. Just to give it my own test, I made sure I had the canvas turned on and typed, "Generate a kawaii calculator," and we'll see how fast this is. I'm not going to speed this up at all; this is my own test. When I press the button, I'll keep on talking. It wrote all that code practically instantly, and that was super fast. It created it in HTML, so let's double-check how it did. Here's the calculator it generated. Let's see if it actually works: 9 + 9 = 18; 18 × 2 = 36. So this calculator actually works. It's pink and yellow, and it generated it in 2 seconds—mind-blowingly fast!

There was a little bit of news out of **Anthropic** this week. They gave us an area where we can try to jailbreak **Claude** and see if we can get it to output dangerous responses. There are eight levels to get through, and they actually have a bounty where they'll pay you if you manage to jailbreak all eight questions. So far, nobody's managed to do it. There's a little more news around **Anthropic**: **Lyft** is starting to use **Claude** for customer service, claiming it reduces the average resolution time for a request by 87%. So if you're using **Lyft** and you run into issues and contact customer support, it's actually **Claude** helping you get through whatever issue you've got.

We also learned that **Amazon** has an **Alexa** event coming up on February 26th. A spokesperson said the event is **Alexa**-focused but declined to elaborate, so really all we know is that they have an event coming up and they're going to be talking about **Alexa**. Most people believe they're going to roll out **Alexa** with a much smarter AI. Amazon has said in the past that the AI in **Alexa** is going to be powered by **Anthropic's Claude**, so that's the announcement everybody's expecting on February 26th: that **Alexa** will now use **Claude** and won't be as dumb as it used to be.

**GitHub Copilot** now has what they call **Agent Mode**. The new **Agent Mode** is capable of iterating on its own code, recognizing errors, and fixing them automatically. It can suggest terminal commands and ask you to execute them, and it analyzes runtime errors with self-healing capabilities. So it sounds to me like it's using one of these reasoning models, where it will generate code, double-check its own code, and then give you the result. GitHub says that in **Agent Mode**, **Copilot** will iterate not just on its own output but on the results of that output, and it will keep iterating until it has completed all the subtasks required to fulfill your prompt. Instead of performing only the task you requested, **Copilot** can now infer additional tasks that weren't specified but are necessary for the primary request to work. Even better, it can catch its own errors, freeing you up from having to copy-paste from the terminal back into chat.
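To make that "iterate until it works" idea concrete, here is a minimal conceptual sketch of a generate, run, and fix loop in Python. This is not Copilot's actual implementation; the `ask_model` helper is a hypothetical stand-in for whatever LLM call your own tooling makes.

```python
# Conceptual sketch of an "agent mode" style loop: generate code, run it, feed any
# runtime errors back to the model, and repeat until it succeeds or a retry cap is hit.
# `ask_model` is a hypothetical stand-in for an LLM call; this is not Copilot's real code.
import subprocess


def ask_model(prompt: str) -> str:
    """Hypothetical LLM call that returns Python source code for the prompt."""
    raise NotImplementedError("wire this up to whatever model API you use")


def agent_loop(task: str, max_iterations: int = 5) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    for _ in range(max_iterations):
        code = ask_model(prompt)
        # Run the generated script in a subprocess and capture its output.
        result = subprocess.run(
            ["python", "-c", code], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return code  # the script ran cleanly; stop iterating
        # Otherwise, hand the traceback back to the model and ask for a fix.
        prompt = (
            f"The following script failed:\n{code}\n\n"
            f"Error output:\n{result.stderr}\n\n"
            "Fix the script and return only the corrected code."
        )
    raise RuntimeError("agent loop did not converge within the retry limit")
```

The point of the sketch is the feedback step: on every retry the model sees its previous attempt plus the real traceback, which is exactly the copy-paste-from-the-terminal loop that an agent mode automates.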
I've personally never used **GitHub Copilot**; I've been much more on the **Cursor** train myself. But this sounds really handy: having it double-check its own work and pull things in from the terminal when something's not working properly just sounds like a great quality-of-life update that I imagine tools like **Cursor** will get as well. And since we mentioned **Cursor**, I want to point this out real quick, because I found it fascinating: **Cursor** is literally the fastest-growing SaaS company in the history of SaaS. SaaS is Software as a Service, and if we look at this chart, we can see **Cursor's** growth curve. It took basically one year to get to $100 million in annual recurring revenue. We can compare it to **OpenAI**, **CoreWeave**, and **DocuSign** and their respective charts: it took **DocuSign** 10 years to get to $100 million in annual recurring revenue; it took **Cursor** only one. That's pretty mind-blowing how quickly **Cursor** is growing, and I think it comes down to the fact that tools like **Cursor** make it so literally anybody on the planet can write little pieces of software for themselves. I've used it multiple times to solve little problems in my own workflows. For example, I wanted a tool to quickly convert files from any image format into a JPEG. I used **Cursor** to create that app in about 15 minutes, and now I have a simple workflow where, whenever I grab an image from any app or download it from the Internet, I don't have to open it in a photos app and save it as a new file. I just drag and drop it onto a box, and it converts it for me automatically. It saves me so much time, and I've created a handful of little tools like that thanks to tools like **Cursor**. I don't really know how to code, so I can see why it's growing so quickly; it has totally democratized the ability to make simple apps.

Alright, let's move on to the creative side of AI, because there have been a handful of updates in that world as well, including the fact that if you use **Grok** inside of **X**, you can now actually edit images. If I head over to **Grok** inside of **X** and tell it to generate an image of a wolf howling at the moon, we get four images. Now, if I click on one of these images, there's a new button that says **Edit with Grok**. I can click on it and describe what I want to change in the image. I'll say, "Make the sky a red color." We give it that prompt, and we get pretty much the same image composition back, but now the sky is a reddish color.

**Pika Labs** rolled out a couple of new features this week, including **Pikascenes**, which lets you upload an image of your pet and turns that image into an AI-generated video of your pet doing something interesting. They also rolled out a new feature called **Pikadditions**. This is where you give it a real-life video plus an image, and it takes what was in that image and adds it to your video, like this rabbit we see here, or this person opening their laundry where an octopus climbs out. Here's a video of a woman with curlers in her hair, and then a lion pushes her aside with curlers in his hair. You can see people playing basketball, and here's an image of a bear; it puts the bear in with them. Somebody opening a door, somebody doing yoga with a train behind them. So you can basically give it any video plus an image, and it will figure out how to work that image into the video.
It's called **Pikadditions**, and this little baby popping out of the trash can is probably my favorite scene I've seen from it. If we head on over to **pika.art**, we can see down at the bottom we have a few new buttons, like **Pikascenes** and **Pikadditions**. So if we do **Pikascenes**, I can throw in a picture of my dog, give it a prompt like "the pet is flying on a private jet," and here's the video we got out of it, with my dog flying on a private plane. It actually looks pretty good, kind of looks like him, other than the fact that his back legs don't move properly when he's walking around. It got the face and head looking pretty accurate, honestly. So that was **Pikascenes**.

Now let's try **Pikadditions**. You'll notice you can upload a video and an image, and it pre-fills the prompt for you: "Add this to my video based on the current actions in the original video. Come up with a natural and engaging way to fit the object into the video." So I uploaded a quick video of me talking in front of my camera, and I threw in an image of a wolf howling at the moon. Let's see what happens when we try to blend those two together. My first attempt did not work at all; it kind of made my face look a little more AI-generated, but it didn't add the wolf howling at the moon. Let's add a donut and see what happens. Well, this time I can definitely see it added a donut. Let's see what it looks like. It pretty much just put the donut in the corner of the video. I guess you probably need a video with a little more action going on than me just talking into the camera. But that's **Pikadditions**—something fun to go play around with.

I also wanted to show off what came out of **Topaz Labs** this week, a company that makes a really, really good upscaler. I use it to upscale images all the time, and I use it to upscale video footage all the time. They just released what they call **Project Starlight**, which is the first-ever diffusion model for video restoration. It takes old, low-quality videos and turns them into high-resolution videos. Take a peek at this video of a Muhammad Ali fight: you can see on the left how grainy and pixelated it is, and the one on the right is the upscaled version that uses **Project Starlight**. It's pretty impressive how much higher quality it is. Here's another example where we can see a side-by-side of something that looks like it was recorded on a VHS tape next to something that looks quite a bit better. It looks like it's in early access right now, and you have to like and comment to get access, so I'll link it up in the description if you want to get involved.

There's also some really cool research that came out this week, like **OmniHuman-1**, which is basically a tool where you can give it a single image and an audio file, and it will combine them to make a deepfake. So check this out: here's a 10-second clip. The first frame was the image they uploaded, the audio you're going to hear was the audio they uploaded, and it turned them into a deepfake of that person talking: "Give people something to believe in, and they will move from you and me to us." And here's another one with **Einstein**: "What would art be like without emotions? It would be empty. What would our lives be like without emotions? They would be empty of values."
So we're at a point now where you can have just an image of a person and a sound bite from that person, which could even be made in **ElevenLabs**, so it could be something they never actually said, and you can combine those two and make a deepfake with them. That's **OmniHuman-1**. Then there's also **VideoJAM**, which is a new way of training video models that makes them much more coherent. We can see gymnastics and what it looks like from most video models on the left, and then, looking at the version on the right, it actually looks like somebody doing gymnastics. It figured out the proper physics and how people should move. Here's another one of somebody doing a weird rings routine: the left version doesn't look right, but if we look at the updated version on the right, you can see it actually figured out how it should look. This is, again, a new way of training these AI video models so they have a much better understanding of physics and how things should move. You're going to see this in a lot of other video models; you'll probably see it in **Kling**, **Runway**, **Hailuo AI**, **Pika**, and all these other tools, because with this research they can essentially attach it to their existing technology. Now, I'm not going to go too deep into these research papers, because I actually did a video earlier this week called **Seven Insane AI Video Breakthroughs You Must See**. I talk about those two papers I just showed you, as well as five other papers that I find really fascinating, all of which came out within the last couple of weeks. So check that out if you want to dive deeper into all of this cool AI research—research we may not have access to yet, but that's probably weeks, maybe months, away from being publicly available for anybody to get their hands on.

A couple of last quick things: there's a new bill introduced, I believe in the Senate, that wants to make it illegal to download **DeepSeek**, with a penalty of up to 20 years in prison. Now, I don't think this thing is ever going to get passed, but there are people in the government who want to make it illegal to use some of these open-source models. It's something to be aware of. And in the final bit of news I'll share this week, **The Beatles** won a Grammy for a song that was assisted by AI. Their song **Now and Then** used AI to clean up some old vocals that **John Lennon** recorded; they put together a song with those AI-remastered vocals, and the song went on to win a Grammy. So that's pretty cool!

And that's what I've got for you today. Again, another week with tons of news. I've mentioned before that it's not going to slow down anytime soon; it didn't slow down this week, and I doubt it's going to slow down next week. So if you want to make sure you stay looped in on all of the latest AI news, I make a breakdown video every single Friday where I try to cover all of the news I think is worth talking about from the past week in the world of AI. I also like to create tutorials and talk about different tools and research coming out in the AI world. So if that's the kind of stuff you're into, give this video a like and maybe consider subscribing to this channel. That'll make sure more stuff like this shows up in your YouTube feed. I've also been doing some experimenting with the channel; you'll probably notice I've been testing new thumbnail styles, new titling styles, new video styles, and things like that.
So if you have feedback for me, I'd love to hear it. Put it in the comments. I really, really appreciate anything you guys put in the comments, and actually useful feedback is especially valuable to me. Finally, before I go, I should remind you to check out **Future Tools**. This is the site I built where I share all the cool AI tools I come across, and I add tons of new tools every single day. There are just so many AI tools out there, so I made it super easy to filter them and find the exact tool you're looking for, for your needs. I even put a "Matt's Picks" filter on there so you can find the tools that I think are the most interesting right now. I keep the AI news page up to date on a daily basis; I keep it simple and basic, just a list of all the important AI news that's happening. If you want the latest news and the coolest tools mailed to you twice a week, join the free newsletter. I'll keep you looped in directly in your inbox, and by joining the free newsletter you also get access to the AI income database, a little database I've been building of cool ways to make money using the various AI tools that are available. Again, it's all free over at **Future Tools**. Thank you so much again for tuning in. Thank you for nerding out with me today. Thanks so much to **Chatbase** for sponsoring this video. I really appreciate all of you for tuning in, and I will hopefully see you in the next one. Bye-bye!
