New ✨ AI Movie Generation by Meta

The Zuckaissance sets its sights on Hollywood with new research surrounding AI movie-making solutions. But who knows when it will come out.

Today on AI For Humans The Newsletter!
The golden age of video or another Hollywood cost-cutting measure?
OpenAI ships a big quality of life improvement for ChatGPT
Pika surprises us with a new approach to generative video
Plus, our can’t-miss AI feature of the week!

Welcome back to the AI For Humans Newsletter!

This week’s big story is Meta Movie Gen.

If you launch a new video model in a Moo Deng world you’d better have some baby hippos in your teaser videos!

Movie Gen is Meta’s brand new text-to-video AI model that, in early examples, compares remarkably well to OpenAI’s Sora. And, at least for now, it seems slightly better than models like Runway’s Gen-3, Kling, Luma Labs, or Minimax.

Even more exciting, Meta runs through some remarkable examples of using the model to not only generate footage from scratch but also let the user manipulate that footage in very specific ways.

Using what looks to be a pretty good in-painting feature (changing just a part of the image), you will be able to make subject-specific changes to the video outputs. See the example below from Meta.

This is an incredibly powerful tool as the slot machine nature of AI video often means having to generate dozens (sometimes hundreds) of videos to land on one you’re happy with.

They’re promising a few more cool things as well, specifically being able to put you — as in your picture — into specific video generations and the examples they show off are quite extraordinary.

But it’s important to know that none of this has been released to the public yet.

Much like Sora, we have no idea when this will actually come out. There’s a ton of information in the VERY long technical document, as well as a lot of talk about safety, which means we might not see this for a long time.

-Gavin & Kevin

3 Things To Know

Canvas: A New Direction for ChatGPT
Announced Friday, Canvas is a new UI panel in the ChatGPT experience where the output of your generation, like your document or body of code, is presented separately. It puts the assistant conversation alongside the output for the first time, similar to Claude Artifacts. If you’re on a paid plan of ChatGPT, you can take it for a spin today by selecting GPT-4o with Canvas from the model selector dropdown.

AI Video Keeps Getting More Weirdly Excellent
AI video platform Pika launched Pika 1.5, introducing “Pikaffects” like melt, inflate, crush or explode. They also launched “Big Screen Shots” for specific camera actions like bullet time, dolly shots and crane movements. It’s a more on-rails approach to generative video than the open-ended prompting of Runway.

Hey Cuisinart 4 Slice Stainless Steel Toaster…
ChatGPT’s killer feature Advanced Voice is now available to developers via API, bringing compelling voice support to… just about anything. Here’s a cool example of its use in a language learning app, but don’t be surprised if a growing number of apps and household appliances start calling out for your attention.

What it is: A magic wand to fix image details. Be it a beautiful landscape photo or a portrait of ye hot dogge. It’s an example of inpainting, and as easy as selecting the area of the image, providing a prompt (or not), and clicking Generate.
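For the curious, the core idea behind inpainting can be sketched in a few lines: only the pixels inside the user-selected mask are eligible for regeneration, while everything outside the mask is preserved untouched. This is a simplified illustration (real tools send the image, mask, and prompt to a diffusion model); the `apply_inpaint` helper and the toy pixel grids are ours, not from any particular product.

```python
# Toy sketch of inpainting compositing: keep original pixels where the
# mask is 0, take the model's generated pixels where the mask is 1.
# Images are represented as simple 2D lists of pixel values.

def apply_inpaint(image, mask, generated):
    """Composite the model output into the image, but only inside the mask."""
    return [
        [gen if m else orig for orig, m, gen in zip(img_row, mask_row, gen_row)]
        for img_row, mask_row, gen_row in zip(image, mask, generated)
    ]

image = [[1, 1, 1], [1, 1, 1]]          # original pixels
mask = [[0, 1, 0], [0, 1, 0]]           # user-selected region (center column)
model_output = [[9, 9, 9], [9, 9, 9]]   # what the model generated

result = apply_inpaint(image, mask, model_output)
# Only the masked center column changes: [[1, 9, 1], [1, 9, 1]]
```

That "only regenerate what you selected" guarantee is exactly why the feature matters: you keep the 95% of the image you already liked.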

Why we love it: It’s a must-have feature for professional and consumer tools alike! Many apps like Midjourney now have a version of this built in, and it’s a great example of generative AI speeding up the most common creative workflows.

Are you a creative or brand looking to go deeper with AI?
Three ways AI For Humans can help:

Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you’re an org, consider booking Kevin & Gavin for your next event!