OpenAI's Video Model Has Arrived! 🎥

We finally have Sora... does it live up to the hype?

Today on AI For Humans The Newsletter!
First impressions of OpenAI's Sora release
A very watchable 10-minute AI Batman film
And a whole lot of updates from those racing to keep up with OAI 😵‍💫
Plus, our can't-miss AI feature of the week!

Welcome back to the AI For Humans Newsletter!

OpenAI's Sora is here! And it's… pretty good?

For Day 3 of the 12 Days of 'Shipmas', OpenAI yesterday announced that Sora, their long-awaited AI video model, was going live. And while the launch was not without bumps, it's now here and mostly usable IF you have a paid ChatGPT account.

Sora Generation Made By Gavin With Their Loop Tool

Reactions have been somewhat mixed, especially given that Sora's servers were massively overloaded yesterday AND regular ChatGPT users only get around 50 generations per month. If you're willing to shell out $200 for ChatGPT Pro, though, you get unlimited generations plus image-to-video generations of people, which, surprisingly, Sora has limited to their most expensive tier.

It's worth diving into their blog post about the new features because honestly they make Sora super interesting and powerful. The Loop feature automagically creates a loop of the best part of your video, and the Blend feature brings two videos together in the best way the AI can figure out.

We'll be spending more time with this today and tomorrow and will have more to say on the show this week, but check out Gavin's first reactions here.

-Kevin & Gavin

3 Things To Know

Previously, On 12 Days of Shipmas
OpenAI is spoiling us with 12 days of new releases, and 9 days remain. Day one saw the release of the full o1 model and a new $200/mo ChatGPT Pro subscription that offers unlimited access to models plus a slightly more powerful o1. Day two brought a new approach to fine-tuning models for specific jobs.

A Three Person Batman Film
Three people + three weeks + $200 of Kling credits yielded this very watchable 10-minute Batman film. Everything about it encapsulates the state of AI video today. It's clearly opening up production capabilities that once required larger teams… but is this fair use? Warner Bros thinks not, and it's already been taken down from all but Reddit.

Plus All The People Trying To Steal OpenAI's Thunder
xAI launched a new image model that's pretty darn good. Runway's Act-One, the feature that overlays a driving performance onto another image, now works with video. ElevenLabs launched an entire toolkit for Conversational AI: interact with your favorite ElevenLabs voices in real time, and augment them with your own knowledge base + function calling. And Google DeepMind is showing off another extremely impressive model that generates 3D game environments.

We 💛 This - Viggle

Tools like Runway Act-One and Hedra take the facial expressions of a human performance and overlay them onto a new character, while Viggle does the same for full-body performances.

Simply take an input video of a performance - someone dancing around, moving through a scene, etc. - provide an image of a new character, and it'll composite that character into the original scene.

The outputs can be a little jankier than Act-One's, but you can create 10 of these videos per day FOR FREE 🤯 What a time to be a creative.

Are you a creative or brand looking to go deeper with AI?
Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you're an org, consider booking Kevin & Gavin for your next event!