How 'AI Actors' Illustrate The Need For Human Creative Input in AI Output
Why AI creative messaging goes wrong & how we're trying a new path forward with our upcoming start-up AndThen.

Today on AI For Humans: The Newsletter!
How AI ‘Acting’ Gets Messaging Wrong
ChatGPT’s New Mainstream AI Agent
Plus, Kling 2.5 Turbo Is SOTA AI Video
Welcome back to the AI For Humans newsletter!
A story crossed my desk yesterday that made me think about why I do what I do here and, specifically, why our new start-up AndThen is different from most AI media plays.
If you’ve been wondering what AndThen is, scroll down & you’ll get some info. But also…expect an email here on Wednesday for our beta launch with more specifics.
But first, we need to talk about ‘AI Actors’…
This news broke in Deadline Saturday and set the AI rage machine aflame yet again.
Particle Six (an AI + Trad production company) is bringing “AI Actress” Tilly Norwood to Hollywood & according to the headline, she seems to have quite a bit of heat.

The backlash to this particular headline has been significant.
The video below introduces us to Tilly through a series of VEO3 (mostly?) clips and gives you an idea of how Particle Six is marketing her and their services.
For some context here: Particle Six founder Eline Van der Velden was interviewed on stage at an event in Zurich, where she said the following:
“We were in a lot of boardrooms around February time, and everyone was like, ‘No, this is nothing. It’s not going to happen’. Then, by May, people were like, ‘We need to do something with you guys.’ When we first launched Tilly, people were like, ‘What’s that?’, and now we’re going to be announcing which agency is going to be representing her in the next few months,” said Van der Velden.
Of course, because we live in the world we live in, headlines get written that capture the conversation but then amplify the message for clicks. And, whoo boy, does AI rage cause those clicks. Many people got very, very, very mad about this.
Not as mad as they were about the Friend ads in the NYC subway but still very mad.
So what, if anything, could Particle Six (or Deadline) have done to make this feel less like an ‘attack on humanity’? And maybe more like something interesting for creative people, or perhaps even an opportunity to invent something new?
How To Put A Human Face on AI Creation
Look, I’m not here to tell you that AI actors aren’t going to happen in some form.
It’s clear that AI is moving fast in Hollywood & that more of these sorts of stories are going to break through. I know getting work as an actor was hard even before performers had to compete with companies like this. Very, very few people ‘make it’, which for actors might just mean earning a living wage.
However, this narrative can be changed. And it should be.
Because truthfully, it’s not some random rogue AI making these things. Again, like last week, it’s us… the humans who are responsible for these inventions. But also, it’s us who are responsible for how we talk about these things to the world.
In my opinion, AI creations need a human face, especially now. I wonder how this story would’ve been different if a named actor (prob too risky career-wise) or some other human being had been the face & personality behind Tilly Norwood. What if an actual young actress started her career seeing this sort of opportunity and said “Hey y’all, this is me & I’m going to use this new tech to open myself up to new things…”?
There still would be flame wars but, maybe, you’d get some sense of understanding. Some idea that, yes, human creativity isn’t being replaced, but how we see it might be changing.

This charming human is VERY well known for playing a ‘generated’ character.
To be honest, this idea is not even all that new. Yes, the ‘generative AI’ part of it IS new but Hollywood has been giving us animated films which have ‘generated’ characters since nearly the dawn of cinema. Some of the most beloved films of the last 25 years have been made by Pixar, a studio that literally uses computers to generate entirely new worlds.
And, of course, Generative AI is different (mostly because of the underlying training problem we’ve discussed ad nauseam) but there is a way to bring humans into the mix.
What We’re Doing At AndThen Differently
And thus, we come to our new start-up AndThen.

An early look at our AndThen homepage!
AndThen is a weird, new thing. The basic idea is that you’ll have a live, real-time conversation with an AI character (sometimes multiple!) in a voice-first experience where there’s something to do. They’re fun, interactive & because it’s audio, you find yourself getting lost in the experience. And, because it’s driven by AI, each experience is unique.
This isn’t a re-hash of podcasts or movies or some other format. We wanted to do something that’s only possible with AI technology & feels like the start of something different.
But, maybe more importantly to me, we also wanted to make it crystal clear that these are ‘human-crafted’ experiences. What makes each of these cool & different is that a human comes up with the scenario & character, sets the wheels in motion and more.
Humans will have bylines on AndThen experiences, much like a writer does on an article, & they could use their own voices if they want.

One of our AndThen experiences coming into beta on Wednesday!
We think there’s a large opportunity here to make really interesting and compelling experiences and, assuming we get to keep making it, a platform for everyone to do this as well.
As mentioned earlier, I’ll be sending out the RARE second email this week when AndThen’s beta goes live. We’d love for you to try it and give us feedback. It’s early, but we think it’s the sign of something cool.
That’s it for today. See you on Friday for the podcast!
- Gavin (and Kevin)
In this week’s AI For Humans: New AI Drugs & The Massive New OpenAI / NVIDIA deal👇
3 Things To Know About AI Today
ChatGPT Pulse Is OpenAI’s First Mainstream AI Assistant
This week (right after the AI For Humans episode taped OF COURSE) OpenAI introduced Pulse, a new service via ChatGPT that suggests stuff for you based on your usage of their service.
Now in preview: ChatGPT Pulse
This is a new experience where ChatGPT can proactively deliver personalized daily updates from your chats, feedback, and connected apps like your calendar.
Rolling out to Pro users on mobile today.
— OpenAI (@OpenAI)
5:05 PM • Sep 25, 2025
This is 100% an AI agent wrapped in cute little Instagram-style graphics and, once it rolls out beyond $200-a-month Pro users, I think it will be impactful for many, many people. It’s also another potential spot for OpenAI to lock in users if they can deliver significant value.
And it’s yet another reason my family is going to have to get individual ChatGPT Plus accounts or maybe OpenAI could FINALLY introduce a Family account?
Are We In an AI Bubble? A Strong Contrary Opinion Says No
Anthropic researcher Julian Schrittwieser has worked everywhere in AI and wanted to step into the ‘Are We in an AI bubble?’ discussion.
As a researcher at a frontier lab I’m often surprised by how unaware of current AI progress public discussions are.
I wrote a post to summarize studies of recent progress, and what we should expect in the next 1-2 years:
julian.ac/blog/2025/09/2…— Julian Schrittwieser (@Mononofu)
10:58 PM • Sep 27, 2025
The main point here is no, we are not in a bubble, at least when it comes to model improvement. In fact, just the opposite. Julian points to the METR benchmark, which shows that every seven months, AI models double the length of tasks they can handle. Which, when you zoom out, definitely looks like an exponential curve.
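To make the doubling claim concrete, here’s a minimal sketch of why a fixed doubling period implies an exponential curve. The starting task length and the seven-month period here are illustrative assumptions for the math, not actual METR data:

```python
def task_length(start_minutes: float, months: float, doubling_months: float = 7.0) -> float:
    """Task length a model can handle after `months`, if it doubles every `doubling_months`."""
    return start_minutes * 2 ** (months / doubling_months)

if __name__ == "__main__":
    # Starting from a 1-hour task: 2 hours after 7 months, 16 hours after 28 months.
    for m in (0, 7, 14, 28):
        print(f"month {m}: {task_length(60, m):.0f} minutes")
```

Because the exponent grows linearly with time, the curve doubles again and again at a constant interval, which is exactly what an exponential looks like when you zoom out.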
However, after listening to Derek Thompson’s very good episode of Plain English with Paul Kedrosky about the financial side of the AI bubble, it’s obvious that we as a society might have to separate AI capability from AI spending for our bubble talk.
The AI Paper You Must Read: Video Models Are Zero Shot Reasoners
AI naysayers (or at least those who think we’re not moving as quickly as others believe) suggest that we’ll need additional insights beyond LLMs to move to AGI & superintelligence. Well, this new paper from Google’s VEO3 team might pave the way.

Video Models Understand How The World Actually Works??
Essentially, advanced video models like VEO 3 are showing emergent behaviors where they innately understand physics & other aspects of the physical world. The paper is slightly nerdy but a worthwhile dive into how visual AI models are likely the next step toward advancement.
We 💛 This: Kling 2.5 Turbo AI Video
Yes, another week and another update to an AI video model.
This time, it’s Kling’s turn as they make 2.5 Turbo cheaper, faster, (harder), stronger. Woof, I really shoehorned in that Daft Punk reference didn’t I?
⚡ Introducing Kling AI 2.5 Turbo Video Model!
Next-Level Creativity, Turbocharged! Now at Even Lower Price!
— From Kling AI Creative Partner @wildpusa— Kling AI (@Kling_ai)
10:47 AM • Sep 23, 2025
I feel a little bad, as I hadn’t played with it much when we recorded the show (see my dog breakdancing clip, which didn’t work SO well), but since then I’ve been doing a few renders for AndThen with Kling 2.5 and it’s next-level at consistency & motion.
Below are some of the best examples we’ve seen online (many with prompts!)
Are you a creative or brand looking to go deeper with AI?
Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you’re an org, consider booking Kevin & Gavin for your next event!