Today on AI For Humans:
The AI Design Moment Is Here
Is Anthropic’s new Mythos model the BIG one?
Plus, The Best Seedance 2.0 Tutorial
Welcome to the AI For Humans newsletter!
There’s an X post that sent shock waves through the AI community this weekend and, while it’s pretty nerdy, it made me think about a lot of stuff.
And, at 12.1 million views and counting, it seems others felt the same way.
Cheng Lou is a software engineer who’s worked at Facebook, Apple, and now Midjourney, and who has spent his career thinking about how code and interfaces collide.
Now, he’s turned his attention towards how text is displayed across the web.
His new release Pretext uses AI models that have been trained on all the weird quirks of how different browsers draw text, covering every font and language, including complex ones like Chinese, Japanese, Korean, and right-to-left scripts like Arabic.
In technical jargon, this gives it pixel-perfect accuracy without needing any CSS at all.
You can render text smoothly and directly on Canvas or WebGL, perfect for dynamic content that generates (and changes) layouts on the fly.
For us normal humans… it means cool new ways to make static text more interesting.
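For the extra-curious, here’s where that browser variance lives in the standard path. The Canvas 2D calls below (`font`, `fillText`, `measureText`) are real browser APIs, but `drawHeadline` and the stub context are hypothetical stand-ins for illustration only, not Pretext’s actual interface:

```javascript
// Sketch of plain Canvas text drawing: no CSS involved, but every browser
// rasterizes the font and reports metrics slightly differently.
function drawHeadline(ctx, text, x, y) {
  ctx.font = "bold 32px sans-serif";    // rasterized differently per browser/OS
  ctx.textBaseline = "alphabetic";
  ctx.fillText(text, x, y);             // pure canvas calls, no stylesheet
  return ctx.measureText(text).width;   // width also varies browser to browser
}

// Minimal stub context so the sketch runs outside a browser (e.g. under Node).
const calls = [];
const stubCtx = {
  font: "",
  textBaseline: "",
  fillText: (t, x, y) => calls.push([t, x, y]),
  measureText: (t) => ({ width: t.length * 18 }), // fake per-character metric
};

console.log(drawHeadline(stubCtx, "AI For Humans", 20, 60)); // prints 234
```

As Cheng describes it, Pretext’s trick is that the model has learned those per-browser quirks, so the rendered pixels and measured widths stop drifting; this stub just shows the standard path it improves on.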
But why has something this nerdy taken off? And why are designers pooh-poohing it?
More importantly, what does this mean for you?
Please support AI For Humans by learning about our sponsors below:
1,000+ Proven ChatGPT Prompts That Help You Work 10X Faster
ChatGPT is insanely powerful.
But most people waste 90% of its potential by using it like Google.
These 1,000+ proven ChatGPT prompts fix that and help you work 10X faster.
Sign up for Superhuman AI and get:
1,000+ ready-to-use prompts to solve problems in minutes instead of hours—tested & used by 1M+ professionals
Superhuman AI newsletter (3 min daily) so you keep learning new AI tools & tutorials to stay ahead in your career—the prompts are just the beginning
Massive Hype Meets Blowback
The demos that Cheng shared are undeniably cool to see (especially for those not drenched in the history of text on the web) and below I’ll share the incredible things people have already made.
But one of the more interesting aspects of Pretext is the blowback it’s gotten from those who’ve worked in this space for a bit.
Even Cheng himself acknowledges that it’s gotten a little too much attention for what it is, and this post made me chuckle.
But the hype and the blowback get to the heart of both the biggest problem and the most exciting thing about the current rise of AI coding, and really AI at large…
AI is making what was institutional knowledge into general knowledge faster than we’ve ever seen before.
On one hand, if you’re outside of the design engineering world, you’ll see something like this and say ‘Whoa! I can do that now? Maybe that side project I have can look much cooler!!’
However, from inside, you might say: ‘This isn’t as big a deal as it seems because <insert person> did this ten years ago and sure, it’s cool but I mean it hasn’t changed my day-to-day life’.
And, weirdly, we humans can feel one way about AI hitting our own specialty and another way entirely about a field we don’t know.
I have a friend who’s a writer who HATES the idea of AI writing anything and believes it will never be able to capture what he does.
On the other hand, he loves AI image and art tools and often uses them to bring things from his imagination to life.
Like I discussed last week, my ultimate take is that this becomes a numbers issue.
Sure, there will still be specialized knowledge workers but as tools like this spread to the masses we’ll see things that never would’ve come from the inside.
Love this newsletter? Forward it to one curious friend. They can join in one click.
The Collective Imagination of Humanity is VAST
As I’ve said before, I suspect what makes humans special isn’t the way our brains function but our ability to generate novel ideas.
And, of course, our ability to build off of other humans’ ideas in unique ways.
When a tool like this comes out, the most exciting thing to me is to see that collective ability kick into action across all the different use cases that each individual human comes up with.
You certainly don’t have to be a design engineer to see the incredible creativity on display in the following:
And these examples are from mere days after the tool was released into the world at large… who knows what we’ll see in the coming weeks.
Oh and, yes, the ‘going viral’ part does really matter.
Sure, it’s what gets hackles up, because it feels like it’s about people trying to hijack attention and post for clicks.
But it also brings a much larger audience and almost forces you, the human, to sit up and take notice. And maybe actually try it for yourself.
One of Many Of These Moments
This isn’t the first of these viral tool moments and, if this one doesn’t tickle you or make you think about what’s possible in your world, that’s totally fine.
The important part is that there will be many, many more of these moments and one of them might get you off your butt and make something cool.
When we talk about AI speeding up, it’s not just the models getting better.
It’s also speeding up how we humans can think about our own creativity and get what pops into our brains out into the world.
See y’all next week for a new episode!
-Gavin
Last Friday’s AI For Humans: OpenAI’s new Spud model needs to be good👇
3 Things To Know About AI Today
Anthropic’s Mythos Might Be The Step-Change We’ve Been Waiting For
We rarely get big leaks in the AI world. The labs have gotten pretty buttoned up when it comes to what they’re working on next.
That changed late last week, when Anthropic admitted that a presentation about their new flagship model, Mythos, had been accidentally leaked.
The above article is paywalled but the basics are:
This is an entirely new level of AI model sitting ABOVE Opus (which we assume they will continue)
The new model has dramatically higher scores on ‘software coding, academic reasoning & cybersecurity’ than the Opus models.
It’s hugely expensive for Anthropic to serve and will cost a lot for you to use.
There’s actually an X post out there that claims to have the full leaked release and, to be honest, it looks pretty valid if you want to see for yourself.
The supposed date on this is Q3 of this year so we shall see what happens. Sounds like ain’t nothin’ slowing down anytime soon.
Claire Vo Lays Out Her OpenClaw Journey on Lenny’s Pod
One of the fun things about being an AI YouTuber is that my feed fills up with a whole lot of other YouTube creators talking about AI stuff.
One of my favorites right now is Claire Vo, who runs the How I AI podcast and just spoke with Lenny Rachitsky on Lenny’s Podcast.
What I appreciate about Claire’s work (here with OpenClaw but also in general) is her ability to make it understandable for the average person but also not to sugarcoat it or to hype it up. OpenClaw can be hard but Claire makes it clear here why it matters.
Open Source & Local AI Video Seinfeld
Yes, Sora has been cancelled for real and that leaves only a few major players in the AI video market (Google, Bytedance and maybe Kling) but LTX 2.3 continues to push forward the open source world and… it’s getting good.
This isn’t easy to pull off (ComfyUI can still be intimidating) but just seeing what’s possible on locally-running consumer hardware already is fascinating.
We 💛 This: Theoretically Media’s Short Film & Seedance 2.0 Tutorial
Our YouTube buddy Tim (aka Theoretically Media) dropped a very cool Seedance 2.0 short film featuring his signature AI actor FlameGirl.
But even better is Tim’s full breakdown video on how he made it, along with a lot of the learnings from the process. The video is amazing; make sure you follow Tim for more.
Speaking of Seedance 2.0 prompts, Techhalla shared a full thread of them that you should try as well.
Are you a creative or brand looking to go deeper with AI?
Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you’re an org, consider booking Kevin & Gavin for your next event!