How ChatGPT Apps Will Change Everything Again
OpenAI's sneaky big announcement points to a voice-first way of interfacing with information across the Internet

Today on AI For Humans:
ChatGPT App Integration Means Big Things
Meta Spends $3B on ONE
Plus, Sora 2 Professional Workflow
Welcome back to the AI For Humans newsletter!
Kevin & I just wrapped an awesome week in SF for a16z Speedrun’s DemoDay and are currently chatting with a bunch of firms about seed funding for our new start-up AndThen.
It’s been a great experience and, as someone who’s spent my life between NY & LA, I think I fell in love with San Francisco. It’s a special place.
And speaking of special things that happened in SF this week…
OpenAI’s Developer Day Was Bigger Than You Think…
Amongst all the crazy Sora 2 blowback this week (covered in detail in our episode from Friday), Sam Altman got up in front of over 1000 AI developers and introduced what might actually be the future of the internet, or at least one he’d like to see.
You can now chat with apps in ChatGPT.
— OpenAI (@OpenAI)
6:07 PM • Oct 6, 2025
But Gavin, you might be saying, what exactly is so big about Apps in ChatGPT?
I mean, sure it’s cool to be able to check Zillow prices or make Spotify playlists from within that little window but… how in the heck is this the future of the Internet?
Also, don’t you remember GPTs, OpenAI’s ill-fated attempt to make its own early app store?
I do remember GPTs (RIP) but I have a few thoughts on what makes this bigger now & exactly what could take this to the next level: voice-interaction.
ChatGPT is now a MASSIVE platform…
It’s hard to imagine now, but when ChatGPT launched, almost no one inside OpenAI thought it would amount to much.
I mean, it’s just a chat interface after all, one of the oldest form factors on the Internet.
If you read any of the books about OpenAI (I recommend this one as a balanced look at their origin), you’ll see that most of the company thought the API would be where all the interesting stuff would get made.
But history turned out vastly different and the world embraced ChatGPT at a level even Sam Altman couldn’t have foreseen…
Something changes when you reach that sort of scale. You can see it in how Facebook bullied the entire internet into posting content there first in the 2010s and how YouTube’s current reach can dwarf that of Netflix or other streamers.
And something that makes ChatGPT quite different right now is that LOTS of people are starting to run their lives through the thing.
Search was the obvious entry point for cleaving off traffic from other companies (Google has its own AI search tool as well). But if the people are there… and they are… why not start thinking about the other things people do over the Internet, mostly on their phones and within apps?
However, I personally don’t think the chat interface is the thing that’s going to matter in the long run. It’s clunky & doesn’t feel like an easy way to manage everything.
Plus, Apple’s ecosystem is very robust and touch + visually interesting apps make it super easy to find and access your data.
But… there is a big shift coming. One we’ve talked about here before that will make all of this a lot easier to do across the board.
Voice-first Interactions Will Eventually Dominate AI Interfaces
The brilliant innovation of LLMs isn’t that they’re some sort of incredible new computer (they kind of are) or that they’re more efficient at information retrieval than the logical systems that preceded them (they definitely aren’t).
It’s that you can talk to them in natural language in a way we’ve never been able to do before.
But, as mentioned above, typing as an interface still feels old-fashioned. And slow as hell. However… once voice interfaces are fully unlocked, all that changes.
ChatGPT Apps are, right now, extremely underpowered and not all that exciting.
But imagine a not-too-distant future where you’re running your entire life from an OpenAI assistant that you speak with and no longer even have to think about interfacing with text or apps or really anything except your voice.
I love the clip above from Star Trek 4 (the plot involves something about time travel & whales) and how it demonstrates that of course in the future we’ll be talking to our devices and it will seem crazy that people before didn’t.
When Sam and team first premiered the Advanced Voice demo last year, you could see them getting at their ‘always-on assistant’; it was just crazy early. And it was just the interface, without the data to do all the stuff it might actually be able to do in the future.
ChatGPT Apps + Advanced Voice + some form of personalized memory still to come is absolutely the blueprint for this going forward.
It Likely Won’t Just Be OpenAI Doing This
Obviously, it’s not just OpenAI that sees this new world incoming.
Google has been pushing forward into both AI voice technology & apps within Gemini. And, Meta, Apple & Amazon are all diving in as well.
But none of them currently has an AI chat platform with 800m weekly active users, and that, for now, is a GIANT advantage. That, plus their new AI device (see below), probably gives them a decent leg-up.
I’m not sure how long it will take to make ChatGPT Apps really useful, or whether it will be the ultimate winning strategy, but I can tell you directly that voice-driven interfaces sure feel like the next big thing. And that, dear reader, is why we’re working on them as well.
That’s it for today. See you on Friday for the podcast!
- Gavin (and Kevin)
In this week’s AI For Humans: Sora 2’s Nerfing & Influencer Outrage Rises👇
3 Things To Know About AI Today
The Sam Altman / Jony Ive Device Might Take a While
It kind of flew under the radar last week, but there was a Financial Times story about the mysterious new AI device from OpenAI’s Sam Altman and former Apple designer Jony Ive struggling to get through development.
Specifically, it calls out issues with compute capacity for an always-on AI device and how the device will handle privacy.
Sam took some time (I think during Developer Day press) to tell everyone that while it’s not a this-year thing, and maybe not even a next-year thing, it will be something new.
Sam Altman says AI may need a new hardware form factor
OpenAI and Jony Ive are exploring a family of devices to make AI easier to use — this will take time (not this year, maybe not next)
The Goal is a new kind of computer built around AI, a companion through your life
— Haider. (@slow_developer)
1:40 PM • Oct 12, 2025
Meta’s New 3 Billion Dollar Man?
Zuck got his guy, if a little late.
A month or so after Mark Zuckerberg failed to lure Thinking Machines co-founder Andrew Tulloch to the Meta Superintelligence team, the WSJ is reporting that he has succeeded in bringing Tulloch over.

If at first you don’t succeed…
Tulloch turned down a reported $1.5 BILLION pay package over six years the first time, & the rumors are that Zuck doubled his original offer to get him to jump. Prob pretty hard to turn that down. He does have a long history with Meta before stints at OpenAI & Thinking Machines, so it’s a bit like going home. But that home can be MUCH nicer now.
Frostbite: A fully-generated Sora 2 short film from Dave Clark
One thing we keep saying about AI is the more you use these tools, the better you get at them. It’s a bit of a learning curve to start (you have to understand prompting, the different models, the slot-machine nature) but once you lean in, you can get a LOT out of them, especially as they improve over time.
So it’s not a surprise to see AI video master Dave Clark’s latest fully Sora 2-generated film, “Frostbite,” which, I’ll be honest, looks insanely good.
My Very First Sora 2 Short Film 🤯
100% Text to Video
This is FROSTBITE
Created with Sora 2 Pro
Edited and Directed by Me
— Dave Clark (@Diesol)
9:07 AM • Oct 11, 2025
We 💛 This: Sora 2 in Professional Workflows
If you’re interested in how AI video can be brought into a real-world professional workflow, I implore you to spend the full 20 minutes (or 10 at 2x speed) and watch the video below from OpenAI’s Developer Day.
The team working on Critters, a fully generated AI movie that OAI plans to help bring to theaters next year, walks through their custom tool for going from concept sketches to full-blown production using Sora 2 in the API.
This 100% feels like the future of filmmaking and it’s fascinating to see it broken down.
Are you a creative or brand looking to go deeper with AI?
Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you’re an org, consider booking Kevin & Gavin for your next event!