Today on AI For Humans:
OpenAI’s New Tools Combine Powerfully
DeepSeek v4: Less Powerful Than SOTA
Plus, GPT-5.5 360 Walkthrough History!

Welcome to the AI For Humans newsletter!

OpenAI finally had their big week with two different massive launches: GPT-5.5, their new flagship model, and ChatGPT Images 2.0, which catapulted ahead of Google’s NanoBanana Pro in AI image generation.

Both of these are state-of-the-art releases that bring OpenAI back to the front of the pack, but it’s the combination of them that opens the door to an entirely different way of thinking about AI coding and creation.

In my own experiments and those of others (see the bottom of this newsletter), I continue to be impressed with how AI models aren’t just getting better but are now letting us humans do previously impossible things… let’s get into it!

Please support AI For Humans by learning about our sponsors below:

We’ve partnered with HP & Intel to promote the Zbook Fury Pro workstation.

For more info & to support A4H, click here: https://bit.ly/4uapNHs

GPT-5.5: More Powerful But Also… Faster

Yes, GPT-5.5 is the most powerful AI model to date and, in our tests over the last few days, we’ve found it to be a remarkable upgrade from 5.4.

But the bigger deal might be that they’ve found a way to make it faster too.

If you’ve used any thinking model (especially OpenAI’s GPT-5.4), you know the feeling of waiting while the model works through its process. This new version seems to cruise through its thinking much faster.

They also found a way for it to use significantly fewer tokens (which probably accounts for some of the speed increase), which is good because the model is 20% more expensive than Opus 4.7 to use.

But, weirdly, Sam Altman only appeared in one livestream this week and it wasn’t for their new flagship LLM…

ChatGPT Images 2.0 Is Shockingly Good

Yes, Sam livestreamed the drop of their new image model, ChatGPT Images 2.0, and good golly it is incredible.

There’s a ton of great examples (click on each word) of what it can do but the easiest way to catch up on everything might be our mid-week episode below…

Yes, it’s very good at recreating reality, but the thing that really got me was the ability to now call the ImageGen tool directly from within their Codex app.

Often when building apps or vibe coding, you end up getting the same-ish looking outputs and it’s created a swath of AI apps that feel very sloppy.

Now, you can not only invoke the ImageGen tool within your coding projects but also use it to generate new, unique-looking front-end designs.

The tool can generate a mock web page for your new app; you can then use it to create the individual art assets and hand Codex those images to build the real thing.

This may sound complicated on the surface but it’s remarkably easy overall to make this sort of thing function, especially when you ask Codex to just walk you through it.
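To make the shape of that workflow concrete, here’s a minimal Python sketch. Everything in it is hypothetical — the app name, the asset list, and the `gpt-image-2` model identifier are placeholders I made up, not confirmed product names — but it shows the basic idea: describe the assets you need, generate prompts that share one style description so the outputs look cohesive, then hand the resulting images to Codex as design references.

```python
# Hypothetical sketch of the "ImageGen assets -> Codex" loop described above.
# The app name, asset list, and model identifier are placeholders, not
# confirmed product or API names.

def build_asset_prompts(app_name: str, assets: list[str]) -> list[str]:
    """Turn a list of needed art assets into per-asset image prompts
    that share one style description, so the outputs look cohesive."""
    style = ("flat, modern UI style, consistent color palette, "
             "transparent background")
    return [f"{style}. Asset for an app called {app_name}: {asset}."
            for asset in assets]

prompts = build_asset_prompts(
    "BabylonWalk", ["logo", "navigation icon set", "hero illustration"]
)

for prompt in prompts:
    print(prompt)
    # Each prompt would then go to the image tool, e.g. (untested placeholder):
    #   client.images.generate(model="gpt-image-2", prompt=prompt,
    #                          size="1024x1024")
    # ...and the saved images get handed to Codex to build against.
```

The one design choice worth stealing even if you do all of this inside the Codex UI: keep a single shared style string across every asset prompt, which is what keeps a vibe-coded app from looking like a grab bag of mismatched clip art.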

Love this newsletter? Forward it to one curious friend. They can join in one click.

AI As The Sum Of All Its Agentic Parts

The combination of these tools proves to me that AI has fully entered the next stage of development: a world where the tools combine into something much more powerful than they are individually.

There’s been a ton of talk about AI ‘superapps’ lately and I think I’m finally beginning to see the vision.

We’ll likely get the next version of this at Google’s upcoming I/O event in May but both OpenAI and Anthropic have mentioned shipping more very, very soon.

That’s the last thing we want to leave you with… OpenAI’s Chief Scientist Jakub Pachocki had this to say:

Things are moving ever faster. Get involved and try out these new Codex tools today and be sure to @ me and show me what you make!

-Gavin

This week on AI For Humans: GPT-5.5 Has Landed and…It’s Very Good!👇

3 Things To Know About AI Today

DeepSeek v4 Is Finally Here… And It’s Fine?

This week, the big new model from Chinese AI juggernaut DeepSeek launched and… you probably didn’t hear much about it.

When the last big DeepSeek model launched, it tanked NVIDIA and the entire stock market and this time… it just didn’t.

That’s because, on the whole, the new DeepSeek is underwhelming when compared to GPT-5.5 & Opus 4.7.

In fact, it doesn’t even top the open-source models list.

Obviously, it’s still a huge deal that an open-source model trained on lesser GPUs can achieve this sort of performance at a much lower price point.

But DeepSeek v4 points to the struggle Chinese AI companies face in keeping up with their American counterparts due, in part, to a lack of compute power.

I find myself thinking a fair amount about what the end game for all this looks like and how much AI power I’ll personally need in a year or two vs what sort of AIs will be employed to solve the world’s problems.

Eventually, the idea that incredibly powerful open source models can be run locally (on your own computer) will revolutionize the entire AI industry.

AI Agents (For Now) Aren’t Cheap As Human Replacements

The story the big AI labs have been selling to large enterprises is that (especially with software engineers) there will soon be a world where you could replace (er, ‘amplify’) entry level employees with 24/7 AI agents.

But, at least for right now, it seems that the AI agents might be more expensive than their human counterparts, mostly due to the costs of state-of-the-art AI model tokens.

Axios’ story above has a good round-up of large companies’ struggles not only to pay for these API costs (this is why Anthropic is currently clocking a $30B ARR) but also to figure out what to do with all this new software.

We’re currently in the bumpy in-between times, and maybe it’s not a terrible thing that the march toward mass unemployment slows a little bit as we figure it out.

Dwarkesh Patel Gets The NYT Treatment

We’ve been fans of AI insider and podcaster Dwarkesh Patel for some time and it’s cool to see him crossing over into the mainstream with this large profile in the New York Times.

His latest interview with NVIDIA CEO Jensen Huang is a must-listen for anyone interested in the current state of the AI race and the coverage of it likely brought this profile into being.

Did you wake up a loser? We don’t think we did.

We 💛 This: GPT-5.5 Google Maps For History

As mentioned above, GPT-5.5’s coding abilities are great but what you might not realize is just how much faster it is than it was before.

It may still take a while to learn Slay The Spire 2, but watching how quickly it can spin up demos of random ideas you have is pretty mind-blowing (see the second half of Friday’s episode).

It also means that people are getting significantly more ambitious in their vibe coding projects.

Did I know that I needed a Google Maps 360 walk-around of the Hanging Gardens of Babylon? No.

Did I immediately go and try to create one of these myself? Yes.

Are you a creative or brand looking to go deeper with AI?
Join our community of collaborative creators on the AI4H Discord
Get exclusive access to all things AI4H on our Patreon
If you’re an org, consider booking Kevin & Gavin for your next event!

Keep Reading