OpenAI burning $14B+ in 2025?!
PLUS: GPT-4 died too young...

Welcome, humans.
Today, at The Neuron, weāre pouring one out to celebrate the life of our first digital homie: GPT-4. OpenAI quietly pulled the plug on GPT-4 this week, officially retiring the model that changed everything just 2 years after its release on March 14, 2023.
For perspective, that's less time than it takes most software to get out of beta. GPT-4 reshaped entire industries, launched thousands of startups, and sparked an AI arms race that's still acceleratingāall before reaching what would have been its toddler years in human terms.
The legacy lives on through GPT-4o, the rise of research models across the industry (and GPT-5?). But it's worth pausing to appreciate how quickly we normalize revolutionary technology. In March 2023, GPT-4 was literal magic. Today, it's yesterday's news.
goodbye, GPT-4. you kicked off a revolution.
we will proudly keep your weights on a special hard drive to give to some historians in the future.
– Sam Altman (@sama)
2:23 AM • May 1, 2025
Here's what you need to know about AI today:
Just how viable IS OpenAI?
Salesforce launched tools to fix "jagged intelligence" in AI systems.
Computer scientists questioned whether AI truly understands language.
Studies showed AI could persuade humans more effectively than other people.

OpenAI's $20K AI doctorate might be groundbreaking...if the company can afford to keep the lights on.
OpenAI has been on an adoption tear recently (we're talking 800M-1B weekly users), but behind a wave of flashy new features (image generation, memory, new coding AI, new tax tool) lurks a financial time bomb that could reshape the entire AI industry.
Consider whatās coming next: a $20K per month doctorate-level AI system powered by the o3 and o4 models.
According to The Information (who spoke with early testers), this ultra-premium AI can invent novel research ideas by connecting concepts across scientific fields, basically pulling the Nikola Tesla move of bridging domains for breakthrough innovations.
For example, scientists at Argonne National Laboratory who tried these models reported being seriously impressed with how they accelerated experiment planning.
Let's talk about that $20K monthly price tag, though. Who's supposed to pay that? Actually, plenty of people:
Oil and gas execs who'd happily replace a $240K/year scientist.
Pharma companies racing for patents.
Hedge funds seeking trading advantages.
Any research organization wanting to compress years of work into months.
But here's where things get dicey. Finance journalist Ed Zitron has been digging into the publicly reported info on OpenAI's books, and the numbers are...concerning:
OpenAI just "raised" $10B (with $30B more promised) at a $300B valuation; that's 75x its 2024 revenue. For context, that's a bigger gulf than Tesla at its peak market cap... though perhaps modest compared to the rumored $7 trillion Altman once supposedly asked for to guarantee AGI.
Meanwhile, the company is burning cash faster than a Silicon Valley bonfire:
Spent $9B to lose $5B in 2024.
Projected to spend $28B+ in 2025 (losing $14B+).
Spends about $2.25 for every $1 it makes (roughly $9B of spending against $4B of revenue).
Expects to burn through $320B between 2025-2030.
What's driving this cash incinerator? Computing costs. Their new o3 reasoning model reportedly costs $30K per task to run. Yes, you read that right.
Ed says this matters because if OpenAI implodes, it takes the entire AI ecosystem with it. The company has become systemically important to tech, with its financial tentacles wrapped around Microsoft, Oracle, and GPU makers like NVIDIA.
What might trigger a collapse? Several possibilities:
SoftBank can't secure the remaining $30B it promised (say a global recession hits).
OpenAI fails to convert to a for-profit by the end of 2025 (the Elon lawsuit could see to that).
They run out of GPUs (Altman's been publicly begging for more).
Their data center partners (former crypto miners with no AI experience) can't deliver.
The irony? The very companies who'd benefit most from that $20K/month super-AI might not get the chance to use it if the financial foundation crumbles first.
Our take: While AI capability continues to advance dramatically, the economics supporting it remain fundamentally broken. Something has to give: either AI capabilities plateau until the economics improve, or we're headed for a spectacular crash that reshapes the industry.
And despite confident predictions from boosters and doomers alike, no one really knows which variable gives first. Between shifting geopolitics and sheer economic reality, it's a fascinating, high-stakes tangle with countless up-in-the-air variables.
What does this mean for you? If you're building a business or career on the AI boom, diversify your options. Don't assume the exponential growth continues uninterrupted. And maybe, just maybe, be skeptical when someone tells you their AI will solve all your problems for just $20K a month. At least until we can actually try it out.

FROM OUR PARTNERS
The Tools, Templates & Playbook for Your AI Consultancy
Remember when AI experts were just "that person who knows ChatGPT"?
Well, companies are now dropping serious cash: the AI consulting market is exploding 8x, from $6.9B to $54.7B by 2032.
The problem? Everyone wants to be an AI consultant, but most are just winging it. No system, no frameworks. Translation? They're leaving money on the table while working twice as hard.
Our friends at Innovating with AI have welcomed 700 students into The AI Consultancy Project.
Here's why they like it:
Tools to find clients and deliver top-notch services.
A 6-month plan for a 6-figure consulting business.
First clients in as little as 3 days.

Prompt Tip of the Day
The Assumption Hunter hack turns ChatGPT into your reality-check wingman. I dumped my "foolproof" product launch into it yesterday, and within seconds it flagged my magical thinking about market readiness and competitor response, both high-risk assumptions I was treating as facts.
Paste this prompt:
"Analyze this plan: [paste plan] List every assumption the plan relies on. For each assumption:
Rate its risk (low / medium / high)
Suggest a specific way to validate or mitigate it."
This'll catch those sneaky "of course it'll work" beliefs before they catch you with your projections down. Way better than waiting for your boss to ask "but what if...?"
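Prefer to run the reality check in code (say, over a whole folder of plan docs)? Here's a minimal sketch using the OpenAI Python SDK; the model name and sample plan below are placeholders we made up for illustration, so swap in whatever you actually use.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical plan text; in practice you'd read this from a doc or file.
plan = "Launch our new app in Q3 with a $50K ad budget and two engineers."

# The Assumption Hunter prompt from above, with the plan pasted in.
prompt = (
    f"Analyze this plan: {plan}\n"
    "List every assumption the plan relies on. For each assumption:\n"
    "- Rate its risk (low / medium / high)\n"
    "- Suggest a specific way to validate or mitigate it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```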
Need to catch up on our recent tips? Check out our Prompt Tips of the Day April digest to see them all in one place!

Treats To Try.
Google's Gemini chatbot added image editing features allowing users to modify both AI-generated and personal images, with multi-step editing, better prompt control, and SynthID watermarks for security.
Anthropic launched "Integrations" enabling Claude AI to connect directly with popular work tools, shifting from simple chatbot functionality toward agentic AI.
Data Science Agent is Googleās free AI that automates your data analysis setup.
Teamble helps you give and receive better workplace feedback via a conversation coach that works inside Slack and Microsoft Teams.
Pika 2.2 lets you generate HD video clips (1080p resolution) up to 10 seconds long with "endless" transformations of content per clip (demo).
Chikka interviews your customers with AI voice agents, giving you deeper insights without doing the interviews yourself (free to try).
Currents analyzes social media discussions to deliver real-time insights about what your target audience is talking about.
Guse lets you automate any workflow using a familiar spreadsheet interface (demo); free to try.
SmolVLM2 is a new small open-source AI model you can run on your device that understands videos, images, and text; try it here (video demo).
*This is sponsored content. Advertise in The Neuron here.

Around the Horn.
Pinterest launched tools to identify and filter AI-generated content after user complaints about "AI slop" overwhelming the platform.
Palo Alto Networks acquired Protect AI to strengthen its security offerings amid rising cyber threats from AI.
AI helped power Microsoft and Alphabet growth while tariffs threatened Apple and other consumer tech firms.
Salesforce launched tools and benchmarks to measure and improve AI consistency in business environments, addressing the "jagged intelligence" problem where systems perform unpredictably despite high capabilities.

FROM OUR PARTNERS
Voice AI in healthcare: No longer a security nightmare with this breakthrough solution
Deploying AI in healthcare? You're working with sensitive data and security requirements that can be tricky to navigate. Rely Health figured it out, and their CTO wants to show you how they built reliable voice AI agents (plus a live demo!).

Intelligent Insights
Ethan Mollick wrote about the convergence of ChatGPT's recent sycophancy and the ability of AI to be "hyper persuasive", which is a good reminder to never fully trust AI.
Check out this medical AI that helps surgeons plan lung operations more accurately by creating detailed 3D maps of body parts, cutting down mistakes that could harm patients.
Stuart J. Russell won the 2025 AAAI Award for his pioneering work in AI safety and beneficial AI development.
Researchers created the "Minimal Turing Test" requiring participants to use just one word to prove they were human rather than AI.
MIT studied a financial services company that used AI to deconstruct, redeploy, and reconstruct work processes, resulting in a 50% workforce reduction (a LOT, but actually lower than some estimates we've seen), 18% lower turnover, and 40% cost savings while improving customer outcomes.
Computer scientist Ellie Pavlick explored whether AI truly understands language, noting that even creators can't fully explain how their models work despite knowing the code.

A Cat's Commentary.


That's all for today! For more AI treats, check out our website. NEW: Our podcast is making a comeback! Check it out on Spotify. The best way to support us is by checking out our sponsors; today's are Innovating with AI and Vellum.
