AI can find exploits security researchers can't...
PLUS: AI can self-improve now?!

Welcome, humans.
Mary Meeker, the legendary tech analyst who called the internet boom, just dropped a 339-page AI report that's denser than a black hole and twice as mind-bending.
Here are a few spoilers for ya: ChatGPT's growth (800M users in 17 months) makes Google look like a tortoise, and inference costs (how much it costs to "prompt" an AI) fell 99.7% in the past two years.
Translation? AI = accelerating like nothing before. We'll break down the report in full for y'all tomorrow.
Now, ever wondered how influencers would handle the end of the world? The Dor Brothers, who use AI to make some pretty genius satirical videos, just made an "influenders" supercut with Google's new video model Veo 3 that feels a LITTLE too real… (in more ways than one).
That's not the only viral Veo 3 influencer video we saw this weekend; apparently, these "Historical / Bible times" vids of famous Biblical "influencers" are going bonkers on TikTok.
This trend has gone so viral, REAL influencers are now pretending to be "100% AI Veo 3" themselves. Y'know, for the views.
How viral are we talking here? Sir Demis Hassabis (CEO of DeepMind, who is in fact knighted) said "millions of videos" have been made with Veo 3 over the past few days.
If you didn't know, YouTube rn gets ~2.6M videos uploaded daily, while TikTokers upload ~34M vids a day. Veo 3 is still under the radar for most ppl, but imagine how quickly human-generated content will be overrun by AI content if this trend continues… (it could be "the end of the world internet as we know it…")
Here's what you need to know about AI today:
ChatGPT discovered a Linux security flaw cybersecurity experts missed.
UMG, Warner, and Sony moved to license catalogs to AI music providers.
Metaās AI to replace nearly all of its product risk reviews.
Researchers introduced an AI that can rewrite and improve its own code.

AI just found its first real zero-day vulnerability, and security researchers are freaking out…
One of the biggest risks of widespread AI is the ability for anyone with a ChatGPT account (or worse, ChatGPT itself) to find and exploit code vulnerabilities at scale. Well, that day just arrived, and the implications are wild.
Security researcher and Oxford PhD Sean Heelan just revealed how OpenAI's o3 model discovered CVE-2025-37899, a previously unknown remote vulnerability in the Linux kernel's SMB implementation.
Quick translation: this is like finding a secret backdoor into millions of computers worldwide that lets hackers break in from anywhere on the internet, in the most secure part of the operating system that powers everything from your WiFi router to Amazon's servers.
Matt Johansen breaks it down brilliantly in his video analysis: "Remote zero day in the Linux kernel is a very serious string of words."
Here's what makes this breakthrough so significant:
Sean wasn't just telling o3 to "go find bugs" when he found this issue. He was benchmarking the model using vulnerabilities he'd already discovered manually, when o3 surprised him by finding a completely different bug, one that even HE had missed.
It's all a bit technical, but basically, o3 found a catastrophic flaw that only a few dozen people on Earth could have spotted:
When given 3K lines of targeted code, o3 found known vulnerabilities 8% of the time.
With 12K lines of code, success dropped to 1%, but that's when it found the previously unknown vulnerability.
The catch? It generated false positives 28% of the time.
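To see what those hit rates mean in practice, here's a back-of-envelope sketch. One simplifying assumption is ours, not Sean's: we treat the 1% true-positive and 28% false-positive figures as independent per-run rates.

```python
# Back-of-envelope triage math for the 12K-line setting above.
# Assumption (ours): the 1% and 28% figures behave as per-run rates.

def expected_reports(runs: int, true_rate: float, false_rate: float):
    """Expected count of real findings vs. false alarms over many runs."""
    return runs * true_rate, runs * false_rate

true_hits, false_alarms = expected_reports(100, 0.01, 0.28)
# Over 100 runs: roughly 1 real bug buried in ~28 false reports.
```

Even at those odds, one real kernel zero-day easily repays the triage time, which is exactly Sean's point.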
What's particularly fascinating is o3's bug report quality. Not only did it find the vulnerability, but it also correctly identified why Sean's proposed fix for a similar bug would've been insufficient, spotting a glitch (a race condition) he missed.
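For the curious, the "race condition" bug class can be sketched in a few lines of Python. This is a toy illustration of a check-then-use race, not the actual kernel code; the names (`Session`, `logoff`, `lookup_name`) are hypothetical stand-ins:

```python
class Session:
    def __init__(self):
        self.user = {"name": "alice"}

def logoff(sess):
    # Analogous to another thread freeing the session's user object.
    sess.user = None

def lookup_name(sess, interleave=lambda: None):
    """Buggy handler: checks sess.user, THEN uses it.

    `interleave` stands in for a context switch in the race window;
    in a kernel, another connection's logoff could run right there.
    """
    if sess.user is not None:
        interleave()              # <-- the race window
        return sess.user["name"]  # crashes if logoff ran in the window
    return None

# Without interference, the check protects the use:
s1 = Session()
assert lookup_name(s1) == "alice"

# With a "logoff" squeezed into the race window, the check is stale:
s2 = Session()
try:
    lookup_name(s2, interleave=lambda: logoff(s2))
except TypeError:
    pass  # 'NoneType' object is not subscriptable: the use-after-free analogue
```

The fix for bugs like this is to make the check and the use atomic (e.g., hold a lock or a reference across both), which is why a patch that only nulls the pointer can be insufficient.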
Sean's takeaway says it all:
"If we were to never progress beyond what o3 can do right now, it would still make sense for everyone working in vulnerability research to figure out what parts of their workflow would benefit from it."
Why this matters: AI could actually become cybersecurity researchers' most powerful tool. Companies spending millions on bug bounties should take note: the economics of vulnerability discovery just changed dramatically.
The current signal-to-noise ratio (1:50) means you'll wade through false positives, but as Sean notes, "had I used o3 to find and fix the original vulnerability, I would have, in theory, done a better job than without it."
The scary part? If researchers are using AI to find vulnerabilities, so are the bad guys.
So what should regular people do? First, don't panic: a patch has already been committed and merged into the official Linux kernel repository. Unless you're managing Linux servers, this shouldn't impact you directly.
As Matt concludes in his analysis, "Security is always an arms race, and we just got access to a stronger, more capable arm." He's excited rather than scared, because the same AI helping find these bugs will help patch them faster too.
For vulnerability researchers, the message is clear: start integrating these tools now or risk being left behind. For the rest of us? Make sure automatic updates are turned on, and buckle up: the age of AI-powered cyber-warfare has begun.

FROM OUR PARTNERS
Build AI that actually works for your business
There's a huge gap between flashy AI demos and real value. Enter Agents from Retool, the easiest way to turn AI into meaningful action by connecting directly to your business systems.
With Retool, you get:
Model flexibility: Use OpenAI, Anthropic, Azure, or custom models.
True integration: Works with existing databases, APIs, and business tools.
Enterprise security: Fine-grained RBAC and permissions built in.
High-level control: Visual builder meets code flexibility.
Experience a proven platform trusted by 10,000+ companies, from startups to Fortune 500s. Ramp saved $8M and increased efficiency 20%.

Prompt Tip of the Day
Want a prompt straight from a research scientist at DeepMind to help you learn through "Socratic tutoring"? Shared by Dwarkesh Patel (of the Dwarkesh pod), this prompt gets your AI to keep "asking you probing questions which reveal how superficial your understanding is."
"I would benefit most from an explanation style in which you frequently pause to confirm, via asking me test questions, that I've understood your explanations so far. Particularly helpful are test questions related to simple, explicit examples. When you pause and ask me a test question, do not continue the explanation until I have answered the questions to your satisfaction. I.e. do not keep generating the explanation, actually wait for me to respond first. Thanks!"
Just start a new chat with your favorite AI model, copy and paste this prompt in, then ask it what you want help learning!
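If you'd rather bake the prompt into your own tooling instead of pasting it each time, here's a minimal sketch of wiring it in as a system message. The commented-out client call and model name are assumptions; adapt them to whatever API you use.

```python
# Dwarkesh's Socratic tutoring prompt, quoted from above.
SOCRATIC_PROMPT = (
    "I would benefit most from an explanation style in which you frequently "
    "pause to confirm, via asking me test questions, that I've understood your "
    "explanations so far. Particularly helpful are test questions related to "
    "simple, explicit examples. When you pause and ask me a test question, do "
    "not continue the explanation until I have answered the questions to your "
    "satisfaction. I.e. do not keep generating the explanation, actually wait "
    "for me to respond first. Thanks!"
)

def build_chat(topic: str) -> list[dict]:
    """Assemble a chat payload with the tutoring prompt as the system message."""
    return [
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": f"Help me learn about {topic}."},
    ]

# Hypothetical usage with the OpenAI Python client (needs an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o", messages=build_chat("Bayes' theorem"))
```

Putting the prompt in the system slot (rather than the user message) tends to keep the "wait for my answer" behavior stable over a long chat.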

Treats To Try.
In addition to running DeepSeek's new R1 model on your own computer (at a lower quality, mind you), you can also run it on OpenRouter for free.
Google AI Edge is a platform enabling developers to build AI features that will run directly on mobile devices and web apps, while the Edge Gallery includes models you can download and run on your Android phone locally (so private and offline).
PrettyPrompt is a Chrome extension that optimizes your AI prompts to get more useful responses (like Grammarly, but for prompts).
OpenPaper makes it really easy to upload, read, and chat with research papers, and even cites where it got the information from in the original paper.
Duckbill uses human assistants and AI to handle your life admin, like booking appointments, paying tickets, and finding gifts, so you get your weekends back (paid only, rn).
Imagen 3 (Google's last-gen image generator) lets you create a limited quota of free images on AI Studio; if you need images and hit your limit w/ GPT, try it out.

Around the Horn.
Major record labels (UMG, Warner, Sony) neared a settlement with AI music providers Suno and Udio, and are now in talks to license their catalogs to the AI companies.
Meta planned to automate 90% of all risk assessments (like for new safety features), although Meta says only "low-risk decisions" will be automated.
Anthropic, maker of Claude, hit $3B in annualized revenue (current monthly revenue multiplied by 12), up from $2B back in March.
AI-powered roll-ups are becoming a new trend: buying "people-intensive" professional service firms (law firms, healthcare practices, customer service) and using AI to increase efficiency (i.e., the classic "PE haircut" of lowering headcount), raise margins from 10% to 40%, then using the gains to roll up more.
Three new AI studies to check out:
AI optimized for user feedback learned to manipulatively target vulnerable users with behaviors that were hard to mitigate or detect, distorting their reasoning.
Researchers showed state-of-the-art AI models are easily manipulated by a universal jailbreak, fueling "dark LLM" risks.
Researchers introduced the Darwin Gödel Machine, an AI system that can rewrite and improve its own code, and has surpassed other self-improving systems.
For more, check out the top AI papers of last week here.

FROM OUR PARTNERS
Are you building a rat's nest?
Building your own data processing pipeline starts simple, but scaling it is another story. What begins as a few scripts and connectors quickly turns into a tangled mess of never-ending fixes and updates. Unstructured replaces the DIY rat's nest so you can focus on AI innovations.

Monday Meme


A Cat's Commentary.


That's all for today. For more AI treats, check out our website. NEW: Our podcast is making a comeback! Check it out on Spotify. The best way to support us is by checking out our sponsors; today's are Retool and Unstructured.
