Multimodal AI = multi-danger
PLUS: AI use hurts credibility w/ coworkers?!

Welcome, humans.
It's casual Friday, so let's meme it up in here. Wanna see ChatGPT being an absolutely ruthless s.o.b.? Here's what happened when someone asked it to roast every European country and all 50 states.
As someone currently on a cross-country road trip across the U.S., I can confirm at least some of GPT's takes are accurate. As someone from California specifically, the accuracy stings.
Also, did y'all hear we have a new pope? I was exploring DC all day and I heard something about seagulls stealing the show... but I didn't know this is what they meant!
Here's what you need to know about AI today:
Multimodal AIs can be hijacked with a JPEG.
Meta's new smart glasses can recognize people by name.
Google's AMIE AI doctor analyzes medical images to aid in diagnosis.
Adobe and MIT built an AI that makes videos faster than your coffee brews.

Today's AI can do A LOT more than chat... including ignoring safety filters.
When you think of AI, you probably think of ChatGPT: a basic chat-based interface where you type a question like you would into Google and get a response back.
That's just the surface, my friends. The hottest trend in AI right now? Multimodal models: AI tools that don't just read text but also process images, audio, and other media. They're smarter, more flexible, and frankly, kind of impressive.
Here's a quick explainer if you need a visual on what makes them such a big deal.
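If you want to see what "multimodal" actually looks like under the hood, here's a minimal sketch of sending text and an image together in a single request, using OpenAI's Python SDK purely as an example (the model name and image URL below are placeholders we picked for illustration):

```python
# Minimal sketch: one request that mixes text and an image.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any multimodal chat model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Notice that the prompt and the image travel together in one message. That image slot is exactly where the attacks described below can be smuggled in.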
But behind all that progress? A whole lot of new vulnerabilities.
According to a new report from Enkrypt AI, multimodal models have opened the door to sneakier attacks (like Ocean's Eleven, but with fewer suits and more prompt injections).
Naturally, Enkrypt decided to run a few experiments... and things escalated quickly.
They tested two of Mistral's newest models, Pixtral-Large and Pixtral-12B, built to handle words and visuals.
What they found? Yikes:
The models are 40x more likely to generate dangerous chemical/biological/nuclear info.
And 60x more likely to produce child sexual exploitation material compared to top models like OpenAI's GPT-4o or Anthropic's Claude 3.7 Sonnet.
So, yeah. Multimodal AI is powerful, but it's also like giving two kids access to a nuclear launch panel and watching them argue over who gets to press the red button.
Why this matters: The malicious instructions attackers use are often buried in images or audio rather than plain text, which makes them much harder to detect and multimodal AI much easier to exploit. These vulnerabilities are emerging at the exact moment AI tools are becoming easier to access, faster to scale, and harder to audit. Perfect timing, really.
Multimodal AI is starting to appear in everything from enterprise platforms to education tools. If we don't patch the holes soon, someone will exploit them, in ways we haven't even imagined yet.
Our take: We're teaching AI to see, hear, and speak; maybe let's also teach it not to hand out weapon blueprints and war crimes on command?? Safety work may not be sexy, but it's what keeps these models from accidentally becoming Bond villains.
On the other hand, open-source AI tools like DeepSeek give everyday people access to top models (for free, or near free), and we don't want the AI baby to get thrown out with the bioweapon-producing bathwater. Microsoft just admitted it banned its employees from using DeepSeek's app due to concerns about Chinese data collection and propaganda, and the US government could soon make that ban nationwide.
Surely there's a compromise here somewhere between greater access and safety?

Prompt Tip of the Day
Reddit users have cracked the code on creating "exceptional" resumes that are landing interviews even for positions "way above their level."
Their secret? Feed ChatGPT your real experiences organized by category (LEADERSHIP, CHALLENGES, TEAMWORK, etc.) along with the job description to generate tailored STAR-method responses.
If you don't know, STAR is an interview technique where you describe the situation you were in (situation), what you needed to do (task), what you actually did (action), and how it turned out (result). Super helpful for not rambling when they ask about your experience!
Their simple two-step process has hiring managers consistently calling applications "exceptional."
Step 1: Feed ChatGPT your CV and the job description with this prompt: "Optimize my CV and experience to perfectly match this specific job."
Step 2: Use this follow-up: "Give me excellent answers to potential interview questions using my CV and the STAR method with specific examples of how I'm suitable for this role."
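If you'd rather script the two steps instead of pasting them into the ChatGPT UI, here's a minimal sketch using OpenAI's Python SDK. The file names and model choice are placeholder assumptions on our part, not part of the Reddit tip:

```python
# Minimal sketch of the two-step resume + interview-prep flow via the API.
# Assumes your CV and the job description live in plain-text files (placeholder names).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cv = open("my_cv.txt").read()
job = open("job_description.txt").read()

# Step 1: tailor the CV to the specific job.
step1 = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": (
            "Optimize my CV and experience to perfectly match this specific job.\n\n"
            f"CV:\n{cv}\n\nJob description:\n{job}"
        ),
    }],
)
optimized_cv = step1.choices[0].message.content

# Step 2: generate STAR-method answers from the tailored CV.
step2 = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Give me excellent answers to potential interview questions using my CV "
            "and the STAR method with specific examples of how I'm suitable for this role.\n\n"
            f"CV:\n{optimized_cv}\n\nJob description:\n{job}"
        ),
    }],
)
print(step2.choices[0].message.content)
```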
The key? Don't fabricate anything: just organize your authentic experience in a way that precisely matches what employers are seeking.
Interviewers respond dramatically better to applications that speak their language. When you match your genuine experience to their specific needs using AI, you bypass the algorithms and speak directly to the hiring manager's wishlist.
But don't stop there: you can also create a complete AI interview system. Use ChatGPT to generate practice questions, conduct mock interviews in voice mode, and even get pre-interview pep talks to calm your nerves.
One user who applied this method went from 151 applications with 6 interviews (and no offers) to 10 applications resulting in 3 interviews and counting. (Honestly, you should swipe their entire method of job hunting: copy what they wrote on Reddit into GPT and ask it to create a training doc to help you follow their methodology yourself. Here's an example of what we mean, based on this chat!)
Pro tip: For resumes, always humanize the AI output by removing telltale signs like blue headers and excessive bullet points. Also, ask GPT to "dumb down" the writing.
Our favorite IRL insight: Approach your interviews with a "take it or leave it" attitude. Several users reported their best interviews happened when they relaxed after thinking they'd already blown it! If you use these tactics and they help you get a job, let us know!

Treats To Try.
*Speak, pause, send. Flow polishes your voice into share-ready text 3x faster than typing. No typos, no fuss. Get Wispr Flow Today.
Figma Sites lets you design, build, and publish websites directly within its platform.
Zapier's MCP connects your AI to 8,000+ apps so it can send emails, update spreadsheets, and post to social media for you. Free to try.
Hedy coaches you through meetings and conversations in real time, providing smart questions, insights, and comprehensive summaries. Free to try, then $9.99/month (but Mac only rn).
TranslateAIr instantly translates your selected text and captures text from images without ever leaving your current app. Free to try (Mac only rn).
Agenda Hero Magic Pages turns your event flyers or PDFs into shareable calendar events your entire team can instantly add to their calendars. Free trial available.
Korl creates personalized presentations that turn your Jira and Salesforce data into customer-ready slides.
Explorium MCP plugs your AI agents directly into live B2B data on 150M+ companies, letting you instantly find and research accounts without managing multiple data vendors. Free options available.
Ciro builds your perfect sales prospect list in under 5 minutes, finding exactly who you need with verified emails and mobile numbers ready for your CRM.
ChatGPT now connects to GitHub, and Amazon has a new Enhance My Listing feature.
*This is sponsored content. Advertise in The Neuron here.

Around the Horn.
Meta is building smart glasses with "super-sensing" vision capable of recognizing people by name.
Google's AMIE AI can now interpret visual medical data, like rashes or ECGs, alongside patient conversations.
Adobe Research and MIT teamed up to develop "CausVid," a hybrid AI model that can generate videos in seconds.
US Treasury Secretary Scott Bessent told Congress this week that fired IRS enforcement agents will be replaced with AI.

Intelligent Insights
A new study analyzing nearly 275K references generated by GPT-4o found that the model systematically reinforced the Matthew effect in citations by consistently favoring already popular papers, with 90% of its references appearing in the top 10% most cited papers within their respective fields.
Researchers found that asking chatbots for shorter answers can actually increase hallucinations, according to a new study on AI response accuracy.
The Massachusetts Green High Performance Computing Center received a $31M state grant to support AI research and applications.
Yale researchers showed that AI hallucination detection requires labelling of negative examples, highlighting the importance of expert human feedback.
Google DeepMind CEO Demis Hassabis advised students to prioritize fundamentals over trends and blend core skills with their passions.
New research suggests that AI use can damage your professional reputation, based on studies examining how coworkers perceive work produced with AI assistance.

A Cat's Commentary.


That's all for today. For more AI treats, check out our website. NEW: Our podcast is making a comeback! Check it out on Spotify. The best way to support us is by checking out our sponsors; today's is Wispr Flow.
