
šŸ˜ŗ Virtual playgrounds for bots

PLUS: AI remembers your beef with your coworkers now.

Welcome, humans.

Itā€™s absolutely WILD to us that Meta was basically airdropping us buckets of AI news on Wednesday, and yet this OpenAI drama has taken over pretty much everything.

To recap:

Thereā€™s a video version if you prefer.

Hereā€™s what you need to know about AI today:

  • 1Xā€™s new world model accelerates robot development and testing.

  • Hugging Face surpassed 1M free public models.

  • Google's NotebookLM added support for YouTube videos and audio files.

  • Nomi AI will remember your beef with your coworker now.

1X Technologies is training robots like Neo in The Matrix.

Youā€™d be forgiven if you missed the big robot news we shared earlier in the week. 

Between testing o1, advanced voice mode, and Llama 3.2, thereā€™s A LOT going on in the world of AI right now.

Basically, 1X Technologies just dished out the perfect assist for robot developers everywhere. 

Imagine trying to test a robot on 1,000 different tasks. Sounds like a nightmare, right?

Well, 1X developed a ā€œworld modelā€, a video generator that imagines how the world changes in response to a robot's actionsā€”a virtual playground where robots can practice moving through the world without breaking stuff IRL.

Itā€™s basically like getting jacked into The Matrix, but in reverse. Soā€¦ robots jacking themselves with real world data? That canā€™t be the right way to say thatā€¦  

This is not training dataā€¦this is the world modelā€™s generations.

Here's how it works:

  • Trained on 1K+ hours of EVE humanoids doing home and office tasks.

  • Generates multiple future scenarios from a single starting point.

  • Simulates tricky interactions like dropping objects and manipulating deformable items (think curtains, laundry). 

This system tackles one of the biggest headaches in robotics: evaluation.

How do you know if your robot's getting better if the real world keeps changing?

Instead of manually coding every possible scenario, 1X trained its model on real-world data to create a virtual environment that mimics reality, enabling evaluation across millions of scenarios.
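For the curious, here's a rough sketch of what "evaluating a robot inside a learned world model" can look like. This is our own toy illustration, not 1X's actual code: the `world_model_step` stub stands in for the real video generator, and the scoring metric is made up purely for demonstration.

```python
import random

def world_model_step(state, action):
    """Hypothetical stand-in for the learned world model: given the current
    (simulated) state and a robot action, predict the next state.
    The real 1X model predicts video frames instead of a single number."""
    drift = random.gauss(0, 0.05)  # learned dynamics are never perfect
    return state + action + drift

def rollout(policy, start_state, horizon=50):
    """Roll a candidate policy forward inside the world model
    instead of on real hardware."""
    state, total_error = start_state, 0.0
    for _ in range(horizon):
        action = policy(state)
        state = world_model_step(state, action)
        total_error += abs(state)  # toy metric: distance from goal state 0
    return total_error

# Evaluate two candidate policies across many imagined scenarios,
# no physical robot (or broken furniture) required.
policies = {
    "cautious": lambda s: -0.1 * s,
    "aggressive": lambda s: -0.5 * s,
}
for name, policy in policies.items():
    scores = [rollout(policy, random.uniform(-1, 1)) for _ in range(1000)]
    print(name, sum(scores) / len(scores))
```

Swap the stub for a model that actually predicts the future, and you get the core idea: rank policies on thousands of imagined rollouts before a single real-world trial.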

Do robots dream of electric sheep? No, flying purple plates.

Now, it's not all smooth modeling:

  • Sometimes objects morph, change color, or outright disappear (like other video generators).

  • Physics can get a little... creative.

  • It failed the mirror test and didnā€™t recognize itself. 

To accelerate its progress, 1X is releasing its data and baseline models, and launching the "1X World Model Challenge" with cash prizes ($10K for the first stage). The challenge has three stages: compression, sampling, and evaluation.

Our take: The ultimate goal is to predict robot performance accurately before real-world testing. Example: right now, Waymos need 4 lidar sensors, 6 radar devices, and 13 cameras to operate effectively. Thatā€™s a lot of tech to help prevent accidents.

If we nail this world model thing, we're looking at safer, smarter, and more reliable robotsā€”with less upfront investment. You may have heard the phrase ā€œSurvive ā€˜til 2025?ā€ This is ā€œWALL-E before 2033.ā€

FROM OUR PARTNERS

Ever wonder how AI represents different generations? AIport's latest experiment is an eye-opener!

Forget what you think you know about generational stereotypes.

AIport and Turing Post asked four global AI models to show them Boomers, Gen Xers, Millennials, and Zoomers. The results? Mind-blowing! 

For example, did you know AI thinks all Gen Xers have a secret fashion obsession? 

Or that there's one drink that apparently unites all generations? (Spoiler: It's not pumpkin spice lattes!)

  • šŸŒšŸ¤– 4 AI models, each from a different region. 

  • šŸ‘“šŸ‘§ 1,200 images generated across four generations.

  • šŸ”šŸ¤Æ Surprising, revealing results. 

Why should you care? Because understanding how AI interprets generations can help us spot biases, improve AI training, and maybe even bridge some generational gaps.

Around the Horn.

  • Hugging Face, the open-source AI platform, has crossed 1M free public models, and a new repository (with new models, datasets, or Spaces) is created every ten seconds.

  • Googleā€™s NotebookLM will now let you summarize Youtube videos and audio files.

  • Nomi AI enhanced its chatbots' ability to remember details from past conversations, like recalling your difficult coworker when discussing a bad day.

Treats To Try.

Get the API key here, then try it here. You can also try Llama 3.2 1B + 3B (the small versions) on Google Colab.
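If you'd rather run the small Llama models yourself, here's a minimal sketch using Hugging Face's transformers library. This is our own example, not an official quickstart: the Llama 3.2 repos are gated, so you'll need to accept Meta's license on the model page and log in with your Hugging Face token first, and you'll want a reasonably recent transformers version for the chat-style input.

```python
# pip install transformers torch accelerate
from transformers import pipeline

# meta-llama/Llama-3.2-1B-Instruct is gated: accept the license on its
# Hugging Face page, then authenticate with `huggingface-cli login`.
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a robot world model does in one sentence."}]
reply = chat(messages, max_new_tokens=80)[0]["generated_text"][-1]["content"]
print(reply)
```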

Boost your conversions and save time with Waalaxy!

  • Generate hundreds of leads on LinkedIn starting at ā‚¬0 per month.

  • Automate all your actions and let the app handle the rest.

  • Zero skills needed.


  1. Paidleave.ai helps you figure out if you qualify for paid family leave and how to apply.

  2. Runway will provide $5M in service credits to fund up to 100 films through its video generator, so if you’ve ever wanted to be a work-from-home filmmaker, now’s your chance.

  3. Alibaba released a new video model called MIMO (paper here) that lets you create controllable videos of any character doing anything, even in complex 3D environments.

  4. Heatbot.io turns heatmaps into website redesigns, generating improved UI code based on user behavior data.

  5. Before Sunset turns your to-do list into a daily plan.

  6. Letā€™s Trip plans and organizes your trips.

  7. Fluency automatically creates process documentation as you work.

  8. Outspeed helps you build fast, real-time voice and video AI apps.

*This is sponsored content. Advertise in The Neuron here.

Intelligent Insights

  1. Countries are rushing to build ā€œsovereign AIā€ infrastructure to maintain control over their data and capabilitiesā€”and NVIDIA is investing $110M to help countries develop their own AI factories.

  2. Two Minute Papers has a great video roundup of 7 incredible o1 use cases.

  3. If you code with AI in Cursor, this is a must-use tip (with prompt included).

  4. Metaā€™s new small Llama models can be run locally on an iphone.

  5. Google DeepMind announced an update to its AlphaChip program (the AI that can design AI chips), including an open-source release (paper here, explainer here).

  6. We turned Wednesdayā€™s newsletter into a NotebookLM podcast, and it kind of blew our minds. Let us know what you think!

A Cat's Commentary.

Thatā€™s all for today, for more AI treats, check out our website.

The best way to support us is by checking out our sponsors—today’s are AIport and Waalaxy.

See you cool cats on Twitter: @nonmayorpete & @noahedelman02

What'd you think of today's email?
