
🐈 AI changes everything for non-native English speakers

PLUS: Stability AI teases a new image model

Morning team! This is The Neuron. In between cat naps, we study what's going on in AI and let you know what's up.

Today in AI:

  • Killer AI Use Case: Polishing Written English

  • Another Image Model On The Horizon?

  • Around the Horn

  • Leo Sends His Regards

Killer AI Use Case: Polishing Written English

Writing in a second language is hard.

We once wrote "estoy embarazada" ("I'm pregnant," not "I'm embarrassed") to an online penpal and never heard the end of it from our teacher.

English? Double the trouble. You could study for years and still set off someone's "non-native" radar.

Non-native English speakers know this well, so they're turning to AI tools to polish their writing. For some, it's a "take me to the finish line" product. For others, it's where they start.

This week, researchers, many of whom are multilingual, were up in arms when a major conference announced a ban on language-model-generated text in submissions.

The backlash was stiff enough that the organizers quickly clarified they only meant wholesale copy/paste was banned. Using AI to edit is allowed.

Because ChatGPT doesn't always get the facts right, there are only a few use cases that it's fully ready for. This is one of them.
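Want to try it yourself? Here's a minimal sketch of the kind of editing prompt that fits this use case. The wording and function name are our own illustration, not any official API or recipe:

```python
def build_polish_prompt(draft: str) -> str:
    """Wrap a draft in an editing instruction for a chat-style AI model.

    The instruction asks for edits only, so the model fixes grammar and
    phrasing without changing what you said (editing, not generating).
    """
    return (
        "Edit the following text so it reads like fluent, natural English. "
        "Fix grammar and awkward phrasing, but do not change the meaning "
        "or add new content.\n\n"
        f"Text to edit:\n{draft}"
    )

# Example: paste the returned string into ChatGPT (or send it to any LLM API).
prompt = build_polish_prompt("I am agree with you, we can to meet tomorrow.")
print(prompt)
```

Because the prompt asks for edits rather than a rewrite, the meaning stays yours; the model just sands down the rough edges.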

And the audience is huge: roughly 68 million Americans speak a language other than English at home.

Another Image Model On The Horizon?

Is this just good timing or some friendly competition?

Yesterday, we talked about Google's new Muse text-to-image model. One detail we didn't point out: their project page shows a rotating set of images, all containing the word "Muse".

That's the researchers saying, "Hey, Muse is also good at making images that have text in them."

Which, by the way, neither DALL-E 2 nor Stable Diffusion can do well:

DALL-E 2

Stable Diffusion, same prompt

Leo, get your painting clothes on - we have to do some redecorating. We're now called The RerrJnn!!

Stability AI tweeted a few pictures today to say, "Yeah, we're about to be real good at making images that have text in them."

What's the gossip? This is likely a new model called IF (see bottom-right of images above), developed by a new Stability AI subdivision called DeepFloyd.

Some are hoping that this is also the long-awaited "distilled" model, which could speed up image generation 20x. That was one of Muse's central pitches, too.

Finally, Stability AI being Stability AI, the tweet drops a big fat hint that the model will be open-source, which Muse is not.

Nothing's official yet, and we could be totally wrong. We'll be back when we've got news on IF and DeepFloyd.

Around the Horn

Leo Sends His Regards

That's all we have for today. See you cool cats on Twitter if you're there: @nonmayorpete and @TheNeuronDaily
