🐈 Microsoft is bringing GPT to Bing

PLUS: detecting AI-generated text, Google's new model and more

Good morning! This is The Neuron. Get in the car, we have a long one today.

Today in AI:

  • GPT-Powered Bing Coming Soon

  • Did We Figure Out How to Identify AI-Generated Text?

  • Google Announces a New Text-to-Image Model

  • Around the Horn

  • Leo Sends His Regards

GPT-Powered Bing Coming Soon

Microsoft strikes at Google in the post-ChatGPT era.

The details of the announcement:

  • Microsoft will be using OpenAI's GPT model to power Bing

  • Some Bing search queries will return a ChatGPT-style answer instead of a list of links

  • Aiming for release by end of March 2023

How we got here:

  • In 2009, Bing launches as a "decision engine" that would "empower users to gain and use knowledge", not just provide search results

  • 13 years of slow going for Bing. In the US, its market share stagnates below 10%, even as Yahoo Search dies off

  • In 2019, Microsoft invests $1 billion in OpenAI

  • ChatGPT explodes onto the scene. Microsoft wonders if $1 billion wasn't enough

After all these years, this could be the big break Bing needs to leave a dent in Google.

They're moving fast. There are rumors that Microsoft's investment gives it some *ahem* advance notice about what's coming down the OpenAI pipeline.

That would explain how it managed to move so quickly. But it turns out this was all part of Microsoft CEO Satya Nadella's plan, dating back to the $1 billion investment in 2019. Genius.

Tread carefully, Goliath. As the dominant player, Google can't afford as many mistakes as Microsoft can. We broke down the factors any search engine has to get right.

Did We Figure Out How to Identify AI-Generated Text?

Someone's cracked the code...for now.

Lots of stuff can break if ChatGPT goes unchecked: high school homework and admissions essays, to name two. Being able to flag text as ChatGPT-generated is useful.

Princeton senior Edward Tian hacked together GPTZero to do just that. Paste in your text, and it will suggest whether it's likely AI-generated.

How does it work? If you've read enough ChatGPT outputs and felt that they all sound kinda similar, you're onto something.

Human writing is dramatic, creative, unpredictable. GPTZero measures this with two metrics: perplexity (how predictable the text looks to a language model) and burstiness (how much that predictability swings from sentence to sentence). The less random and bursty the text, the more likely it's AI-generated.
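For the curious, here's a rough sketch of that idea in Python. This is not GPTZero's actual code: the choice of GPT-2 as the scoring model, the naive sentence splitting, and the "low scores = suspicious" heuristic are all our own illustrative assumptions.

```python
# A minimal perplexity-and-burstiness sketch (illustrative only, not GPTZero's implementation).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity = the model finds the text more predictable.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def burstiness(text: str) -> float:
    # Spread of per-sentence perplexity; human writing tends to swing more.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    if len(scores) < 2:
        return 0.0
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = "Paste the text you want to check here. Use a few sentences for a meaningful score."
print("perplexity:", perplexity(sample))
print("burstiness:", burstiness(sample))
# Rough heuristic: low perplexity AND low burstiness -> more likely AI-generated.
```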

So, is it solved? Not quite. Researchers can train language models specifically to game these metrics. It's only a matter of time.

Meanwhile, OpenAI is reportedly working on a way to "watermark" its outputs. We'll see if that approach is any better.

Google Announces a New Text-to-Image Model

They claim it's better, faster and comes with batteries included.

Google had a big year in 2022 with text-to-image, releasing two models called Imagen and Parti last summer.

Now, they've got Muse, a model that revisits an older architecture, and they think they've got the sexy new stuff beat.

Stable Diffusion, DALL-E 2 and Midjourney are all diffusion models. How diffusion works is outside the scope of this newsletter, but you should know that these models are slow.

For example, Stable Diffusion took 18.5 seconds to generate an image in Google's benchmark. Muse takes 1.3 seconds, and Google says the output is even better.

Plus, editing comes built in - that's an add-on for Stable Diffusion.

Sounds great? Take it with a grain of salt. Muse is not yet open source or available to the public. We'll know whether it actually lives up to the claims once we can get our hands on it.

Around the Horn

Leo Sends His Regards

That's all we have for today. See you cool cats on Twitter if you're there: @nonmayorpete and @TheNeuronDaily

What'd you think of today's email?
