AI ‘controls humanity’ in worst-case scenario but might just find us boring, says Stability AI CEO Emad Mostaque

Emad Mostaque hopes AI will find us “a bit boring” but acknowledges that in the worst-case scenario it “basically controls humanity.”

Mostaque is the CEO of fast-growing London-based Stability AI, which popularized Stable Diffusion, a generative AI tool that lets users create often remarkably sophisticated images from nothing but a text prompt. He made the comments in a BBC interview released this week.

“If you have something that is more capable than you, what is democracy in that kind of environment? It’s a known unknown,” he told the British broadcaster. “Because we can’t think of something more capable than us, but we all know people who are more capable than us. So, my personal belief is that it will be like the movie Her with Scarlett Johansson and Joaquin Phoenix: People are kind of boring, and it’s going to be like, ‘Goodbye’ and ‘You’re boring.’”

“But I could be wrong,” he added. “I think it should be discussed in a public place.”

In March, Mostaque joined Tesla CEO Elon Musk and Apple cofounder Steve Wozniak in signing an open letter calling for a pause on AI development beyond GPT-4, the AI model from Microsoft-backed OpenAI, which also makes ChatGPT and DALL-E 2 (the latter, like Stable Diffusion, converts text prompts into images).

“If we have agents more capable than us that we can’t control, that go on the internet and hook up and achieve a level of automation,” he told the BBC, “what does that mean?”

Stability AI is racing ahead, however, to develop new products—including a text-to-animation tool released this week—and attract investors. It is seeking to raise funds at a $4 billion valuation, after raising about $100 million at a $1 billion valuation last October. (Coatue Management and Lightspeed Venture Partners are among its investors.)

At the same time, Stability AI is being sued by Getty Images in a significant copyright case. Such a case is perhaps inevitable given that text-to-image AI models like Stable Diffusion are trained on billions of images pulled from the internet.

Asked by the BBC what the worst-case scenario was, Mostaque said: “The worst-case scenario is that it proliferates and basically it controls humanity. Because you could have a million of these things effectively replicated.”

Notably, Stable Diffusion is open source, meaning anyone can inspect the code, share it, and use it.

In March, Musk, who cofounded and helped fund OpenAI, criticized it for moving away from a nonprofit model, taking large investments from Microsoft, and not being open source. He tweeted:

“OpenAI was created as an open-source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

“I don’t think there should be a need for trust,” Mostaque told the BBC. “When you create open models and you do it in the open, you should be criticized when you do things wrong and expected to be praised when you do some things right.”