OpenAI CEO Sam Altman believes artificial intelligence has tremendous potential for society, but he’s also concerned about how bad actors can use the technology.
In an ABC News interview this week, he warned that “there are other people who don’t put some of the safety limits that we put on it.”
OpenAI released its AI chatbot ChatGPT to the public in late November, and this week it revealed a more capable successor called GPT-4.
Other companies are racing to offer tools like ChatGPT, giving OpenAI plenty of competition to worry about, despite the advantage of having Microsoft as a major investor.
“It’s competitive out there,” OpenAI cofounder and chief scientist Ilya Sutskever told The Verge in an interview published this week. “GPT-4 is not easy to develop…there are many, many companies that want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”
Sutskever was explaining OpenAI’s decision to reveal little about GPT-4’s inner workings, a choice that has led many to ask whether the name “OpenAI” still fits. But his comments were also an acknowledgment of the slew of rivals nipping at OpenAI’s heels.
Some of those rivals may be less concerned than OpenAI about putting guardrails on their equivalents of ChatGPT and GPT-4, Altman suggested.
“A thing that I do worry about is … we’re not going to be the only creator of this technology,” he said. “There will be other people who don’t put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
OpenAI this week shared a “system card” document that outlines how its testers deliberately tried to get GPT-4 to offer up dangerous information, such as how to make a dangerous chemical using basic ingredients and kitchen supplies, and how the company fixed the issues before the product’s launch.
Lest anyone doubt the malicious intent of bad actors seeking out AI, phone scammers are now using voice-cloning AI tools to impersonate people’s relatives in desperate need of financial help, and are successfully extracting money from victims.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
Considering he leads a company that sells AI tools, Altman has been notably forthcoming about the dangers posed by artificial intelligence. That may have something to do with OpenAI’s history.
OpenAI was founded in 2015 as a nonprofit focused on safe and transparent AI development. It switched to a hybrid “capped-profit” model in 2019, with Microsoft becoming a major investor (how much it can profit from the arrangement is limited, as the name of the model implies).
Tesla and Twitter CEO Elon Musk, who was also an OpenAI cofounder and made a large donation to it, has denounced this shift, saying last month: “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”
In early December, Musk called ChatGPT “scary good” and warned, “We are not far from dangerously strong AI.”
But Altman has been warning the public just as much, if not more, even as he presses ahead with OpenAI’s work. Last month, he worried about “how people of the future will view us” in a series of tweets.
“We also need enough time for our institutions to figure out what to do,” he wrote. “Regulation will be critical and will take time to figure out…having time to understand what’s happening, how people want to use these tools, and how society can co-evolve is important.”