Geoffrey Hinton’s artificial intelligence (AI) research has helped advance technologies that were once the stuff of Sci-Fi flicks, from facial recognition to chatbots like OpenAI’s ChatGPT and Google’s Bard. The British-Canadian computer scientist earned the title “the godfather of AI” by dedicating his career to the study of neural networks—complex computer models whose layered structures mimic the human brain—for decades before the technology became mainstream. But last month Hinton resigned from a position he held at Google for more than a decade, telling the New York Times he made the decision so he could speak freely about the “dangers of AI” without considering how it would affect the company.
Since then, he has been on a Paul Revere-esque campaign to warn about the existential dangers to humanity posed by AI in a series of interviews that have even garnered the attention of rapper Snoop Dogg, who recently referenced Hinton’s claim that AI is “not safe.” “Snoop got it,” Hinton told Wired on Monday.
The AI pioneer’s latest warning? Even the threat of climate change is nothing compared to AI.
“I don’t want to underestimate the value of climate change. I don’t want to say, ‘You shouldn’t worry about climate change.’ That is also a big risk,” he told Reuters on Friday. “But I think it might be more urgent.”
Hinton believes that AI systems will eventually become more intelligent than humans and take over the planet, or that bad actors may use the technology to foment division in society in hopes of gaining power—and that’s all before considering the threat of job losses. And while the solution to climate change is pretty obvious (“just stop burning carbon”), when it comes to AI, Hinton cautions, “it’s never clear what you’re going to do.”
In his campaign to warn of the dangers of AI, Hinton has compared the technology to the birth of nuclear weapons, and admits that he regrets much of his work now that he sees its destructive potential. “I consoled myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the New York Times in late April.
Comparing the rise of artificial intelligence to the creation of nuclear weapons may sound hyperbolic, but even Warren Buffett sees parallels. The 92-year-old investing legend invoked a warning Albert Einstein delivered after the birth of the atomic bomb at Berkshire Hathaway’s annual conference over the weekend, saying that AI “will change everything in the world except how people think and behave.”
And Hinton, who won the Turing Award in 2018 for his lasting contributions of technical importance to computer science, warned earlier this month in an interview with the BBC of a “nightmare scenario” in which chatbots like ChatGPT are used to seek power. “It’s hard to see how you can prevent the bad actors from using it for bad things,” he said.
In a separate interview at MIT Technology Review’s EmTech Digital conference last week, the computer scientist told the crowd: “These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people. Even if they can’t directly pull the levers, they can certainly get us to pull the levers.”
“I wish I had a nice simple solution for this, but I don’t,” he added. “I’m not sure there is a solution.”
But no pause on AI
The potential risks posed by AI prompted more than 1,100 prominent tech figures, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak, to sign an open letter earlier this year calling for a six-month pause on the development of advanced AI systems. But Hinton told Reuters on Wednesday that stopping the progress of AI is “unrealistic.”
“I’m in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now and putting a lot of resources into figuring out what we can do about it,” he said.
In an interview with CNN last week, the computer scientist explained that if the U.S. stops developing AI tech, “China won’t.” And in a May 5 tweet, he made his position clear:
“There’s so much potential benefit that I think we need to continue to develop it but also put the same resources into making sure it’s safe.”
To that end, President Biden and Vice President Harris met with AI leaders including Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman last week to discuss the need for safety and transparency in the field, as well as the potential for new regulations. And the European Union’s AI Act—which classifies AI systems into different risk categories, adds transparency requirements, and includes measures to prevent bias—is expected to be in place by the end of the year. Following Musk’s letter, a committee of EU lawmakers also agreed to a new set of proposals that would force AI companies to disclose whether they use copyrighted material to train their systems, Reuters first reported on May 1.