At 99, an elder statesman like Henry Kissinger could be forgiven for not getting ahead of artificial intelligence. But the former diplomat, who served under two presidents and played a major role in shaping American foreign policy during the Cold War, has been a frequent commentator on the latest developments in AI, and Kissinger’s campaign to warn governments of the dangers of the technology may be one of the final pieces of his legacy.
The conversation about AI has hit a fever pitch in recent months, ever since the debut of OpenAI’s ChatGPT in November pushed Microsoft, Google, and other tech companies into an AI arms race. People and businesses are now using AI in record numbers, while companies may be close to cracking the code on human-level artificial intelligence.
But Kissinger, the former Secretary of State and National Security Advisor who turns 100 on May 27, was concerned with AI years before intelligent chatbots entered the cultural zeitgeist. He is now calling on governments to take responsibility for the dangers of the technology, just as he has spent years campaigning against the proliferation of nuclear weapons.
“The speed of movement of artificial intelligence will make it problematic in crisis situations,” Kissinger said in an interview with CBS aired on Sunday. “I am now trying to do what I have done with regard to nuclear weapons, to call attention to the importance of the impact of this evolution.”
The inherent risks of AI
Kissinger’s interest in the consequences of AI began in 2016, when he attended that year’s Bilderberg Conference, a forum held since the 1950s to promote alignment between U.S. and European interests.
He attended the conference at the invitation of Google’s then-Executive Chairman Eric Schmidt, according to a 2021 TIME article. The two went on to co-author a book, along with computer scientist Daniel Huttenlocher, published in 2021 and titled The Age of AI, which argues that AI is on the precipice of sparking widespread revolutions in human society, while questioning whether we are ready for it.
That moment may have arrived, and it is not yet clear whether society is ready. Geoffrey Hinton, a former Google employee often referred to as the “Godfather of AI,” recently issued a series of warnings about the dangers of AI after leaving Google in part to speak openly about the topic.
Today’s AI capabilities are “very scary,” Hinton told the BBC last week, and as machines become more adept at more tasks, the opportunities for “bad actors” to use them for “bad things” will also grow, he told the New York Times earlier this month. In another interview with Reuters last week, Hinton warned that the existential threat of AI could “end up being more urgent” than climate change.
More than 1,000 technologists, historians, and computer scientists called in an open letter in March for a moratorium on the development of advanced AI systems, in order to gain a better understanding of the technology’s capabilities and risks, especially as companies work toward AI that equals or exceeds human intelligence. Other experts have argued that a pause may be impossible, because the U.S. and China are already locked in international competition over AI.
Kissinger, Schmidt, and Huttenlocher warned in a February op-ed for the Wall Street Journal that AI capabilities could “expand dramatically as technology advances.” The increasing complexity of AI with each new iteration means that even its creators don’t fully know what it can do, the co-authors wrote. “As a result, our future now has an entirely new element of mystery, danger and surprise,” they warned.
Calls to regulate
The AI situation has been compared to the crisis of unknown risks surrounding the development of nuclear weapons in the second half of the 20th century, which required international coordination to control. Berkshire Hathaway CEO Warren Buffett said during the company’s shareholder meeting last week that AI, although “amazing,” could be compared to the development of the atomic bomb because of its potential dangers and because “we cannot un-invent it.”
Hinton also compared the existential threat of AI to that posed by nuclear weapons in an interview with CNN last week, citing it as a possible area where the U.S. and China could cooperate on AI regulation.
“If there’s a nuclear war we all lose, and it’s the same if these things take over,” he said, though he noted in his New York Times interview that the situation with AI is fundamentally different, because it is easier for companies and countries to develop the technology behind closed doors than it is to build nuclear weapons.
Michael Osborne, a machine learning researcher at Oxford University, called for a non-proliferation treaty for AI, similar to the one governing nuclear weapons, in an interview with the Daily Telegraph in January. “If we get an understanding that advanced AI is as dangerous as nuclear weapons, then maybe we’ll come up with the same frameworks for managing it,” he said.
But in his interview with CBS, Kissinger acknowledged that an AI arms race is a completely different ball game from the race to develop nuclear weapons, because of its many unknowns.
“[I]t’s going to be different. Because in past arms races, you could develop plausible theories about how you might win. It’s a totally new problem intellectually,” he said.