Tech luminaries weigh in with dire AI warnings

Computer scientists who helped build the foundations of today’s artificial intelligence technology warn of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

Humanity’s survival is at risk if “intelligent things can defeat us,” said the so-called Godfather of AI Geoffrey Hinton at a conference Wednesday at the Massachusetts Institute of Technology.

Having recently retired from Google so that he can speak more freely, the 75-year-old Hinton said he has changed his views about the reasoning capabilities of the computer systems he has spent a lifetime researching.

“These things can learn from us, by reading all the novels of the past and everything that Machiavelli wrote, how to manipulate people,” said Hinton, who addressed the crowd at MIT Technology Review’s EmTech Digital conference from home via video. “Even if they can’t pull the levers directly, they can certainly get us to pull the levers.”

“I wish I had a nice simple solution for this, but I don’t,” he added. “I’m not sure there is a solution.”

Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top prize in computer science, told The Associated Press on Wednesday that he was “somewhat in tune” with Hinton’s concerns brought on by chatbots such as ChatGPT and related technologies, but worried that simply saying “We’re doomed” isn’t helpful.

“The main difference, I would say, is that he is kind of a pessimistic person, and I am more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I think the dangers – the short-term ones, the long-term ones – are very serious and should be taken seriously not only by some researchers but by governments and the population.”

There are many signs that governments are listening. The White House called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what officials described as a frank discussion about how to mitigate the near- and long-term risks of their technology. European lawmakers are also speeding up negotiations to pass new AI rules.

But amid all the talk of the most dire future dangers, some worry that the hype around superhuman machines, which don’t exist yet, will distract from attempts to create practical safeguards for today’s largely unregulated AI products.

Margaret Mitchell, a former leader of Google’s AI ethics team, said she was upset that Hinton did not speak up during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

“It is a privilege to be able to escape from the realities of today’s rampant discrimination, the spread of hate speech, the toxicity and nonconsensual pornography directed at women, all of these issues that are actively hurting people who are marginalized in tech,” said Mitchell, who was also forced out of Google after Gebru’s departure. “He’s skipping over all of those things to worry about something farther away.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were jointly awarded the Turing Award in 2019 for their breakthroughs in artificial neural networks, which have been instrumental in the development of current AI applications such as ChatGPT.

Bengio, the only one of the three who does not work at a tech giant, has expressed concerns for years about the imminent dangers of AI, including the destabilization of the job market, automated weapons and the danger of biased data sets.

But those concerns have grown recently, leading Bengio to join other computer scientists and technology business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month moratorium on the development of AI systems more powerful than OpenAI’s latest model, GPT-4.

Bengio said Wednesday that he believes the latest AI language models have passed the so-called “Turing test,” the method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human – at least on the surface.

“That’s a milestone that has big consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for evil purposes to undermine democracies, for cyberattacks, disinformation. You can talk to these systems and think that you’re interacting with a human. They’re hard to spot.”

Where researchers disagree is whether current AI language systems — which have many limitations, including a tendency to fabricate information — can actually become smarter than humans.

Aidan Gomez is one of the co-authors of the pioneering 2017 paper that introduced the so-called transformer technique – the “T” at the end of ChatGPT – for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch in the company’s California headquarters when his team sent out the paper around 3 a.m., when it was due.

“Aidan, this is going to be huge,” he remembers a colleague telling him, about work that has since helped lead to new systems that generate humanlike prose and imagery.

Six years later, and now CEO of his own AI company, Cohere, in which Hinton has invested, Gomez is excited about the potential applications of these systems but troubled by fearmongering that he says is “isolated from the reality” of their true capabilities and “depends on extraordinary leaps of imagination and reasoning.”

“The idea that these models could somehow gain access to our nuclear weapons and launch some kind of extinction-level event is not a productive discourse to have,” Gomez said. “It’s damaging to really pragmatic policy efforts that are trying to do something good.”


