In an interview with the BBC, Geoffrey Hinton said he does not believe that AI systems are yet as intelligent as humans. “But I think they soon may be,” he said. Hinton cited that risk in explaining his resignation from Google on Monday after 10 years at the tech giant. He said he wants to be able to talk freely about the risks posed by AI, but emphasized his belief that the company “has acted very responsibly.”
“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he told the British broadcaster.
“The big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world,” Hinton said. “All these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
In particular, he warned that malevolent actors, such as authoritarian leaders, could exploit AI technology for harm. If a system attained the ability to create its own “sub-goals,” for example, Hinton said, it might independently decide to start pursuing power for itself. This outcome would be “a nightmare scenario,” he said.
He also said that AI advances in text could lead to “very effective spambots,” which could allow “authoritarian leaders to manipulate their electorates.”
After the computer scientist joined Google in 2013, he designed machine-learning algorithms and was eventually promoted to vice president. According to his Google profile, Hinton contributed to “major breakthroughs in deep learning that have revolutionized speech recognition and object classification.”
In a separate interview, Hinton told the New York Times that the possibility of AI surpassing human intelligence was becoming reality faster than he previously anticipated.
“I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” he said. He called for top scientists to collaborate on ways to control the technology and mitigate its risks. However, he also acknowledged that, unlike with nuclear weapons, there are no international regulations to prevent or penalize the secret use of AI by governments or companies.
He said another immediate concern is the risk that fake images, videos and text could leave most people “not able to know what is true any more.” One example emerged in March, when an image of Pope Francis wearing a white puffer coat went viral — but the picture turned out to be fake.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the Times, referring to his key role in developing AI technology.
“We remain committed to a responsible approach to AI,” Google’s chief scientist, Jeff Dean, said in a statement published by several news outlets in response to Hinton’s resignation and comments. “We’re continually learning to understand emerging risks while also innovating boldly.”
Google did not immediately respond to an overnight request from The Washington Post for comment.
Earlier this year, The Post reported that the launch of OpenAI’s ChatGPT was forcing tech giants such as Meta and Google to move more quickly to release their own AI products. Since then, others have spoken out about the potential risks of the technology.
In March, around 1,000 business leaders, academics and tech workers signed an open letter calling on companies such as OpenAI, Google and Microsoft to “pause” work on AI systems until their risks can be determined. No senior executives from OpenAI or companies such as Google added their names to the letter, however.
Hinton, an emeritus distinguished professor at the University of Toronto, received his PhD in artificial intelligence from the University of Edinburgh in 1978 and began working part-time for Google in 2013. In 2018, Hinton was one of three computer scientists to win the prestigious Turing Award — often called the Nobel Prize of computing — for their work in artificial intelligence.