‘Godfather of AI’ raises the possibility of humanity being wiped out by technology in the next 30 years

The British-Canadian computer scientist often referred to as the “Godfather” of artificial intelligence has raised the possibility of AI wiping out humanity within the next three decades, warning that the pace of change in the technology is “much faster” than expected.

Professor Geoffrey Hinton, who was awarded the Nobel Prize in Physics this year for his work in AI, said there is a “10% to 20%” chance that AI will cause human extinction within the next three decades.

Earlier, Hinton had said there was a 10% chance of the technology causing disastrous consequences for humanity.

When asked on BBC Radio 4’s Today programme whether he had changed his analysis of a possible AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10 to 20 [per cent].”

Hinton’s estimate prompted Today’s guest editor, former chancellor Sajid Javid, to say “You’re going over the top”, to which Hinton replied: “If anything. You see, we’ve never had to deal with anything more intelligent than ourselves before.”

He continued: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few such examples. There is a mother and child. Evolution did a lot of work in allowing the child to control the mother, but this is the only example I know of.”

London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like children compared to the intelligence of extremely powerful AI systems.

“I like to think of it this way: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be broadly defined as computer systems that perform tasks that typically require human intelligence.

Last year, Hinton made headlines after resigning from his job at Google to speak more openly about the risks posed by unrestricted AI development, citing concerns that “bad actors” would use the technology to harm others. A major concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could see the technology escape human control and pose an existential threat.

Reflecting on where he thought the development of AI would have reached when he first began his work on the technology, Hinton said: “I didn’t think it would be where we [are] now. I thought we would get here at some point in the future.”


He added: “Because of the situation we are in right now, most experts in the field think that sometime, maybe within the next 20 years, we are going to develop AI that will be smarter than people. And that’s a very scary thought.”

Hinton said that the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology.

“My concern is that the invisible hand will not keep us safe. So leaving it solely to the profit motive of big companies will not be enough to ensure that they develop it safely,” he said. “The only thing that can force those big companies to do more research on safety is government regulation.”

Hinton is one of the three “godfathers of AI” who have won the ACM AM Turing Award – the computer science equivalent of the Nobel Prize – for their work. However, one of the three, Yann LeCun, chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said that AI “could actually save humanity from extinction”.