Stephen Hawking has already sounded the alarm, as has Elon Musk. But at tech giant Google, too, they are uneasy. In fact, the person responsible for Google's artificial intelligence arm predicts that technology will contribute to the end of humanity.
Google has assembled a team of experts in London who are working to “solve intelligence.” They make up Google DeepMind, the US tech giant’s artificial intelligence (AI) company, which it acquired in 2014.
In an interview with MIT Technology Review, published yesterday, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company’s biggest fears concerning the future of AI.
Hassabis and his team are creating opportunities to apply AI to Google services. AI is about teaching computers to think like humans, and improved AI could help forge breakthroughs in many of Google's services. It could enhance YouTube recommendations for users, for example, or make the company's mobile voice search better.
But it’s not just Google product updates that DeepMind’s cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team’s advances could be what finishes off the human race. He told the LessWrong blog in an interview:
“Eventually, I think human extinction will probably occur, and technology will likely play a part in this.”
He adds that he thinks AI is the “no. 1 risk for this century.”
People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They’re concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.
Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games — it wasn’t programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California says people were “shocked”.
Google is also concerned about the “other side” of developing computers in this way. That’s why it set up an “ethics board”. It’s tasked with making sure AI technology isn’t abused. As Hassabis explains: “It [AI] is something that we or other people at Google need to be cognizant of.” Hassabis does concede that “we’re still playing Atari games currently” — but as AI moves forward, the fear sets in.
The main point of Google DeepMind’s AI, says Hassabis, is to create computers that can “solve any problem”. “AI has huge potential to be amazing for humanity,” he says in the Technology Review interview. Accelerating the way we combat disease is one idea. But it’s exactly technology capable of such brilliance that makes people so afraid.