Berlin, Germany – From autonomous cars to computers that can beat humans at their own games, people have a natural curiosity about artificial intelligence (AI). As scientists continue to make machines smarter and smarter, some wonder, “what happens when computers get too smart for their own good?” From “The Matrix” to “The Terminator,” the entertainment industry has long speculated about whether future robots will threaten the human race. Now, a new study concludes that there may be no way to stop the rise of the machines. An international team says humans would not be able to prevent an artificial superintelligence from doing whatever it wanted.
Scientists at the Center for Humans and Machines at the Max Planck Institute began by imagining what such a machine would look like. Imagine an AI program with an intelligence far superior to that of humans, so advanced that it could learn on its own without new programming. If connected to the internet, the researchers say, the AI would have access to all of humanity’s data and could even take control of other machines around the globe.
The study’s authors ask what such an intelligence would do with all that power. Would it work to make our lives better? Would it dedicate its processing power to fixing problems like climate change? Or would the machine try to control the lives of its human neighbors?
Controlling the uncontrollable? The dangers of artificial superintelligence
Both computer programmers and philosophers have studied whether there is any way to stop a superintelligent AI from turning on its human creators, ensuring that future computers cannot harm their owners. Unfortunately, the new study reveals that it appears to be virtually impossible to keep a super-smart AI in line.
“A super-smart machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned them. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines, in a university release.
The international team looked at two different ways to control artificial intelligence. The first would restrict the power of the superintelligence by boxing it in, preventing it from connecting to the internet or to any other technical devices in the outside world. The problem with this plan is quite obvious: such a computer would not be able to do much to actually help humans.
Being nice to humans doesn’t compute
The second option focused on creating an algorithm that would give the supercomputer ethical principles. Hopefully, this would force AI to consider humanity’s best interests.
The study’s authors created a theoretical containment algorithm that would prevent the AI from harming people under any circumstances. In simulations, the AI would stop working if researchers deemed its actions harmful. Yet even though such an algorithm would keep the AI from achieving world domination, the authors say it just wouldn’t work in the real world.
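The containment idea described above can be pictured as a supervisory loop: execute the AI’s actions one at a time and shut everything down the moment an action is judged harmful. The sketch below is purely illustrative; the function and action names (`run_contained`, `is_harmful`, `seize_network`) are assumptions, not code from the study.

```python
# Illustrative sketch of a containment loop: run actions one at a time,
# halting on the first action the overseer judges harmful.

def run_contained(propose_action, is_harmful, max_steps=100):
    """Execute proposed actions, stopping at the first harmful one."""
    executed = []
    for _ in range(max_steps):
        action = propose_action()
        if is_harmful(action):
            return executed, "halted"   # containment kicks in here
        executed.append(action)
    return executed, "completed"

# Toy usage: an action stream that eventually turns harmful.
stream = iter(["plan", "compute", "seize_network", "compute"])
actions, status = run_contained(
    propose_action=lambda: next(stream),
    is_harmful=lambda a: a == "seize_network",
)
print(actions, status)  # ['plan', 'compute'] halted
```

The catch, as the study argues, is the judgment step itself: deciding in general whether an arbitrary action leads to harm is exactly the part that turns out to be incomputable.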
“If you break the problem down into the basic rules of theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm was still analyzing the threat or whether it had stopped in order to contain the harmful AI. In effect, this makes the containment algorithm unusable,” said Iyad Rahwan, Director of the Center for Humans and Machines.
The study concludes that containing artificial intelligence is an incomputable problem. No computer program can find a surefire way to prevent an AI from causing harm if it wants to. The researchers add that humans may not even know when superintelligent machines have actually arrived. So, are they here yet?
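The incomputability result echoes Turing’s classic halting-problem argument. A minimal toy sketch of that style of reasoning, with hypothetical names not taken from the study: suppose a perfect safety checker `is_safe` existed; we could then build a program that does the opposite of whatever the checker predicts about it, so any concrete checker is wrong about at least one program.

```python
# Toy diagonalization sketch: a program that defeats any safety checker
# by inverting the checker's own verdict about it.

def make_paradox(is_safe):
    """Build a program that does the opposite of what `is_safe` predicts."""
    def paradox():
        if is_safe(paradox):
            return "harm"   # checker said "safe", so act harmfully
        return "idle"       # checker said "harmful", so do nothing
    return paradox

# Any concrete checker misjudges its own paradox program.
def optimistic_checker(program):   # always answers "safe"
    return True

def pessimistic_checker(program):  # always answers "harmful"
    return False

p1 = make_paradox(optimistic_checker)
p2 = make_paradox(pessimistic_checker)
print(p1())  # prints "harm": declared safe, yet it causes harm
print(p2())  # prints "idle": declared harmful, yet it is harmless
```

This is only an analogy to the study’s formal argument, but it conveys the core obstacle: a containment check that must predict a program’s behavior can always be turned against itself.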
The study appears in the Journal of Artificial Intelligence Research.