Sadly, humans won’t be able to contain superintelligent machines
The more our society becomes dependent on A.I., the more likely it is that superintelligent machines will take over. The 5G controversy, coupled with the internet being available to everyone, all but ensures it will happen. People will frame the issue as one of exclusion, of leaving certain sectors of society behind, and as a result will advocate for those communities to become internet dependent. Yet society can survive, pre- and post-pandemic, without mass tech integration, as we argued in our Great Reset book review.
“No single algorithm can find a solution for determining whether an AI would produce harm to the world.” – Max Planck Institute for Human Development
Our amazement at technology blinds us to its true power. Scientists from the Max Planck Institute for Human Development have already admitted that humans won’t be able to contain a super A.I. Programmers already deploy A.I. systems that perform tasks without anyone fully understanding how. Will this force governments to regulate and slow down A.I.? Or will we continue to let our curiosity lead us into inevitable oblivion?
In the meantime, our country is distracted by sports, entertainment, and overthrowing the “patriarchy.” What gender will the robots be? There are so many issues we’re advocating for that people don’t have time to think about a superintelligent machine takeover. Whatever the prevailing public opinion is, that is what the masses will follow. It’s sad to say, but the majority of people are followers, even in terms of their moral compass.
Furthermore, big data companies are major supporters of politicians, which keeps government regulation out of the way. Any superintelligent machines that do take over will have all of our data as part of their algorithms.
The Max Planck article goes on to discuss programming moral ethics into a superintelligent machine. Unfortunately, the scientists believe that it’s not as easy as it sounds.
“In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such algorithm cannot be built.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines.

– Max Planck Institute for Human Development
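The impossibility result the researchers describe echoes the classic halting-problem diagonalization from theoretical computer science. A rough sketch of the self-defeating logic, in Python: suppose a hypothetical oracle `would_harm` could decide, for any program, whether running it causes harm. A program built to consult that oracle about itself and then do the opposite makes any such oracle wrong. All names here are illustrative, not taken from the study itself.

```python
# A minimal sketch of the diagonalization argument, assuming a hypothetical
# oracle `would_harm(program)` that claims to predict whether a program is
# harmful. None of these names come from the Max Planck paper.

def make_contrarian(would_harm):
    """Build a program that does the opposite of whatever the oracle predicts."""
    def contrarian():
        if would_harm(contrarian):   # oracle says: "this program is harmful"
            return "behave safely"   # ...so it behaves safely instead
        else:                        # oracle says: "this program is safe"
            return "do harm"         # ...so it misbehaves instead
    return contrarian

# Any fixed oracle is wrong about its own contrarian. For example, an oracle
# that declares every program safe:
def naive_oracle(program):
    return False  # "safe" verdict for everything

c = make_contrarian(naive_oracle)
print(c())  # the "safe" verdict is exactly what lets it do harm
```

The same contradiction arises no matter how sophisticated the oracle is, which is why the quoted containment algorithm can end up halting its own operations rather than delivering a verdict.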
In conclusion, the article states that it’s impossible to even know when a superintelligent machine exists. There could be one right now, kept secret from the world. Who knows?