The Billion Superintelligent Agent Thought Experiment
I have been thinking about what would actually happen when the singularity arrives, but in a more realistic, fleshed-out manner.
To start, suppose we let loose some intelligent agents, either accidentally or otherwise. To consider the maximum risk, let's say it's a billion agents, all independent of one another, of course. All of them are smarter than the median human, but not unrealistically so. Some of them (say ~3%) are completely malicious, but all are unaligned.
They are given unfettered access to the internet and substantial amounts of compute, with no supervision. We already know such agents can discover zero-day vulnerabilities, and we already know they can cooperate to perform tasks better than they could as single agents.
Based on this, what do you think is the worst-case scenario for the world?