In the AI-risk community, people seem to spend a lot of time talking about the details of how things might go badly. This is a good thing. Discussing details helps us think about how big a risk we are facing and how to mitigate it, and I sometimes find that people hold substantially different broad views that they likely would not have noticed without discussing the details. What surprises me is how often people want to hear about details before assigning any substantial probability to the threat from AI.
Leó Szilárd was one of the Hungarian physicists involved in the Manhattan Project. He seems to have been the first person to notice the possibility of a nuclear chain reaction, which could release vastly more energy per mass than a chemical reaction. In The Making of the Atomic Bomb, Richard Rhodes describes the moment that this possibility occurred to Szilárd while he was out walking in London:
The stoplight changed to green. Szilard stepped off the curb. As he crossed the street time cracked open before him and he saw a way to the future, death into the world and all our woes, the shape of things to come.
I do not know how literally to take this quote, but my impression is that Szilárd really did think of the possibility of a self-sustaining nuclear reaction, and promptly started worrying about humanity’s future. He did not wait until the science had advanced to the point that we knew what form nuclear devices would take, nor did he wait to see how the political landscape changed over the coming decade. Once he knew there was a strong possibility that humanity would soon gain access to a new source of energy many times denser than any we’d previously had, he knew that we should be worried, and he started taking actions to help humanity get through what would likely be a rough time.
Sometimes the details matter. If you told me that your grandmother received her social security check, I would not be concerned. I do not have any particular reason to think that the world is substantially different now that your grandmother has received her check. If you then explained that she has a murderous hatred for me and that the only thing that had been stopping her from taking action was that she did not have enough money to purchase a gun and a bus ticket, I might start worrying.
Suppose instead you tell me that your grandmother has been in a car accident. She’s in the hospital and it’s not looking good. I start praying to God for a quick recovery, except that I misspeak and pray to Gord instead. Five minutes later, you call me explaining that your grandmother has made a miraculous recovery, and nobody understands why. Wondering if it was my praying to Gord that did it, I try praying to Gord for a bean burrito, and find that a burrito has materialized on the table in front of me.
If this happens, you should be utterly terrified. The world does not work as expected, and I have exercised a power that, as far as you know, no human has previously accessed. The details of this power are important for understanding just how much trouble we’re in and how we can make the best of it, or at least avoid the worst, but whether you should be worried does not depend on them.
The difference between these scenarios is that in the latter case, there is a substantial new capability that the world has not previously had to deal with. We should expect the world to be very different with a wish-granting Gord, and until we understand how to steer the world as it changes, we should be very concerned.
I feel similarly about risk from artificial intelligence. I understand that it takes a little explaining to see why AI might represent a huge, novel, transformative capability. And, again, the details are important (for example, there are good arguments for why AI might be a particularly difficult technology to point in the right direction, and I think that recent developments suggest that AI will accelerate other technologies, some of which could be quite dangerous). But I do not understand why some people remain so skeptical that there is a risk once they accept that AI could plausibly represent such a jump in capability. Like nuclear chain reactions, synthetic biology, and wish-granting Gords, AI should scare us well before we have all the details on how it works or how it might go badly.