Edit: people have asked for a more concrete list of arguments I have found persuasive. A lot of these came from private discussions, but there are some publicly available essays that I have found compelling: This one debunks some common arguments. Michael Huemer is also very good for a brief intro to why you should not worry about AI. And Jose Ricon de la Puente here. And a short history of Artificial Intelligence for perspective.
Edit 2: this is not a post aimed at comprehensively debunking “doomer” arguments. Others have done that better than I could. It’s more of a personal journey story.
In the last couple of months I’ve had a lot of posts on X arguing with so-called “AI doomers.” That is, people who think there is a quick path from current LLMs to AGI and that we are all at serious risk of dying from AI. Doomerism comes in different flavours: at its extreme, it induces the kind of anxiety that makes people who would probably consider themselves techno-optimists and pro-freedom seriously say stuff like: “make it illegal for researchers to communicate technical knowledge about machine learning or AI; this includes publishing papers, engaging in informal conversations, tutoring, talking about it in a classroom.” I am confident that these ideas are pernicious, and I say so vocally on my X feed. To be clear: I think some form of “AI safety” is necessary. In my opinion, the right approach is the one Tyler Cowen describes in a recent Bloomberg article: pragmatic, empirically supported checks embedded in the technology itself and implemented as it is being developed. Just like we do with any other technology!
However, it hasn’t always been like this. I am at the point where I must come out about something I have been hiding: I used to be a doomer! Right after GPT-4 was released, I experienced about a month and a half of extreme anxiety. I had never engaged seriously with doomer arguments before: it was the first time I was taking long-time doomers like Eliezer seriously. His arguments seemed neat and plausible. And there was a social consensus around them: a lot of people on X were extremely bombastic, arguing that we’re all gonna die or, at the very least, that our jobs would become obsolete in no more than a year. My brain was fried: I would scroll past doom argument after doom argument after doom argument. In one of my worst moments, I asked my boyfriend to quit his job and become a construction worker (thankfully, he did not). Like many doomers, I like to think of myself as relatively immune to trendy but ultimately bad ideas. Except that I wasn’t.
I owe an incredible amount to people like Yann LeCun and Ben Recht, who were among the few voices on X arguing against the madness. Following them made me realize that AI experts did not unanimously believe in doom (in fact, most of them do not, despite what doomers say). If it is not already obvious, I do not agree with everything LeCun says, and he was more of a “gateway drug” than the be-all and end-all of anti-doom arguments.
I slowly started to go beyond the panic-induced consensus and engage seriously with the arguments for doom. Over a few months I came to realize just how many assumptions have to be true for the doomer scenario to happen. I had ignored what was in front of me and gotten carried away by the social consensus. Another thing that really opened my eyes was the weakness of the arguments in a domain I could scientifically assess: biorisk. Irresponsible claims about how much current models were increasing the ability of nefarious actors to design new viruses were going viral. For example, see this paper, very popular at the time, which made the rounds on X and did not even use Google as a control against ChatGPT, as was
correctly pointed out at the time.

We live in times when the cost of freaking out about things is not properly recognised: I call this the “Cultural Anxietying.” “Better safe than sorry,” the story goes. This is how we ended up locking kids out of schools because of COVID-19, with severe negative implications for their educational and social skills down the line. This is how we ended up delaying nuclear energy and exacerbating the problem of climate change. But the cost of freaking out is real. Getting people anxious for no good reason is bad. Overregulating for no reason is bad and steals from our future. Yet there is no penalty imposed on those who make us freak out about things and make bad decisions.
The tricky thing with freaking out is that people who do it often manage to convince themselves that the situation they are scared of is uniquely dangerous. That was the case for GMOs, for nuclear power, for COVID-19 and the list goes on. You will always find a group of people who think their latest worry is literally the end of the world. Through the rule of the engaged minority, they can impose their preferences on the rest of us. But the costs are real. They will always be real.
A lot of this piece is about how many doomers make poor arguments and overstate the consensus and the risks. I very much agree, and I think a lot of the soldier mindset and focus on unworkable solutions I’ve seen in AI risk discourse reminds me of some folks with another existential anxiety: climate change.
That being said, I do think that AI is the most consequential issue of our time and does pose serious existential risks, just on longer timelines than most doomers expect.
One of the biggest biases of a certain kind of Rationalist is that they think and talk a lot about things they find interesting, then argue that those things are as important as they are interesting. We have bigger, more pressing problems than Paperclip Machines, but they aren't as neato to talk about.