That is, I wasn’t viscerally worried. I had the concepts. But I didn’t have the “actually” part.
For me, having a concrete picture of the mechanism by which AI could actually kill everyone never felt necessary to viscerally believing that it could.
And I think this is because ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).
The idea that humanity would go on to live forever and colonize the galaxy and the universe and live a sci-fi future has always seemed too fantastical to me to assume as the default scenario. Sure it’s conceivable—I’ve never assumed it’s extremely unlikely—but I have always assumed that in the median scenario humanity somehow goes extinct before ever getting to make civilizations in hundreds of billions of star systems. What would make us go extinct? I don’t know. But to think otherwise would be to think that all of us today are super special (by being among the first 0.000...001% (a significant number of 0s) of humans to ever live). And that has always felt like an extraordinary thing to just assume, so my intuitive, gut, visceral belief has always been that we’ll probably go extinct somehow before achieving all that.
So when I learned about AI risk, I intellectually thought, “Ah, okay, I can see how something smarter than us that doesn’t share our goals could cause our extinction; so maybe AI is the thing that will prevent us from making civilizations on hundreds of billions of stars.”
I don’t know when I first formulated a credence that AI would cause doom, but I’m pretty sure I’ve viscerally felt that AI could cause human extinction ever since first hearing an argument that it could.
(The first time I heard an argument for AI risk was probably in 2015, when I read HPMOR and Superintelligence; I don’t recall knowing much at all about EY’s views on AI until Jan–Mar 2015, when I read /r/HPMOR and people there mentioned AI. Just looked it up, actually: from my Goodreads I see that I finished reading HPMOR on March 4th, 2015, 10 days before HPMOR finished coming out. I read Superintelligence in a span of a couple weeks and no doubt learned about it via a recommendation that stemmed from my reading of HPMOR. So Superintelligence was my first exposure to AI risk arguments. I didn’t read a lot of stuff online at that time; e.g., I don’t recall reading anything on LW.)