That is, I wasn't viscerally worried. I had the concepts. But I didn't have the "actually" part.
For me, I don't think having a concrete picture of the mechanism for how AI could actually kill everyone was ever necessary for viscerally believing that AI could kill everyone.
And I think this is because, ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).
The idea that humanity would go on to live forever and colonize the galaxy and the universe and live a sci-fi future has always seemed too fantastical to me to assume as the default scenario. Sure it's conceivable (I've never assumed it's extremely unlikely), but I have always assumed that in the median scenario humanity somehow goes extinct before ever getting to make civilizations in hundreds of billions of star systems. What would make us go extinct? I don't know. But to think otherwise would be to think that all of us today are super special (by being among the first 0.000...001% (a significant number of 0s) of humans to ever live). And that has always felt like an extraordinary thing to just assume, so my intuitive, gut, visceral belief has always been that we'll probably go extinct somehow before achieving all that.
So when I learned about AI risk I intellectually thought, "Ah, okay, I can see how something smarter than us that doesn't share our goals could cause our extinction; so maybe AI is the thing that will prevent us from making civilizations on hundreds of billions of stars."
I don't know when I first formulated a credence that AI would cause doom, but I'm pretty sure I have viscerally felt that AI could cause human extinction ever since first hearing an argument that it could.
(The first time I heard an argument for AI risk was probably in 2015, when I read HPMOR and Superintelligence; I don't recall knowing much at all about EY's views on AI until Jan-Mar 2015, when I read /r/HPMOR and people mentioned AI. I think reading Superintelligence the same year I read HPMOR was roughly the first time I thought about AI risk. Just looked it up, actually: from my Goodreads I see that I finished reading HPMOR on March 4th, 2015, 10 days before HPMOR finished coming out. I read Superintelligence over a span of a couple of weeks and no doubt learned about it via a recommendation that stemmed from my reading of HPMOR. So Superintelligence was my first exposure to AI risk arguments. I didn't read a lot of stuff online at that time; e.g. I didn't read anything on LW that I can recall.)