In fairness, you don’t need a high p|doom to think AI safety should be the no. 1 priority, provided you think that a) AI is a non-negligible extinction risk (say >0.1%), b) no other extinction risk has an equal combination of neglectedness and size, c) the expected value of the future, conditional on us not going extinct in the next 100 years, is astronomically high, and d) AI safety work makes a significant difference to how likely doom is to occur. None of these are innocent or obvious assumptions, but I think a lot of people in the community hold all four.

I consider myself a critic of doomers in one sense, because I suspect p|doom is under 0.1%, and I think once you get below that level, you should be nervous about taking expected value calculations that include your p|doom literally, because you probably don’t really know whether you should be at 0.09% or several orders of magnitude lower. But even I am not *sure* that this is not swamped by c). (The Bostrom Pascal’s Mugging case involves probabilities way below 1 in a million, never mind 1 in 1,000.)

Sometimes I get the impression, though, that some people think of themselves as anti-doomers when their p|doom is officially more like 1%. I think that’s a major error, if they really believe that figure. 1% is not low for human extinction. In fact, it’s not low even if you only care about currently existing people being murdered: in expectation that is 0.01 × 8 billion = 80 million deaths(!). Insofar as what is really going on is that people are, in their heart of hearts, much lower than 1% but don’t want to say so because it feels extreme, maybe this is OK. But if people actually mean figures like 1% or 5%, they ought to be basically on the doomers’ side, even if they think the very high p|doom estimates given by some doomers are extremely implausible.
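The expected-deaths arithmetic above can be sketched in a few lines of Python. This is just an illustration of the calculation, not an endorsement of any particular probability; the 8 billion population figure and the sample p|doom values are taken from, or in the spirit of, the paragraph above.

```python
# Expected deaths among currently existing people, if extinction kills
# everyone with probability p_doom. Population figure is the rough
# 8 billion used in the text above.
WORLD_POPULATION = 8_000_000_000

def expected_deaths(p_doom: float) -> float:
    """Expected number of deaths from an extinction event of probability p_doom."""
    return p_doom * WORLD_POPULATION

# Illustrative values: the 0.1% threshold, the 1% and 5% figures from the text.
for p in (0.001, 0.01, 0.05):
    print(f"p|doom = {p:.1%}: expected deaths = {expected_deaths(p):,.0f}")
# p|doom = 0.1%: expected deaths = 8,000,000
# p|doom = 1.0%: expected deaths = 80,000,000
# p|doom = 5.0%: expected deaths = 400,000,000
```

Even at the "low" 1% figure, the expectation is 80 million deaths, which is the point being made: 1% is not a small number in this context.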