I don’t suppose either of you has read Petra Kosonen’s ‘Tiny probabilities and the value of the far future’? I found it a little hard to follow, but am I right in thinking that one of the main arguments is the following?
Even if we think this ‘time of perils’ is here to stay and assume a constant existential risk of 1 in 6 per century from now on (and no sci-fi space colonization or ‘digital people’ in the meantime), an individual’s donations to AI safety efforts are still more cost-effective than donations to AMF, since biological humanity’s expected lifespan on Earth only needs to be another ~250 years or more for that to hold.
(This assumes that current existential risk from AI is >0.1% over the next 100 years, that an additional $1bn would reduce that risk by >1% in relative terms, i.e. from 0.1% to 0.099%, and that we should treat an individual’s contribution to that $1bn as non-negligible, just as we do with an individual’s vote.)
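To sanity-check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. To be clear, this is my own simplification, not the paper’s model, and the inputs beyond the two assumptions above (world population, AMF’s cost per life-year) are illustrative guesses:

```python
# Back-of-the-envelope: at what expected future duration for humanity does a
# marginal $1bn to AI safety match AMF in cost per life-year saved?
# All inputs are illustrative assumptions, not figures from Kosonen's paper.

ai_risk_this_century = 0.001     # assumed AI existential risk over 100 years (>0.1%)
relative_reduction = 0.01        # assumed: $1bn buys a >1% relative risk reduction
spend = 1e9                      # the $1bn of funding
population = 8e9                 # rough current world population
amf_cost_per_life_year = 100.0   # hypothetical AMF cost per life-year saved ($)

# Absolute reduction in extinction probability bought by the $1bn.
abs_reduction = ai_risk_this_century * relative_reduction  # 1e-5

def ai_safety_cost_per_life_year(expected_future_years: float) -> float:
    """Cost per expected life-year saved, given humanity's expected future."""
    expected_life_years_saved = abs_reduction * population * expected_future_years
    return spend / expected_life_years_saved

# Break-even expected future: the point where AI safety matches AMF.
breakeven_years = spend / (abs_reduction * population * amf_cost_per_life_year)
print(f"Break-even expected future: {breakeven_years:.0f} years")  # 125 years
print(f"Cost per life-year with a 250-year future: "
      f"${ai_safety_cost_per_life_year(250):,.0f}")                # $50
```

On these made-up inputs the break-even future comes out at roughly a century or two, so the ~250-year threshold at least looks the right order of magnitude; and since an individual’s share of the $1bn scales linearly, the per-dollar comparison is unchanged.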
That might not be enough to convince most people who have very high confidence in person-affecting ethics, but it could be persuasive for some others with broadly ‘neartermist’ leanings.
Hi Ubuntu,

I’m not sure if you are already aware of it, but we featured a conversation with Petra in an early issue of our newsletter, where she discusses some of these topics (including probability discounting and its implications for longtermism). I mention it in case it helps clarify some of the claims she makes in the paper.
Thanks so much for this!