Good question. Short answer: despite being an April Fools post, that post seems to encapsulate much of what Yudkowsky actually believes – so the social context is that the post is joking in its tone and framing, but not so much in the author’s actual attitude; sorry I can’t link to anything to further substantiate this. I believe Yudkowsky’s general policy is not to put numbers on his estimates.
Better answer: Here is a somewhat up-to-date database of existential-risk probability estimates from various people in the community. You’ll notice these are far below near-certainty.
One of the studies listed in the database is this one, in which a few researchers put the chance of doom quite high.
Thanks for the reply. I had no idea the spread was so wide (<2% to >98% in the last link you mentioned)!
I guess the nice thing about most of these estimates is that they are still well above the ridiculously low orders of magnitude that might prompt a sense of ‘wait, I should actually upper-bound my estimate of humanity’s future QALYs in order to avoid getting mugged by Pascal.’ It’s a pretty firm foundation for longtermism imo.
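For concreteness, here is a rough back-of-the-envelope sketch of that point; all the numbers are purely illustrative assumptions on my part, not figures taken from the database above:

```python
# Illustrative expected-value comparison (all numbers are made up for the
# sake of the example, not drawn from the database linked above).

future_qalys = 1e30  # hypothetical upper bound on humanity's future QALYs

# A "Pascalian" probability: so tiny that only the astronomical stakes
# keep the expected value from being negligible.
p_pascalian = 1e-20

# A probability in the range actually reported in the surveys (e.g. ~2%).
p_reported = 0.02

print(f"Expected QALYs at stake at p = {p_pascalian}: {p_pascalian * future_qalys:.3g}")
print(f"Expected QALYs at stake at p = {p_reported}:  {p_reported * future_qalys:.3g}")

# The second case doesn't rely on multiplying a huge payoff by a vanishingly
# small probability, so Pascal's-mugging worries don't really bite.
```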