I am now significantly less confident about existential risk mitigation being the best way to improve the world
Meanwhile, I have updated further away from existential risk mitigation. I only plan to donate late in the year, but, if I were to do it now, I would go for the best animal welfare interventions (e.g. the ones recommended by Giving What We Can) instead of LTFF. On top of what I said above:
I think extinction risk from wars, nuclear wars, asteroids and comets, and supervolcanoes is astronomically low, and has often been greatly overestimated in the effective altruism community (see comparison with Toby Ord's estimates).
Even conditional on a nuclear/volcanic/impact winter causing human extinction, I believe the probability of not fully recovering would only be 0.0513 % (relatedly). I guess this would be even lower for a pandemic not involving advanced AI, as it would arguably not lead to so many extinctions in humans' past evolutionary path.
I became more sceptical about bio extinction risk after:
Reading more posts of David Thorstad's series on bio risk, and skimming some of the linked sources.
Getting a sense that the cost-effectiveness of solutions to mitigate bio risk is often overestimated.
Listening to Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons.
Having a negative impression of the methodology used in Appendices 1 and 2 of this report to estimate the probability of wildfire and stealth pandemics[1]. This is not ideal because I am not aware of many attempts to estimate bio risk, and I tend to put more weight on quantitative estimates. Millett 2017 is another such quantitative attempt, and I agree with David Thorstad that it has serious flaws (for example, it does not account for tail risk usually decaying faster as severity increases).
I feel like the power of governments to mitigate global catastrophic risk, if they perceive there to be such a risk, is often underestimated.
It is unclear to me whether tail risk is neglected in the relevant sense.
To illustrate, I commented that:
If the goal is saving lives, spending should a priori be proportional to the product of deaths and their probability density function (PDF). If deaths follow a Pareto distribution with tail index alpha, the PDF is proportional to deaths^-(alpha + 1), so this product is proportional to deaths^-alpha.
deaths^-alpha decreases as deaths increase, so there should be less spending on more severe catastrophes. Consequently, I do not think one can argue for greater spending on more severe catastrophes just based on it currently being much smaller than that on milder ones.
For example, for conflict deaths, alpha is "1.35 to 1.74, with a mean of 1.60", which means spending should a priori be proportional to deaths^-1.6. This suggests spending to decrease deaths in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large, as illustrated in the sketch after this list.
In reality, saving lives in more severe catastrophes should be weighted more heavily. However, it looks like saving lives in normal times is better for improving the longterm future than saving lives in catastrophes.
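To make the arithmetic above concrete, here is a minimal Python sketch of the a priori spending argument under a Pareto tail. The function name `relative_spending` is just for illustration; the values 1.6 and 1 k are the ones quoted above.

```python
# Minimal sketch, assuming catastrophe deaths follow a Pareto (power-law)
# distribution with tail index alpha.
# Pareto PDF: f(d) is proportional to d^-(alpha + 1), so the product
# d * f(d) is proportional to d^-alpha, which is what spending should
# a priori be proportional to.

def relative_spending(severity_ratio: float, alpha: float = 1.6) -> float:
    """A priori spending on catastrophes `severity_ratio` times as deadly,
    relative to spending on the milder catastrophes."""
    return severity_ratio ** (-alpha)

# Wars 1 k times as deadly, with alpha = 1.6 (the mean tail index for
# conflict deaths quoted above).
ratio = relative_spending(1e3, alpha=1.6)
print(f"{ratio:.3e}")          # 1.585e-05
print(f"{100 * ratio:.5f} %")  # 0.00158 % as large
```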
I found the submissions of the winners of the 2023 Open Philanthropy AI Worldviews Contest quite compelling.
I very much agree with Matthew Barnett's points about human disempowerment due to advanced sentient AI not being obviously bad (relatedly). To illustrate:
Humans currently have control over the future, just as an advanced misaligned AI about to cause human extinction would.
Humans have caused the extinction of many less powerful species, arguably without posing any meaningful existential risk in the process.
I have been going through posts tagged under AI risk skepticism, and finding some of the arguments for lower risk quite good.
[1] Kevin Esvelt discussed wildfire and stealth pandemics on The 80,000 Hours Podcast.