Thanks for helping organise the donation events, Lizka!
In agreement with my comment last year, I made 97 % of this year's donations a few months ago to the Long-Term Future Fund (LTFF). However, I am now significantly less confident about existential risk mitigation being the best way to improve the world:
David Thorstad's posts, namely the ones on mistakes in the moral mathematics of existential risk, epistemics and exaggerating the risks, increased my general level of scepticism towards deferring to thought leaders in effective altruism before having engaged deeply with the arguments. It is not so much that I came across knock-down arguments against existential risk mitigation, but more that I became more willing to investigate the claims being made.
I noticed my tail risk estimates tend to go down as I investigate a topic. In the context of:
Climate risk, I was deferring to a mix of 80,000 Hours' upper bound of 0.01 % existential risk in the next 100 years, Toby Ord's best guess of 0.1 %, and John Halstead's best guess of 0.001 %. However, I looked a little more into John's report, and think it makes sense to put more weight on his estimate.
Nuclear risk, I was previously mostly deferring to Luisa's (great!) investigation for the effects on mortality, and to Toby Ord's 0.1 % existential risk in the next 100 years. However, I did an analysis suggesting both are quite pessimistic:
"My estimate of 12.9 M expected famine deaths due to the climatic effects of nuclear war before 2050 is 2.05 % of the 630 M implied by Luisa Rodriguez's results for nuclear exchanges between the United States and Russia, so I would say they are significantly pessimistic[3]" (a quick check of this ratio is sketched right after this list).
"Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050[58]".
AI risk, I noted I am not confident superintelligent AI disempowering humanity would necessarily be bad, and wonder whether the vast majority of technological progress will happen in the longterm future.
AI and bio risk, I suspect the risk of a terrorist attack causing human extinction is exaggerated.
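As a quick sanity check of the nuclear ratio quoted above, here is a minimal Python sketch; the 12.9 M and 630 M figures are simply copied from the quotes (no new data), and the rounding follows the text.

```python
# Minimal check of the quoted comparison (figures copied from the text, no new data).
my_expected_famine_deaths = 12.9e6  # expected famine deaths before 2050 (my estimate)
implied_by_luisa = 630e6            # deaths implied by Luisa Rodriguez's results (US-Russia exchange)

ratio = my_expected_famine_deaths / implied_by_luisa
print(f"{ratio:.2%}")  # 2.05%
```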
I said 97 % above rather than 100 % because I have just made a small donation to the EA Forum Donation Fund[1], distributing my votes fairly similarly across the LTFF, Animal Welfare Fund, and Rethink Priorities. LTFF may still be my top option, so I might have put all votes on LTFF (related dialogue). On the other hand:
I was more inclined to support Rethink's (great!) work on the CURVE sequence (whose 1st post went out about 1 month after I made my big annual donation). I think it is stimulating some great discussion on cause prioritisation, and might (I hope!) eventually influence Open Phil's allocation.
I agree animal welfare should be receiving more resources, and wanted to signal my support. Also, even though I am all in for fanaticism in principle (not in practice), I just feel it is nice to donate to something reducing suffering in a surer way now and then!
Side note. No donation icon showed up after my donation. Not sure whether one is supposed to appear. Update: you have to DM @EA Forum Team.
Meanwhile, I have updated further away from existential risk mitigation. I only plan to donate late in the year, but, if I were to do it now, I would go for the best animal welfare interventions (e.g. the ones recommended by Giving What We Can) instead of LTFF. On top of what I said above:
I think extinction risk from wars, nuclear wars, asteroids and comets, and supervolcanoes is astronomically low, and has often been greatly overestimated in the effective altruism community (see comparison with Toby Ord's estimates).
Even conditional on a nuclear/volcanic/impact winter causing human extinction, I believe the probability of not fully recovering would only be 0.0513 % (relatedly). I guess this would be even lower for a pandemic not involving advanced AI, as it would arguably not cause the extinction of as many species along humans' past evolutionary path.
I have become more sceptical about bio extinction risk after:
Reading more posts from David Thorstad's series on bio risk, and skimming some of the linked sources.
Getting a sense that the cost-effectiveness of solutions to mitigate bio risk is often overestimated.
Listening to Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons.
Having a negative impression of the methodology used in Appendices 1 and 2 of this report to estimate the probability of wildfire and stealth pandemics[1]. This is not ideal, because I am not aware of many attempts to estimate bio risk, and I tend to put more weight on quantitative estimates. Millett 2017 is another such quantitative attempt, and I agree with David Thorstad that it has serious flaws (for example, it does not account for tail risk usually decaying faster as severity increases).
I feel like the power of governments to mitigate global catastrophic risk, if they perceive there is such a risk, is often underestimated.
It is unclear to me whether tail risk is neglected in the relevant sense.
To illustrate, I commented that:
If the goal is saving lives, spending should a priori be proportional to the product of deaths and their probability density function (PDF). If deaths follow a Pareto distribution, such a product will be proportional to "deaths"^-alpha, where alpha is the tail index.
"deaths"^-alpha decreases as deaths increase, so there should be less spending on more severe catastrophes. Consequently, I do not think one can argue for greater spending on more severe catastrophes just based on it currently being much smaller than that on milder ones.
For example, for conflict deaths, alpha is "1.35 to 1.74, with a mean of 1.60", which means spending should a priori be proportional to "deaths"^-1.6. This suggests spending to decrease deaths in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large (a minimal numeric sketch of this calculation follows after this list).
In reality, saving lives in more severe catastrophes should be weighted more heavily. However, it looks like saving lives in normal times is better for improving the longterm future than doing so in catastrophes.
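To make the a priori heuristic above concrete, here is a minimal sketch, assuming deaths follow a Pareto distribution with tail index alpha (so the PDF is proportional to deaths^-(alpha + 1)); alpha = 1.6 is the mean value for conflict deaths quoted above, and the numbers only reproduce the arithmetic in the comment.

```python
# A priori spending heuristic under a Pareto tail (sketch; reproduces the arithmetic above).
# If deaths d have PDF f(d) proportional to d^-(alpha + 1), then d * f(d) is proportional
# to d^-alpha, so a priori spending scales as d^-alpha.

ALPHA = 1.6  # mean tail index for conflict deaths ("1.35 to 1.74, with a mean of 1.60")

def relative_spending(severity_ratio: float, alpha: float = ALPHA) -> float:
    """A priori spending on catastrophes severity_ratio times as deadly,
    relative to spending on the milder ones."""
    return severity_ratio ** (-alpha)

# Wars 1 k times as deadly: (10^3)^(-1.6) = 10^-4.8, i.e. about 0.00158 %.
print(f"{relative_spending(1e3):.5%}")  # 0.00158%
```

As the point just above notes, this is only the a priori allocation; weighting lives in more severe catastrophes more heavily would push the figure back up.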
I found the submissions of the winners of the 2023 Open Philanthropy AI Worldviews Contest quite compelling.
I very much agree with Matthew Barnett's points about human disempowerment due to advanced sentient AI not being obviously bad (relatedly). To illustrate:
Humans currently have control over the future, just as an advanced misaligned AI about to cause human extinction would.
Humans have caused the extinction of many less powerful species, arguably without posing any meaningful existential risk in the process.
I have been going through posts tagged under AI risk skepticism, and finding some of the arguments for lower risk quite good.
Kevin Esvelt discussed wildfire and stealth pandemics on The 80,000 Hours Podcast.