So I’m an example of someone in that position (I’m trying to work out how to contribute via direct work to a cause area), so I appreciate the opportunity to discuss the topic.
Upon reflection, maybe the crux of my disagreement here is that I just don’t agree that the uncertainty is wide enough to affect the rankings (except within each tier) or to make the direct-work decision rule robust to personal fit.
I think that x-risks have non-overlapping confidence intervals with non-x-risks because of the scale of the problem, and I don’t feel like this changes from a near-term perspective. Even small chances of major catastrophic events this century seem to dwarf other problems.
80k’s second-tier priority areas are nuclear security, extreme climate change, and improving institutional decision-making. The first two seem to be associated with major catastrophes (maybe not x-risks), which also might be considered not to overlap with the next set of issues (factory farming / global health).
With respect to concerns that demographics might be heavily affecting cause prioritisation, I think it would be helpful to have specific examples of causes you think are underestimated and the biases associated with them.
For example, I’ve heard lots of different arguments that x-risks are concerning even if you don’t buy into long-termism. To a similar end, I can’t think of any causes that would be undervalued because of not caring adequately about balance/harmony.
“I think that x-risks have non-overlapping confidence intervals with non-x-risks because of the scale of the problem, and I don’t feel like this changes from a near-term perspective. Even small chances of major catastrophic events this century seem to dwarf other problems.”
If you agree with the astronomical waste argument for longtermism, then this is true. But within x-risks, for example, I imagine that the confidence intervals for different x-risks probably overlap.
So as an imaginary example, I think it’d be suboptimal if all EAs worked on AI safety, or only on AI safety and engineered pandemics, and no EAs were working on nuclear war, supervolcanoes or climate change.
And back in the real world (without data on this), I’d guess that we currently have fewer EAs working on supervolcanoes than would be optimal.
“With respect to concerns that demographics might be heavily affecting cause prioritisation, I think it would be helpful to have specific examples of causes you think are underestimated and the biases associated with them.”
I think there are unknown unknowns here, but here is a concrete example, which I offered above:
“For example, it seems plausible to me that we value animal welfare less than an EA movement with more Hindu / Buddhist cultural influences would.”
“For example, I’ve heard lots of different arguments that x-risks are concerning even if you don’t buy into long-termism.”
If you don’t buy longtermism, you probably still care about x-risks, but your rejection of longtermism massively affects the relative importance of x-risks compared to near-term problems, which affects cause prioritisation.
Similarly, I don’t expect diversity of thought to introduce entirely new causes to EA or lead to current causes being entirely abandoned, but I do expect it to affect cause prioritisation.
I don’t entirely understand what East Asian cultures mean by balance/harmony, so I can’t tell how it would affect cause prioritisation; I just think there would be an effect.
Sorry for the slow reply.

Talking about allocation of EAs to cause areas

I agree that confidence intervals between x-risks are more likely to overlap. I haven’t really looked into supervolcanoes or asteroids, and I think that’s because what I know about them currently doesn’t lead me to believe they’re worth working on over AI or biosecurity.
Possibly, a suitable algorithm would be to defer to or check with prominent EA organisations like 80k to see whether they are allocating 1 in every 100 or 1,000 EAs to rare but possibly important x-risks. Without a coordinated effort by a central body, I don’t see how you’d calibrate adequately (use a random number generator, and if the number falls below some threshold, work on a neglected but possibly important cause?).
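The lottery rule in the parenthetical could be sketched as below. This is a hypothetical toy model, not a real proposal: the 1-in-100 fraction echoes the figures above, and the cause names are just the examples used in this thread.

```python
import random

# Toy sketch of the lottery rule described above: each person draws a
# random number, and if it falls below a target fraction they work on a
# neglected but possibly important cause instead of their default one.
# The fraction and cause names are illustrative assumptions.
NEGLECTED_FRACTION = 0.01  # i.e. 1 in every 100 EAs

def choose_cause(default_cause, neglected_causes, rng):
    """Return the default cause unless the draw falls below the target
    fraction, in which case pick a neglected cause uniformly at random."""
    if rng.random() < NEGLECTED_FRACTION:
        return rng.choice(neglected_causes)
    return default_cause

# Across many independent draws, roughly 1% of people end up on the
# neglected causes, with no central coordination needed.
rng = random.Random(0)
choices = [choose_cause("AI safety", ["supervolcanoes", "asteroids"], rng)
           for _ in range(10_000)]
neglected_share = sum(c != "AI safety" for c in choices) / len(choices)
```

One appeal of a rule like this is that it needs no central allocator: if everyone independently follows the same rule with the same fraction, the aggregate allocation roughly matches the target, which is exactly the calibration problem raised above.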
My thoughts on EA allocation to cause areas have evolved quite a bit recently (partly due to talking to 80k and others, mainly in biosecurity). I’ll probably write a post with my thoughts, but the bottom line is that the sentiment expressed here is basically correct, and that it’s socially easier to express humility by saying you have high uncertainty.
Responding to the spirit of the original post, my general sense is that plenty of people are not highly uncertain about AI-related x-risk: you might have gotten the email from 80k titled “A huge update to our problem profile — why we care so much about AI risk”. That being said, they’re still using phrases like “we’re very uncertain”. Maybe their uncertainty about some relevant facts is lower than their decision rule would require. For example, in the problem profile, they write:
“Overall, our current take is that AI development poses a bigger threat to humanity’s long-term flourishing than any other issue we know of.”
Different Views under Near-Termism
“If you don’t buy longtermism, you probably still care about x-risks, but your rejection of longtermism massively affects the relative importance of x-risks compared to near-term problems, which affects cause prioritisation.”
This seems tempting to believe, but I think we should substantiate it. Which current x-risks are no longer ranked higher than non-x-risk causes from a near-term perspective, or how much smaller is their lead?
I think this post offers a somewhat detailed summary of how your views may change when moving from a long-termist to a near-termist perspective. Scott says:
“Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice.”
His arguments here are convincing to me because I find an AGI event this century likely; if you didn’t, you would disagree. Still, I think that even if AI timelines were not short, other existential risks like engineered pandemics, supervolcanoes or asteroids have milder, merely catastrophic variants, which near-termists would prioritise similarly, leading to little practical variation in what people work on.
Talking about different cultures and EA
“Similarly, I don’t expect diversity of thought to introduce entirely new causes to EA or lead to current causes being entirely abandoned, but I do expect it to affect cause prioritisation.
I don’t entirely understand what East Asian cultures mean by balance/harmony, so I can’t tell how it would affect cause prioritisation; I just think there would be an effect.”
Can you reason out how “there would be an effect”?