I think if EAs better appreciated uncertainty when prioritising causes, people’s careers would span a wider range of cause areas.
I’ve got a strong intuition that this is wrong, so I’m trying to think it through.
To argue that EAs underestimate uncertainty, you need to directly observe their uncertainty estimates (and know the correct level of uncertainty to have). For example, if the community were homogeneous and everyone assigned a 1% chance to Cause X being the most important issue (I’m deliberately trying not to deal with how to measure this) and a 99% chance to Cause Y being the most important issue, then all individuals would choose to work on Cause Y. If the probabilities were 5% X and 95% Y, you’d get the same outcome. This is because individuals are making single choices.
Now, if there were a central body coordinating everyone’s efforts, in the first scenario it still wouldn’t follow that 1% of people would get allocated to Cause X. Optimal allocation strategy aside, there isn’t this clean relationship between uncertainty and decision rules.
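To make the contrast concrete, here’s a minimal sketch of the two decision rules being discussed (the causes, credences and headcount are made-up illustrative numbers): each individual working on the single most likely top cause, versus a central body splitting a fixed pool of people across causes.

```python
# Minimal sketch of the two decision rules discussed above.
# Causes, credences and headcount are made-up illustrative numbers.

credences_a = {"Cause X": 0.01, "Cause Y": 0.99}
credences_b = {"Cause X": 0.05, "Cause Y": 0.95}

def individual_choice(credences):
    """Each person independently works on the single most likely top cause,
    so the outcome is the same whether Cause X is at 1% or 5%."""
    return max(credences, key=credences.get)

def central_allocation(credences, n_people=100):
    """One possible central rule: split people in proportion to credence.
    (Not necessarily optimal; it just shows that allocation need not mirror
    the individual argmax choice.)"""
    return {cause: round(p * n_people) for cause, p in credences.items()}

print(individual_choice(credences_a))   # Cause Y
print(individual_choice(credences_b))   # Cause Y (same outcome)
print(central_allocation(credences_a))  # {'Cause X': 1, 'Cause Y': 99}
```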
“I think 80 000 Hours could emphasise uncertainty more, but also that the EA community as a whole just needs to be more conscious of uncertainty in cause prioritisation.”
I think 80k is already very conscious of this (based on my general sense of 80k materials). Global priorities research is one of their four highest-priority areas, and it’s precisely about having more confidence about what the top priority is.
I think something that would help me understand where you are coming from is to hear more about what you think the decision rules are for most individuals, how they are taking their uncertainty into account, and more precisely how gender/culture interacts with cause-area uncertainty in shaping decisions.
From one of my other comments:
“The way I’m thinking about it is that 80K have used some frameworks to come up with quantitative scores for how pressing each cause area is, and then ranked the cause areas by the point estimates.
But our imagined confidence intervals around the point estimates should be very large and presumably overlap for a large number of causes, so we should take seriously the idea that the ranking of causes would be different in a better model.
This means we need to take more seriously the idea that the true top causes are different to those suggested by 80K’s model.”
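As a rough illustration of that point, here is a minimal Monte Carlo sketch (the cause names, point estimates and spread are all made up) of how often the cause ranked first by point estimate fails to come out on top once you allow for wide, overlapping uncertainty:

```python
import random

# Made-up point estimates of how pressing each cause is, with a wide
# spread so the implied intervals overlap heavily.
point_estimates = {"Cause A": 10.0, "Cause B": 9.0, "Cause C": 8.5}
spread = 2.0  # standard deviation of the noise around each estimate

def sampled_top_cause():
    """Resample every cause's score and return whichever comes out on top."""
    draws = {c: random.gauss(mu, spread) for c, mu in point_estimates.items()}
    return max(draws, key=draws.get)

trials = 10_000
counts = {c: 0 for c in point_estimates}
for _ in range(trials):
    counts[sampled_top_cause()] += 1

for cause, n in counts.items():
    print(f"{cause}: top in {n / trials:.0%} of draws")
# With this much overlap, Cause A is ranked first by point estimate but is
# often not the top cause in a given draw.
```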
So I think EAs should approach the uncertainty about what the top cause is by spending more time individually thinking about cause prioritisation, and by placing more attention on personal fit in career choices. I think this would produce a distribution of career focuses that is less concentrated in randomista development, animal welfare, meta-EA and biosecurity.
With gender, the 2020 EA Survey shows that male EAs are less likely to prioritise near-term causes than female EAs. So it seems likely that if EA was 75% female instead of 75% male, the distribution of career focuses of EAs would be different, which indicates some kind of model error to me.
With culture, I mentioned that I expect unknown unknowns here, but another useful thought experiment would be—how similar would EA’s cause priorities and rankings of cause priorities be if it emerged in India, or Brazil, or Nigeria, instead of USA / UK? For example, it seems plausible to me that we value animal welfare less than an EA movement with more Hindu / Buddhist cultural influences would, or that we prioritise promoting liberal democracy less than an imagined EA movement with more influence from people in less democratic countries. Also, maybe we value improving balance and harmony less than an EA movement that originated in Japan would, which could affect cause prioritisation.
Thanks for clarifying. I’m an example of someone in that position (I’m trying to work out how to contribute via direct work to a cause area), so I appreciate the opportunity to discuss the topic.
Upon reflection, maybe the crux of my disagreement here is that I just don’t agree that the uncertainty is wide enough to affect the rankings (except within each tier) or to make the direct-work decision rule robust to personal fit.
I think that X-risks have non-overlapping confidence intervals with non-x-risks because of the scale of the problem, and I don’t feel like this changes from a near-term perspective. Even small chances of major catastrophic events this century seem to dwarf other problems.
80k’s second-highest priority areas are Nuclear security, Climate change (extreme) and Improving institutional decision-making. The first two seem to be associated with major catastrophes (maybe not x-risks), which also might be considered not to overlap with the next set of issues (factory farming / global health).
With respect to concerns that demographics might be heavily affecting cause prioritisation, I think it would be helpful to have specific examples of causes you think are under-estimated and the biases associated with them.
For example, I’ve heard lots of different arguments that x-risks are concerning even if you don’t buy into long-termism. To a similar end, I can’t think of any causes that would be under-valued because of not caring adequately about balance/harmony.
“I think that X-risks have non-overlapping confidence intervals with non-x-risks because of the scale of the problem, and I don’t feel like this changes from a near-term perspective. Even small chances of major catastrophic events this century seem to dwarf other problems.”
If you agree with the astronomical waste argument for longtermism, then this is true. But within x-risks, for example, I imagine that the confidence intervals for different x-risks probably overlap.
So as an imaginary example, I think it’d be suboptimal if all EAs worked on AI safety, or only on AI safety and engineered pandemics, and no EAs were working on nuclear war, supervolcanoes or climate change.
And back in the real world (without data on this), I’d guess that we currently have fewer EAs working on supervolcanoes than is optimal.
“With respect to concerns that demographics might be heavily affecting cause prioritisation, I think it would be helpful to have specific examples of causes you think are under-estimated and the biases associated with them.”
I think there are unknown unknowns here, but here is a concrete example which I offered above:
“For example, it seems plausible to me that we value animal welfare less than an EA movement with more Hindu / Buddhist cultural influences would.”
“For example, I’ve heard lots of different arguments that x-risks are concerning even if you don’t buy into long-termism.”
If you don’t buy longtermism, you probably still care about x-risks, but your rejection of longtermism massively affects the relative importance of x-risks compared to near-term problems, which affects cause prioritisation.
Similarly, I don’t expect diversity of thought to introduce entirely new causes to EA or lead to current causes being entirely abandoned, but I do expect it to affect cause prioritisation.
I don’t entirely understand what East Asian cultures mean by balance/harmony, so I can’t tell how it would affect cause prioritisation; I just think there would be an effect.
Sorry for the slow reply.
Talking about allocation of EAs to cause areas
I agree that confidence intervals between x-risks are more likely to overlap. I haven’t really looked into supervolcanoes or asteroids, and I think that’s because what I know about them currently doesn’t lead me to believe they’re worth working on over AI or biosecurity.
Possibly, a suitable algorithm would be to defer to/check with prominent EA organisations like 80k to see if they are allocating 1 in every 100 or every 1000 EAs to rare but possibly important x-risks. Without a coordinated effort by a central body, I don’t see how you’d calibrate adequately (use a random number generator and if the number is less than some number, work on a neglected but possibly important cause?).
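A minimal sketch of that random-number idea, under made-up credences that each cause turns out to be the top priority (this is just the lottery intuition, not a worked-out allocation rule):

```python
import random

# Made-up credences that each cause turns out to be the most important
# (assumed to sum to 1 for this sketch).
credences = {
    "AI safety": 0.50,
    "Biosecurity": 0.30,
    "Nuclear security": 0.15,
    "Supervolcanoes": 0.05,
}

def pick_cause(credences):
    """Draw a uniform random number and walk the cumulative distribution,
    so roughly 1 in 20 people land on a cause given 5% credence."""
    r = random.random()
    cumulative = 0.0
    for cause, p in credences.items():
        cumulative += p
        if r < cumulative:
            return cause
    # Fallback in case floating-point rounding leaves r just above the total.
    return max(credences, key=credences.get)

print(pick_cause(credences))
```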
My thoughts on EA allocation to cause areas have evolved quite a bit recently (partly due to talking to 80k and others, mainly in biosecurity). I’ll probably write a post with my thoughts, but the bottom line is that the sentiment expressed here is basically correct, and that it’s socially easier to express humility in the form of saying you have high uncertainty.
Responding to the spirit of the original post, my general sense is that plenty of people are not highly uncertain about AI-related x-risk (you might have gotten that email from 80k titled “A huge update to our problem profile — why we care so much about AI risk”). That being said, they’re still using phrases like “we’re very uncertain”. Maybe their uncertainty about some relevant facts is lower than the threshold their decision rule requires. For example, in the problem profile, they write:
Overall, our current take is that AI development poses a bigger threat to humanity’s long-term flourishing than any other issue we know of.
Different Views under Near-Termism
“If you don’t buy longtermism, you probably still care about x-risks, but your rejection of longtermism massively affects the relative importance of x-risks compared to near-term problems, which affects cause prioritisation.”
This seems tempting to believe, but I think we should substantiate it. Which current x-risks are not ranked higher than non-x-risk causes (or how much less of a lead do they have) from a near-term perspective?
I think this post provides a somewhat detailed summary of how your views may change when moving from a long-termist to a near-termist perspective. Scott says:
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice.
His arguments here are convincing because I find an AGI event this century likely. If you didn’t, then you would disagree. Still, I think that even if AI timelines were not short, other existential risks like engineered pandemics, supervolcanoes or asteroids might have milder, merely catastrophic variants, which near-termists would prioritise just as highly, leading to little practical variation in what people work on.
Talking about different cultures and EA
“Similarly, I don’t expect diversity of thought to introduce entirely new causes to EA or lead to current causes being entirely abandoned, but I do expect it to affect cause prioritisation.”
“I don’t entirely understand what East Asian cultures mean by balance/harmony, so I can’t tell how it would affect cause prioritisation; I just think there would be an effect.”
Can you reason out how “there would be an effect”?