I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that expert opinion skews towards longtermist-motivated or X-risk work (maybe 70–80%).
I would love someone to do a proper survey of everyone (trying to avoid one’s own personal networks) who has spent >1 year thinking about cause prioritization with a scope-sensitive and open-minded lens. I’ve tried to commission someone to do this a couple of times but it hasn’t worked out. If someone did this, it would help to shape our content, so I’d be happy to offer some advice and could likely find funding. If anyone is interested, let me know!
Thank you for wanting to be principled about such an important issue. However (speaking as someone who is both very strongly longtermist and a believer in the importance of cause prioritization), a core problem with the “neutrality”/expert-views framing of this comment is selection bias. We would naively expect people who spend a lot of time on cause prioritization to systematically overrate (relative to the broader community) both the non-obviousness of the most important causes and their esotericism.
Put another way, if you were a thoughtful, altruistic person who heard about EA in 2013 and your first instinct was to start what would become Wave or to earn-to-give for global poverty, you’d be systematically underrepresented in such a survey.
Now, I happen to think focusing a lot on cause prioritization is correct: I think ethics is hard, in many weird and surprising ways. But I don’t think I can (justifiably) get this from expert appeal/deference; it all comes down to specific beliefs I have about the world and how hard it is, and to some degree to making specific bets that my own epistemology isn’t completely screwed up (because if it is, I probably can’t have much of an impact anyway).
Analogously, I also think we should update approximately not at all on the existence of God if we see surveys showing that philosophers of religion are much more likely to believe in God than other philosophers, or that ethicists are more likely to be deontologists than utilitarians.
I agree that all sorts of selection biases are going to be at play in this sort of project: the methodology would be a minefield and I don’t have all the answers.
I agree that there’s going to be a selection bias towards people who think cause prio is hard. Honestly, I guess I also believe that ethics is hard, so I was basically assuming that worldview. But maybe this is a very contentious position? I’d be interested to hear from anyone who thinks that cause prio is just really easy.
More generally, I agree that I/CEA can’t just defer our way out of this problem or other problems: you always need to choose the experts or the methodology or whatever. But, partly because ethics seems hard to me, I feel better about something like what I proposed, rather than just going with our staff’s best guess (when we mostly haven’t engaged deeply with all of the arguments).
I agree that there’s going to be a selection bias towards people who think cause prio is hard.
To be more explicit, there’s also a selection bias towards esotericism: that is, towards people who think most of the work is not “done for you” by the rest of the world (e.g. in development economics or moral philosophy), and that you instead need to come up with the frameworks yourself.
As a side note, I think there’s an analogous selection bias within longtermism, where many of our best and brightest people end up doing technical alignment, making it harder to think clearly about other longtermist issues (including issues directly related to making the development of transformative AI go well, like understanding the AI strategic landscape and AI safety recruitment strategy).