I agree wholeheartedly with this! Strong upvote from me.
I agree that cause prioritization research in EA focuses almost entirely on utilitarian and longtermist views. There’s substantial diversity of ethical theories within this space, but I bet that most of the world’s population are not longtermist utilitarians. I’d like to see more research trying to apply cause prioritization to non-utilitarian worldviews such as ones that emphasize distributive justice.
One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, and support research. Those who do not do research seem to be trying to accumulate power, wealth, or CV points in the vague hope that at some point the researchers will know what needs doing.
Fully agree, but I think it’s ironic (in a good way) that your proposed solution is “more global priorities research.” When I see some of 80K’s more recent advice, I think, “Dude, I already sank 4 years of college into studying CS and training to be a software engineer, and now you expect me to shift into research or public policy jobs?” Now, I know they don’t expect everyone to follow their priority paths, and I’m seriously considering shifting into AI safety or data science anyway. But I often feel discouraged because my skill set doesn’t match what the community thinks it needs most.
I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or Knightian uncertainty) used in policy design, military decision making, and climate science, but it is rarely discussed in EA.
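To make that concrete, here’s a minimal sketch (in Python, with made-up options and payoff numbers) of one standard tool from that literature, a minimax-regret rule: score each action under each plausible future without assigning probabilities, then pick the action whose worst-case regret is smallest.

```python
# Illustrative sketch only: a minimax-regret rule, one common tool from the
# decision-making-under-deep-uncertainty literature. Option names and payoff
# numbers are invented for the example.

# payoffs[action][scenario]: value of each action under each future scenario,
# with no probabilities attached (that's the "deep uncertainty" part).
payoffs = {
    "fund_research":  {"boom": 10, "bust": 2},
    "build_capacity": {"boom": 6,  "bust": 5},
    "do_nothing":     {"boom": 0,  "bust": 0},
}
scenarios = ["boom", "bust"]

# Regret of an action in a scenario: the gap to the best action in that scenario.
best_in_scenario = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
worst_regret = {
    a: max(best_in_scenario[s] - p[s] for s in scenarios)
    for a, p in payoffs.items()
}

# Minimax regret: choose the action whose worst-case regret is smallest.
choice = min(worst_regret, key=worst_regret.get)
print(worst_regret)  # {'fund_research': 3, 'build_capacity': 4, 'do_nothing': 10}
print(choice)        # fund_research
```

The appeal for prioritization work is that a rule like this lets you compare options across scenarios you can’t put credences on, which is exactly the Knightian case.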
I wouldn’t know how to assess this claim, but this is a very good point. I’m glad you’re writing a paper about this.
Finally, I love the style of humor you use in this post.
Hi evelynciara, thank you so much for your positivity and for complimenting my writing.
Also, please don’t feel discouraged. It is super unclear exactly what the community needs, and I think we should each do what we can with the skills we have and see what form that takes.