[status: mostly sharing long-held feelings & intuitions, but I have not exposed them to scrutiny before]
I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.
The way I see the potential of the EA community is in helping people understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.
If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the ‘Effectiveness’ idea in EA.
This has some limits: there are some views I consider morally atrocious, and I would prefer not to give those people the tools to pursue their goals more effectively.
But overall, I would much prefer that more people had access to cause prioritisation tools, not just people who find longtermism appealing.
What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).
I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don’t even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.
I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist, it is plausible that any non-longtermist view is atrocious!
Yes, completely agree; I was also thinking of non-utilitarian views when I said non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect that someone who wants to be the best virtue ethicist they can be would find the EA community less valuable for that path than people who want to optimize for specific consequences (i.e. the most good). I would be very curious, though, what a good community for that kind of person would be, and what good tools for that path would look like.
I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner, but even within longtermism there are disagreements over whether it should be suffering-focused or not, so there is already no one simple truth.
I'd be really curious what others think about whether humanity collectively would be better off, by most people's lights, if we all worked effectively towards our desired worlds, since this feels like an important crux to me.
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander has suggested that terrorists might just be people taking beliefs seriously, where those beliefs are only safe to hold in an environment of epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is to check that they are stable under some amount of reflection; another is to check that they are consistent with the facts, e.g. the lack of evidence for supernatural entities, or our best knowledge of the conscious experience of animals.
I think that thinking about longtermism lets people feel empowered to solve problems somewhat removed from reality, and to enjoy the prestige/privilege/knowing-better of 'doing the most good'. This may be a viewpoint mostly available to those who do not have to worry about finances, though even that is relative. This links to my second point: some affluent people enjoy speaking about innovative solutions that reflect current power structures defined by high technology, among other things. It would otherwise be hard to build a community around the prestige of being paid a little to do good, or of donating to marginally improve the current global institutions that cause the present problems. Or would it?