I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I’m not sure how one would cash out the limits on ‘atrocious’ views in a principled manner. To a truly committed longtermist, it is plausible that any non-longtermist view is atrocious!
Yes, completely agree — I was also thinking of non-utilitarian views when I said non-longtermist views. Although ‘doing the most good’ is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be would find the EA community less valuable for helping them on that path than people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious, however, what a good community for that kind of person would be, and what good tools for that path are.
I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner. But even within longtermism we have disagreements about whether it should be suffering-focussed or not, so there already is no one simple truth.
I’d be really curious what others think about whether humanity collectively would be better off, by most people’s lights, if we all worked effectively towards our respective desired worlds — this feels like an important crux to me.