I rather liked this comment, and think it really hits the nail on the head. As someone who has only recently come into contact with the movement and developed an interest in it, and who therefore has mostly an ‘outsider’ perspective, I would add that there’s a big difference between the perception of ‘effective altruism’, which almost anybody would find reasonable and morally unobjectionable, and ‘Effective Altruism’ / Rationalism as a movement with beliefs and practices that many people will find weird and off-putting (basically, all those mentioned by S.E. Montgomery: elitism, longtermism, utilitarianism, a generally hubristic and nerdy belief that complex issues and affairs are reducible to numbers and optimization models, and so on).
Controversial take: while I agree that EA has big problems, I actually think the elitism was correct, for one reason.
Outcomes in most domains are heavy-tailed, following something like a power law; indeed, heavy-tailed distributions may be the most common kind. If most of the total impact comes from a small fraction of people or projects, then concentrating effort on that top fraction, i.e. elitism, is the right call.
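Since the heavy-tail claim is quantitative, here is a minimal sketch of the intuition, assuming a Pareto distribution with an illustrative shape parameter (nothing here is fit to real impact data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: how concentrated is "impact" under a power law?
# alpha = 1.2 is an assumed shape parameter chosen for illustration,
# not a claim about any real-world dataset; smaller alpha = heavier tail.
alpha = 1.2
samples = rng.pareto(alpha, size=100_000) + 1  # classical Pareto, x_min = 1

samples.sort()
top_1_percent = samples[-len(samples) // 100:]  # the largest 1% of draws

share = top_1_percent.sum() / samples.sum()
print(f"Top 1% of draws account for {share:.0%} of the total")
# Typically prints roughly 40-50% for alpha = 1.2, versus ~1% if outcomes
# were uniform -- the heavy-tail intuition behind the elitism claim.
```

Under a distribution this skewed, the top 1% of draws carry a large share of the total, which is the whole case for focusing on the tail.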
One of my largest criticisms of EA is that it doesn’t recognize there may be a crucial consideration around moral realism. This isn’t a criticism unique to EA, but moral realism is, roughly, the view that moral facts are real and mind-independent.
Yet there is probably overconfidence on this front, and it matters, because if moral realism isn’t true, then EA will have to change drastically. In general, this assumption goes far too unquestioned.
It’s possible I’ve flipped the sign on what you’re saying, but if I haven’t, I’m pretty sure most EAs are not moral realists, so I don’t know where you got the impression that it’s an underlying assumption of any serious EA efforts.
If I did flip the sign, then I don’t think it’s true that moral realism is “too unquestioned”. At this point it might be more fair to say that too much time & ink has been spilled on what’s frankly a pretty trivial question that only sees as much engagement as it does because people get caught up in arguing about definitions of words (and, of course, because some other people are deeply confused).
I think this might be the crux: I do think this question matters a lot, and I’ll point to some implications if moral realism is false. (Thankfully, more EAs are moral anti-realists than I thought.)
- EA would need to recognize that its values are neither superior nor inferior to other people’s values, just different. In other words, it would have to stop believing it is objectively right to hold any particular set of values.
- The moral progress thesis would not hold: the changes of the 19th and 20th centuries were mostly not driven by objective moral progress. At the very least, ‘moral progress’ would be subjective.
- Values would be points of view, not fundamental truths that humans have to abide by.
Now this isn’t the most important question, but it is one that I think matters, especially for moral movements like EA.