They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome.
It seems to me that part of effective altruism has been not just increasing the effectiveness of altruism by recommending people change their actions, or where their philanthropic dollars go, to interventions with higher leverage, but also pointing out that people would be more effective if they changed their values. For example, Peter Singer’s ‘expanding circle’, meat-free diet advocacy, etc.
People don’t like to be told they need to change their values, or that they should change their values, or that the world would be a better place if they had some values that they didn’t have already. One’s values tend to be near the core of one’s social identity, so an attack on values can be perceived as an attack on the self. The obvious example is the friend who resents vegetarians for pointing out how bad eating meat is: he raises no particular philosophical objection, but simply doesn’t like being called out for doing something he was raised to think of as normal.
Changing one’s values does not more effectively promote the values one started with, so it seems one should be averse to it. I think the expanding-circle case is more complicated: the advocates of a wider circle are trying to convince others that they are mistaken about their own existing values, and that consistency requires them to care about some entities they think they don’t care about. This is why the phenomenon looks like an expanding circle. Points just outside a circle look a lot like points just inside it, so consistency pushes the circle outwards (though this doesn’t explain why the circle expands rather than contracts).
That makes more sense. I haven’t read much philosophy, or engaged with that sort of thinking very deeply, so I often get confused about what I or others (are supposed to) mean by the word ‘value’. I meant that people would be more effective if they altered their actions to be more in line with their values after those values were updated for consistency. If someone says “I don’t value X” one day, and “I now value X” the next day, I think of that as a ‘change of values’ rather than ‘an update of values toward greater behavioral consistency’. The latter definition seems to be the more common one around these parts, and also more precise, so I’ll just go with it from now on.
people would be more effective if they changed their values.
If you changed your value to “Evan Gaensbauer’s house being painted blue” you could probably promote that very efficiently. It would also be worthless: the point is to promote the values we already have, and avoid value deathism.
Unless you’re a moral realist, and want to have the correct values.