“And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people.” Can you think about editing this, please? It’s a view I’m worried doesn’t deserve a platform. It doesn’t seem to be the result of consequentialist thinking, just vanity.
Explanation:
If “important” were defined more precisely around specific questions, such as instrumental value to other people’s welfare, it might be a way of thinking about how useful it is to spend time supporting current EAs compared to supporting others (but even then, that’s a dumb calculation, because you want to be comparing a specific EA, and a specific way of supporting them, against the best available alternative). But as it stands I can’t see how it’s a useful thought. Enlighten me if I’m wrong.
I agree that the life of an EA isn’t going to be more important, even if saving that EA has greater value than saving someone who isn’t an EA.
And if we’re giving animals any moral weight at all (as we obviously should), the same can be said about people who are vegan.
Edited (after Tom A’s comment): Maybe part of the problem is we’re not clear here about what we mean by “a life”. In my mind, a life is more or less important depending on whether it contains more or less intrinsic goods. The fact that an EA might do more good than a non-EA doesn’t make their life more valuable—it doesn’t obviously add any intrinsic goods to it—it just makes saving them more valuable. On the other hand, if we mean to include all of someone’s actions and the effects of these actions in someone’s “life”, then the way it’s worded is unproblematic.
This is nit-picky, but I think it’s right. Is this what you’re getting at, Tom S?
1). The way it reads, it sounds like you’re talking about intrinsic value to someone who isn’t used to these discussions.
I’m not endorsing the view, just giving it as an example of one some people actually hold! At least in the cases I’ve had some exposure to, they’re thinking of instrumental value, and of the worth of lives all things considered, not just who you should spend time supporting.
2). Doing a calculation of the instrumental value of saving an individual from a group isn’t actually morally useful: if you’re thinking about instrumental value, you’d want to do the calculation for a specific individual, when it’s actually relevant. (Instrumental value can totally be a vanity thing anyway: when would you have to save an EA’s life?)
I think that how we represent these arguments in writing matters for our brand as a movement; that’s the thrust of my comment. It’s obvious that you don’t endorse it, but you are giving it a platform, and you’re also saying it’s held by n > 3 people. I think there’s a cost to this and I can’t see the benefit. Put me right and I’ll delete this thread. :)