This problem of consequentialism applied to real human problems will always be there as long as what is “good” for others is not defined by those others, for themselves. It’s impossible to determine what is or isn’t good for another person without abstracting their agency away, which means whatever conclusion you come to about what is “good” for them will always be flawed.
There are a lot of things we can say with certainty are nearly universally understood as “good”, like being able to live, for instance. That means EA isn’t in conflict with the right direction, for most people, at this moment in time, because its giving and “good” work is largely focused on these sorts of lower-level, fundamental human and animal welfare problems. As you progress, however (which is all basically theoretical at this point), what is “good” becomes more and more grey, and you run into the problems created by having ignored the agency of those others.
As you get closer to a potential maximum benefit, I’d suggest you’d realize that the only real “good” or benefit an altruist can pursue is the maximization of agency for others. People with agency choose life, they choose better living, they choose personal progress, they can better see suffering, and they choose to acknowledge the agency of others. If this sort of agency-as-good is the motivation of EAs, then you are far less likely to have to deal with fanaticism issues. When was there ever a harmful fanatic whose MO was “I want to give everyone else all the power”?
The larger problem is that moralities diverge at the tails rather than converging. This is why moderation doesn’t win out in the long term, and why different moral systems will view a high-tech future with very different reactions.
(BTW, this is why I suspect moral realism is not true, that is, that there are no facts of the matter about what’s good or bad.)
Yep, that’s what I said. The further you get from addressing basic human needs, the more grey “good” becomes. But it’s always grey; it just gets more and more grey towards ‘the tails’, as you say. I’m not really a moral realist either.
Another way to state my overall argument is that really the only altruistic thing to do is to make sure everyone has the power to make effectual moral decisions for themselves—the same power you have. This doesn’t exclude addressing basic human needs, btw. It would likely necessitate it.
If you believe in people, which I imagine effective altruists do (because what’s the point otherwise?), then people whose agency ends up matching your level of agency will also end up updating to your moral position anyway, if you’re right.
Not if they inherently care about different things, e.g. psychopaths who enjoy taking away others’ agency.
I should have been clearer, I guess: I was talking about the moral imperative of altruists, not a socially regulating or political philosophy. But, from the perspective I am arguing here, imbuing that psychopath with agency would negate the work of imbuing everyone else with agency. I am actually not arguing against utility here, just against determining what is good for others, which only makes sense if you believe that people, as a whole or in the majority, are good or will choose good, which is sort of a required belief to be an altruist, isn’t it?
Isn’t that just hardcore libertarianism, which some consider to be harmful?
I don’t know of any libertarian philosophy that really considers the importance or moral value of other people’s agency, let alone one that actively seeks to enable agency in others in order to do or maximize good. As far as I understand libertarianism, it’s pretty much concerned only with the “self”, and the only other-regarding aspect of it is ensuring that others don’t interfere with oneself, in order to preserve the “self”, which makes it not really other-regarding at all. There’s certainly little if any altruism involved. I mean, an individual libertarian could pursue altruism, I suppose, but it’s not part of the underlying philosophy. I’d actually suggest that altruism, which is a reciprocal behavior, is pretty much opposed to libertarian behavior.