Does “aiming for a nearby peak you can see the torch on” risk giving moral license to proximal bad actions? For example, consider (I endorse none of these):
I want to live in a world without factory farms, thus I’m justified in proximally reducing factory farming by committing arson at any factory farm I like.
I want to live in the communist utopia where nobody has more private wealth than anybody else, so stealing from sufficiently rich people is good.
I want to live in a world with ubiquitous cheap clean energy and think any agenda other than energy abundance is a distraction, so I spend all my altruistic effort lobbying against any kind of demand responsiveness or energy efficiency standard.
Intuitively, these feel quite different to me from your example about eating tuna:
“In a world I’d like to live in, cheap alternatives exist to replace the need for animal farming, and people broadly care a notch more about animals suffering. That world involves people not eating tuna. So I should head in that direction.”
Maybe it’s the “we can agree on features of a world we all want” condition? But the world pointed at isn’t so different from the world in my first example. (Also, are there worlds we all agree we all want?) The core difference seems, to me, to be the nature of the proximal action. Not eating tuna is at least plausibly good; arson is almost surely bad. As you conclude,
But if the intuitively good thing is a constructive part of a future everyone wants, that’s a reason to do the project, and a way to cut through the paralysis.
Intuitive to whom? In EA spaces, I find that most people share moral intuitions that are at least familiar and grokkable to me. But in the wider world (even fairly demographically similar worlds, like “people on English-speaking Reddit”), I regularly find that others’ moral intuitions are just wild to me. Presumably mine are equally bizarre to them.
So...yeah. In a world of disagreement about what’s good and what’s intuitive, how do you use “head generally in the direction we all want to go” as a decision criterion?
Thanks for sharing!