Tyler, thanks so much for writing this. I’ve been struggling to get more involved with EA precisely because I fear that accepting its premises would inexorably lead to the kind of dark night you’ve described. How could I possibly waste my time on leisure activities once I’ve seen the dark world? Even my work in the disability and autism communities would be profligate when measured against worthier causes.
This post goes some way to helping dispel these notions in me, but I’m still struggling with the implications. For example, I have a hard time seeing music, art, dancing, poetry and so forth as ends-in-themselves, rather than as a means to an end of generating utility in the form of quality of life. And it is hard for me to not see that quality of life as being fungible with that of other people; why should my own QALYs count for more than the drowning child’s, except on account of my own “beastly” self-interest?
Maybe valuing EA-ends is indeed just as beastly as valuing any other ends. Yet if this is so, one is not actually valuing the betterment of the world, but the neurological reward mechanism, the warm fuzzies, that it produces. Maybe these warm fuzzies perfectly align with doing the most good. But this proxy-alignment doesn’t seem like the basis of a robust system of ethics.
Where I had worried about falling down the slippery slope of EA-style utilitarianism, I now fear a slope in the other direction: if I accept that “morality is relevant only because we care deeply about it”, then does it not follow that those who care naught about it have no moral obligation at all? Is our moral duty, if it exists, only commensurate with the degree of pleasure we derive from its practice?
There’s also a more practical aspect, best encapsulated by ThomasW’s reply to Eric Neyman’s bargain with the EA machine:
It seems pretty intuitive that our impact would be power law distributed … If your other personal factors are your ability to have fun, have good friendships, etc., you now have to make the claim that those things are also power-law distributed, and that your best life with respect to those other values is hundreds of times better than your impact maximizing life. If you don’t make that claim, then either you have to give your other values an extremely high weight compared with impact, or you have to let impact guide every decision
I don’t want to let impact guide my every decision. I want to believe that no trade-offs are required. But in building our moral framework, shouldn’t we assume the least convenient possible world? I’m not suggesting that there are yet certain answers here, but in the meantime I personally find it difficult to make moral decisions under such philosophical uncertainty. For now, I suppose the best heuristic remains:
Hi Braxton – I feel moved by what you wrote here and want to respond later in full.
For now, I just want to thank you for your phrase here: “How could I possibly waste my time on leisure activities once I’ve seen the dark world?” I think this might deserve its own essay. I think it poses a challenge even for anti-realists who aren’t confused about their values, in the way Spencer Greenberg talks about here (worth a read).
Through the analogy I use in my essay, it might be nonsensical, in the conditions of a functional society, to say something like “It’s more important to eat well than to sleep well,” since you need both to be alive, and the society has minimized the tradeoffs between these two things. However, in situations of scarcity or emergency, that might change. If supply chains suddenly fail, I might find myself without food, and start sacrificing sleep to go forage.
A similar tradeoff might apply to value threatened by emergency or scarcity. For instance, let’s say that a much-loved yet little-seen friend messages me: “Tyler! I missed my layover in Berlin, so now I’m here for the night!” Under this condition of scarcity, I might put off the art project I’d planned to work on this night. This is a low-stakes example, but seems to generalize to high stakes ones.
One might say: The emergency/scarcity conditions of the world bring our EA values under continuous threat. Does the same logic not apply?
I need to think about this more. My preliminary answer is kind of simple. Eventually I do need to sleep well to continue to forage for food, otherwise I’ll undermine my foraging. Likewise, I can’t raincheck my artmaking every night if my art project is a key source of fulfillment – doing so will undermine my quality of engagement when I spend time with friends. And similarly, allowing EA pursuits to crowd out the rest of the good life seems to undermine those EA pursuits long term (for roughly everyone that I know at least – my opinion). This seems to be true not only of EA pursuits, but any value that I allow to crowd out my other values. So the rational strategy seems to be to find ways for values to reduce conflict or even become symbiotic. An internal geopolitics of co-prosperity.
Just want to say that even though this is my endorsed position, it often goes out the window when I encounter tangible cases of extreme suffering. Here in Berlin, the other day I saw a woman on the subway who walked around dazedly with an open wound, barefoot, seemingly out of touch with her surroundings, wearing an expression of utter hopelessness. I don’t speak German, so I wasn’t able to interact with her well.
When I run into a case like this, the “preliminary answer” I wrote above is hard to keep in mind, especially when I think of the millions who might be suffering in similar ways, yet invisibly.