Loved reading about your journey...
I think a lot of EAs, including myself, struggle with how to integrate the psychological features that propel, enrich, stifle, or endanger our capacities as moral agents. Although, upon reflection, we are firmly committed to privileging the net well-being of conscious entities across space-time, we are keenly aware that this commitment is imperiled by the beasts within us hungering for other ends.
Like many of us, you determined that acknowledging and feeding those beasts was correct and appropriate… One’s health is indispensable to being able to do good. Yet what the beast most craved was to be regarded with dignity and respect. The delights and adventures were bereft of their magic for you when you considered them merely extrinsically valuable.
The proper antidote seems to be self-deception… Allow yourself to be lost in the greatness of life as an intrinsic value in the moment, while charting your broader path in moments of reflection...
I agree with you that the question of how to integrate our broader humanity into our personal moral calculi is most critical. It is also immensely difficult: distinguishing between a rational sacrifice to our beasts in the name of psychological health (and, consequently, the greater good) and an excuse to indulge ourselves and behave badly when such choices are not warranted is very hard. I personally participate in the torture of animals by eating meat without regard for its provenance. Given my current circumstances, I think this abomination is a rational sacrifice to my beasts, but I sincerely doubt many vegans would agree. (I think that, with the work I’m currently doing, taxing myself psychologically with a lifestyle change to veganism would be net negative EV.)
The broader question of how to ethically and responsibly integrate our own flaws into ethical decision-making is fascinating, necessary, and fraught with peril.
Hi Brad, I appreciate this reply. I wonder if we might have a fundamental disagreement!
I personally don’t regard my non-EA ends as “beastly” – or, if I do, my valuing of EA ends is just as beastly as my valuing of other ends. I can adopt a moral or cultural framework that disagrees with my pre-existing “value function,” and what it deems valuable. But something about this is a bit awkward: Wasn’t it my pre-existing value function that deemed EA ends to be valuable?
Moreover:
It’s not obvious to me that severe sacrifice and tradeoffs are necessary. I think their seeming necessary might be a byproduct of our lack of cultural infrastructure for minimizing tradeoffs. That’s why I wrote this analogy:

To say that [my other ends] were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits.
I believe it’s possible to find and build synergies that reduce tradeoffs. For instance, if you were a lone ancient human in the wilderness, time spent building shelter might jeopardize daylight that could have been spent foraging for food. However, if you joined a well-functioning tribe, you’d no longer be forced to choose between [shelter-building] and [foraging]. If you forage, the food you find will power the muscles of your tribemate to build shelter. Similarly, your tribemate’s shelter will give you the good night’s rest you need to go out and forage. Unless there’s a pressing emergency, it would be a mistake for the tribe to allocate everyone only to foraging or only to shelter-building.
I think we’re in a similar place with our EA ends. They seem like they demand the sacrifice of our other ends. But I think that’s just because we haven’t set up the right cultural infrastructure to create synergies and minimize tradeoffs. In the essay, I suggest one example piece of infrastructure that might help with this: a fractal altruist community. But I’m excited to see what other people come up with. Maybe you’ll be one of them.
Yeah, we probably do have a fundamental disagreement.
I think you were essentially correct when you were in the dark night. The weight you put on your own conscious experiences should not exceed the weight you put on those of other beings throughout space and time. Thus, the wonders and joys of your own conscious experience have intrinsic value, but it is not clear that satisfying those joys is the most effective use of the resources you have as an agent, at least in any obvious sense (i.e., it seems you can enable greater net experiences by privileging other entities).
I think there is a nonobvious reason to (seemingly) privilege yourself sometimes as an Effective Altruist: concessions to your own psychological desires can facilitate your most effective operation and minimize the likelihood that you will abandon or weaken your commitment to maximizing well-being. This is what I mean by feeding the beast.
Your seeming reconciliation is value pluralism, which appears, in this case, to simply mean placing some of your own conscious experiences in a superpriority above the conscious experiences of others. I think your framing – an elevation of your own conscious experience – makes less sense than mine. Other beings’ conscious experiences are no less important than my own. I would make concessions which seemingly prioritize me, but ultimately, if I am acting morally, this preference is only illusory.
Do you subscribe to moral realism? If not, I’m curious what you think of Spencer’s post: https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/
I am a moral realist: I believe agents should act to create the greatest net well-being (utility).
Not all conscious experiences are created equal.
Pursuing the ends Tyler talks about helps cultivate higher-quality conscious experiences.
If you believe, as I do, that there aren’t sturdy philosophical grounds for devaluing my other ends, then life becomes a puzzle of how to fulfill all of your ends – including, for me, both my EA ends and making art actually for its own sake (i.e., not primarily for the sake of instrumentally useful psychological health so that I can do the greater good).