It's amusing how you argue against hardcore utilitarianism by indicating that factoring in an agent's human needs is indispensable for maximizing impact. To the extent that being good to yourself is necessary for maximizing impact, a hardcore utilitarian would do so.
Utilitarianism is optimizing for whatever agent is operative… Humans or robots. It's just realizing that the experiences of other beings throughout space and time matter just as much as your own. There is nothing wrong with being extreme and impartial in your compassion for others, which is the essence of utilitarianism. To the extent you are lobbing criticisms of people not being effective because they're not taking care of themselves, it isn't a criticism of "hardcore" utilitarianism. It's a criticism of them failing to integrate the productivity benefits from taking care of themselves into the analysis.
Well, yes, your logic is perfect, but it's a lot like the logic of communism...if humans did communism perfectly it would usher in world peace and utopia...the problem is not ideal communism, it's that somehow it just doesn't fit humanity well. Yours is the exact same argument you would hear over and over when people still argued passionately for communism..."They're just not doing it right!!"...after a while you realize, it just isn't the right thing no matter how lovely on paper. Eventually almost all of them let go of that idealism, but it doggedly held on for a long time, and I'm sure that will be the case for many EAs holding on way too long to utilitarianism.
Hardly anything really does fit us...the best path is to keep iterating reachable modifications wherever you are...I can see the benefits of ideal utilitarianism and I appreciate early EA embracing it with gusto...it got fantastic results, no way to argue with that. To me EA is one of the brightest lights in the world. But I've been steering movements and observing them for many decades and it's clear to me that, as in the OP, EA is transitioning into a new phase or wave, and the point of resistance I come up against when I discuss there being more art in EA is the utilitarian response of "why waste money on aesthetics", or I hear about stressed, anxious EAs and significant mental health needs...the only clear answer I see to these two problems is to reform the utilitarian part of EA; that's what's blocking it from moving into the next era. You can run at a fast pace for a long time when you're young...but eventually it just isn't sustainable. That's my thesis...early EA was utilitarian awesome, but time moved on and now it's not sustainable anymore.
Changing yourself is hard, I've done it a few times, usually it was forced on me. And I totally get this is not obvious to most in EA...it's not popular to tell utilitarians, "don't be utilitarian"...but it's true, you should not be so utilitarian...because that's for robots, but you're human. It's time to move on to a more mature and sustainable path.
Well… Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed; the people who lead a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. Thanks to this it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggeration to say it's based on South Park reasoning: 1. overthrow the bourgeoisie and the government so communists can be in charge, 2. ???, 3. utopia! But I don't think this is a big exaggeration.
Individuals, on the other hand, can believe in whatever moral system they feel like and follow its logic wherever it leads. Taking care of yourself (and even your friends/family) not only perfectly fits within the logic of (consequentialist) utilitarianism, it is practical because its logic is consequentialist (which is always practical if done correctly). Unlike communism, we can simply do it (and in fact it's kind of hard not to; it's the natural human thing to do).
What's weird about your argument is that you made no argument beyond "it's like the logic of communism". No, different things are different; you can't just make an analogy and stop there (especially when criticizing logic that you yourself described as "perfect" - well gee, what hope does an analogy have against perfect logic?).
when I discuss there being more art in EA is the utilitarian response of "why waste money on aesthetics", or I hear about stressed, anxious EAs and significant mental health needs...the only clear answer I see to these two problems is to reform the utilitarian part of EA
I think what's going on here is that you're not used to consequentialist reasoning, and since the founders of EA were consequentialists, and EA attracts, creates and retains consequentialists, you need to learn how consequentialists think if you want to be persuasive with them. I don't see aesthetics as wasteful; I routinely think about the aesthetics of everything I build as an engineer. But the reason is not something like "beauty is good"; it's a consequentialist reason (utilitarian or not) like "if this looks better, I'm happier" (my own happiness is one of my terminal goals), or "people are more likely to buy this product if it looks good" (fulfilling an instrumental goal), or "my boss will be pleased with me if he thinks customers will like how it looks" (instrumental goal). Also, for a consequentialist, aesthetics must be balanced against other things: we spend much more time on the aesthetics of some things than others because the cost-benefit analysis discounts aesthetics for lesser-used parts of the system.
You want to reform the utilitarian part, but it's like telling Protestants to convert to Catholicism. Not only is it an extremely hard goal, but you won't be successful unless you "get inside the mind" of the people whose beliefs you want to change. Like, if you just explain to Protestants (who believe X) why Catholics believe the opposite of X, you won't convince most of them that X is wrong. And the thing is, I think when you learn to think like a consequentialist (not a naive consequentialist,* but a mature consequentialist who values deontological rules and virtues for consequentialist reasons), at that point you realize that this is the best way of thinking, whether one is EA or not.
(* We all still remember SBF around here, of course. He might've been a conman, but the scary part is that he may have thought of himself as a consequentialist utilitarian EA, in which case he was a naive consequentialist. For you, that might say something against utilitarianism, but for me it illustrates that nuance, care and maturity are required to do utilitarianism well.)
Yes, I appreciate very much what you're saying; I'm learning much from this dialogue. I think the point that didn't communicate well to you and Brad West isn't some kind of comparison of utilitarianism and communist thought...but rather how people defend their ideal when it's failing, whatever it is...religion, etc.: "They're not doing it right"..."If you did it right (as I see it) then it would produce much better stuff".
EA is uniquely bereft of art in comparison to all other categories of human endeavor: education, business, big tech, military, healthcare, civil society, etc. So for EA there's been ten years of incredible activity and massive funding, but no art in sight...so whatever is causing that is a bug and not a feature. Maybe my thesis that utilitarianism is the culprit is wrong. I'd be happy to abandon that thesis if I could find a better one.
But given that EA "attracts, creates and retains consequentialists" as you say, and that they are hopefully not the bad kind that doesn't work (naive) but the good kind that works (mature), then why the gaping hole in the center where the art should be? I think it's not naive versus mature utilitarianism; it's that utilitarianism is a mathematical algorithm and simply doesn't work for optimizing human living...it's great for robots. And great for the first pioneering wave of EA blazing a new path...but ultimately unsustainable for the future.
Eric Hoel does a far better job outlining the poison in utilitarianism that remains no matter how you dilute it or claim it to be naive or mature (but unlike him I am an Effective Altruist).
And of course I agree with you on the "it's hard to tell one religion to be another religion" point, which I myself made in my reply post. In fact, I have a college degree in exactly that: Christian Ministry with an emphasis in "missions", where you go tell people in foreign countries to abandon their culture and religion and adopt yours...and amazingly, you'd be surprised at how well it works. Any religious group that does proselytizing usually gets decent results. I don't agree with doing that anymore with religion, but it is surprisingly effective...and so I don't mind telling a bunch of utilitarians to stop being utilitarians...on the other hand, if I can figure out a different reason for the debilitating lack of art in EA and the anxious mental health issues connected to guilt over not saving enough lives, I'll gladly change tactics.
If you compare EA to all those other human endeavors I listed above, what's the point of differentiation? Why do even military organizations have tons of art compared to EA?
You seem to think if art was good for human optimization then consequentialists should have plenty, so why don't they around here?
Thanks for helping me think these things through.
Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.
Speaking personally, I'm an engineer and an (unpaid) writer. As such I want to play to my strengths, and any time I spend on making art is time not spent using my valuable specialized skills… at least I started using AI art in my latest article about AI (well, duh). I write almost exclusively about things I think are very important, because that feeling of importance is usually what drives me to write. But the result has been that my audience has normally been very close to zero (even when writing on the EA Forum), which caused me to write much less and, when I do write, to write on Twitter instead or in the comment areas of ACX. Okay, I guess I'm not really going anywhere with this line of thought, but it's a painful fact that I sometimes feel like ranting about. But here are a couple of vaguely related hypotheses: (i) maybe there is some EA art but it's not promoted well, so we don't see it; (ii) EAs can imagine art being potentially valuable, but are extremely uncertain about how and when it should be used, and so don't fund it or put the time into it. EAs want to do "the most impactful thing they can" and it's hard to believe art is it. However, you can argue that EA art is neglected (even though art is commonplace) and that certain ways of using art would be impactful, much as I argued that some of the most important climate change interventions are neglected (even though climate change interventions are commonplace). I would further argue that artists are famously inexpensive to hire, which can boost the benefit/cost ratio. (Related: the most perplexing thing to me about EA is having hubs in places that are so expensive it would pain me to live there; I suggested Toledo, which is inexpensive and near two major cities, earning no votes or comments. Story of my life, I swear, and I've been thinking of starting a blog called "No one listens to me".)
Any religious group that does proselytizing usually gets decent results.
I noticed that too, but I assumed that (for unknown reasons) it worked better for big shifts (pagan to Christian) than more modest ones. But I mentioned "Protestant to Catholic" specifically because the former group was formed in opposition to the latter. I used to be Mormon; we had a whole doctrine about why our religion made more sense and was the True One, and it's hard to imagine any other sect could've come along and changed my mind unless they could counter the exact rationales I had learned from my church. As I see it, mature consequentialist utilitarianism is a lot like this. Unless you seem to understand it very well, I will perceive your pushback against it as being the result of misunderstanding it.
So, if you say utilitarianism is only fit for robots, I just say: nope. You say: utilitarianism is a mathematical algorithm. I say: although it can be put into mathematical models, it can also be imprinted deeply in your mind, and (if you're highly intelligent and rational) it may work better there than in a traditional computer program. This is because humans can more easily take many nuances into account in their minds than type those nuances into a program. Thus, while mental calculations are imprecise, they are richer in detail, which can (with practice) lead to relatively good decisions (both relative to decisions suggested by a computer program that lacks important nuances, and relative to human decisions that are rooted in deontology, virtue ethics, conventional wisdom, popular ideology, or legal precedent).
I did add a caveat there about intelligence and rationality, because the strongest argument against utilitarianism that comes to mind is that it requires a lot of mental horsepower and discipline to be used well as a decision procedure. This is also why I value rules and virtues: a mathematically ideal consequentialist would have no need of them per se, but such a being cannot exist because it would require too much computational power. I think of rules and virtues as a way of computationally bounding otherwise intractable mental calculations, though they are also very useful for predicting public perception of one's actions (as most of the public primarily views morality through the lenses of rules and virtues). Related: superforecasters are human, and I don't think it's a coincidence that lots of EAs like forecasting as a test of intelligence and rationality.
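(To make "computationally bounding" concrete, here's a toy sketch of the kind of thing I mean. Everything in it - the rule checks, the numbers, the Monte Carlo stand-in for real modeling - is invented purely for illustration, not a real decision system.)

```python
# Toy sketch: rules as a computational bound on consequentialist choice.
# Hypothetical example only; names and numbers are made up.
import random

# Heuristic rules, each assumed to be pre-justified on consequentialist grounds.
RULES = [
    lambda action: not action.get("involves_lying", False),
    lambda action: not action.get("involves_stealing", False),
]

def estimated_value(action, num_samples):
    """Crude Monte Carlo stand-in for a full expected-value calculation."""
    outcomes = action.get("possible_outcomes", [0.0])
    return sum(random.choice(outcomes) for _ in range(num_samples)) / num_samples

def choose(actions, compute_budget):
    # Step 1: cheap rule-based filter, so rule-breaking options never even get evaluated.
    permitted = [a for a in actions if all(rule(a) for rule in RULES)]
    if not permitted:
        permitted = actions  # rules are heuristics, not absolutes
    # Step 2: spend the limited compute budget estimating the value of what remains.
    samples = max(1, compute_budget // max(1, len(permitted)))
    return max(permitted, key=lambda a: estimated_value(a, samples))
```

The point of the sketch is just that the rules do the cheap filtering up front, so the expensive estimation only runs on the options that survive.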
However, I think that consequentialist utilitarianism (CU) has value for people of all intelligence levels for judging which rules and virtues are good and which are not. For example, we can explain in CU terms why common rules such as "don't steal" and "don't lie" are usually justified, and by the same means it is hard to justify rules like "don't masturbate" or the Third Reich's rule that only non-Jewish people of "German or kindred blood" could be citizens (except via strange axioms).
This makes it very valuable from a secular perspective: without CU, what other foundation is there to judge proposed rules or virtues? Most people, it seems to me, just go with the flow: whatever rules/virtues are promoted by trusted people are assumed to be good. This leads to people acting like lemmings, sometimes believing good things and other times bad things according to whatever is popular in their tribe/group, since they have no foundational principle on which to judge (they do have principles promoted by other people, which, again, could be good or bad). While Christians say "God is my rock", I say "these two axioms are my bedrock, which led me to a mountain I call mature consequentialist utilitarianism". I could say much more on this but alas, this is a mere comment in a thread and writing takes too much time. But here's a story I love about Heartstone, the magic gemstone of morality.
For predictive decision-making, choosing actions via CU works better the more processing power you use (whether mental or silicon). Nevertheless, after arriving at a decision, it should always be possible to explain the decision to people without access to the same horsepower. We shouldn't say "My giant brain determined this to be the right decision, via reasoning so advanced that your puny mind cannot comprehend it. Trust me." It seems to me that anyone using CU should be able to explain (and defend) their decision in CU terms that don't require high intelligence to understand. However, (i) the audience cannot verify that the decision is correct without using at least as much computing power; they can only verify that the decision sounds reasonable; (ii) different people have different values, which can correctly lead to disagreement about the right course of action; and (iii) there are always numerous ways that an audience can misunderstand what was said, even if it was said in plain and unambiguous language (I suspect this is because many people prefer other modes of thought, not because they can't think in a consequentialist manner).
Now, just in case I sound a bit "robotic" here, note that I like the way I am. Not because I like sounding like Spock or Data, but because there is a whole life journey spanning decades that led to where I am now, a journey where I compared different ways of being and found what seem to be the best, most useful and truth-centered principles from which to derive my beliefs and goals. (Plus I've always loved computers, so a computational framing comes naturally.)
a different reason for [...] the anxious mental health issues connected to guilt over not saving enough lives[?]
I think a lot of EAs have an above-average level of empathy and sense of responsibility. My poth (hypothesis) is that these things are what caused them to join EA in the first place, and also what caused them to have this anxiety about lives not saved and good not done. This poth leads me to predict that such a person will have had some anxiety from the first day they found out about the disease and starvation in Africa, even if joining EA managed to increase that anxiety further. For me personally: global poverty has bothered me since I first learned about it; I have a deep yearning to improve the world that appeared 15+ years before I learned about EA; I don't feel like my anxiety increased after joining EA; and the analysis we're talking about (in which there is a utilitarian justification not to feel bad about only giving 10% of our income) helps me not to feel too bad about the limits of my altruism, although I still want to give much more to fund direct work, mainly because I have little confidence in my ability to persuade other EAs about what I think needs to be done (only 31 karma including my own strong upvote? Yikes!)
Why do even military organizations have tons of art compared to EA?
Is that true? I'm not surprised if military personnel make a lot of art, but I don't expect it from the formal structures or leadership. But if a military does spend money on art, I expect it's a result of some people advocating for art to sympathetic ears that controlled the purse strings, and that this worked either because they were persuasive or because people liked art. The same should work in EA if you find a framing that appeals to EAs. (Which reminds me of the odd fact that although I identify strongly with common EA beliefs and principles, I have little confidence in my ability to persuade other EAs, as I am often downvoted or not upvoted. I cannot explain this.)
You seem to think if art was good for human optimization then consequentialists should have plenty, so why don't they around here?
My guess is that it's a combination of
the difficulty EAs have had seeing art as an impactful intervention (although I feel like it could be, e.g. as a way of attracting new EAs and improving EA mental health). Note: although EAs like theoretical models and RCTs demonstrating good cost/benefit, my sense is that EA leaders also understand (in a CU manner) that some interventions are valuable enough to support even when there's no solid theoretical/scientific basis for them.
artists rarely becoming EAs (why? maybe selection bias in membership drives… maybe artists being turned off by EA vibes for some reason...)
EA being a young movement, so (i) lots of things still haven't been worked out and (ii) the smaller the movement is, the less likely that art is worthy of funding (the explanation for this assertion feels too complicated to briefly explain)
something else I didn't think of (???)
Wow, thanks for your long and thoughtful reply. I really do appreciate your thinking, and I'm glad CU is working for you and you're happy with it...that is a good thing.
Unfortunately, though, I do think you've given me a little boost in my argument against CU, in the idea that our brain just doesn't have enough compute. There was a post a while back from a well-known EA about their long experience starting orgs and "doing EA stuff", and how the lesson they'd taken from it all is that there are just too many unknown variables in life for anything we try to build and plan outcomes for to really work out how we hoped...it's a lot of shots in the dark and sometimes you hit. That is similar to my experience as well...and the reason is we just don't have enough data, nor enough compute to process it all, nor adequate points or spectrums of input. The thing that better fits that kind of category is a robot with an AI mind that can do far more compute...but even they are challenged. So for me that's another good reason against CU optimizing well for humans.
And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic, and you could almost say quantum, in how we are connected to other people...living with it is vital for happiness...I think the attraction of CU is that it adds to us the logic side that our inner life doesn't always have...and so the answer is to live with both together...to use CU thinking for the effective things it does, but also to realize where it is very ineffective toward human thriving...and so that may be similar to the differences you see between naive and mature CU. Maybe that's how we synthesize our two views.
How I would apply this to the original post here is that we should see "the gaping hole where the art should be" in EA as a form of evidence of a bug in EA that we should seek to fix. I personally hope that as we turn this corner toward a third wave, we will include that on the list of priorities.
Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock", so you feel no need for another.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc.
You could do this, but you'd be arguing axiomatically. A claim like "my axioms are above those of utilitarians!" would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.
You could say something like: the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).
The most important thing to realize is that "things with intrinsic value" is a choice that lies outside consequentialism. A consequentialist could indeed choose an axiom that "art is intrinsically valuable". Calling it "utilitarian" feels like nonstandard terminology, but such a value assignment seems utilitarian-adjacent unless you treat it merely as a virtue or rule rather than as a goal you seek after.
Note, however, that beauty doesn't exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universe: no people, no life, no souls/God/heaven/hell, and no chance life will ever arise. Just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, in the endless void, a billion light years beyond the light cone of this art that can never be seen, there should be one solitary civilization left alive, I argue that this civilization's art is infinitely more valuable. More pointedly, I would say that it is the experience of art that is valuable and not the art itself, or that art is instrumentally valuable, not intrinsically. Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art seen by only 10,000 people, though one should take into account countervailing factors such as the fact that the first 10,000 who see it are more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes away time you might have spent seeing other art. E.g., for me, I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking that, I still enjoy hearing nice music that isn't tailored as much to my tastes. Thus EA art is less valuable in a world that is already full of art. But EA art could be instrumentally valuable both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.
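(If it helps to see the arithmetic spelled out, here's a back-of-the-envelope sketch of that experience-based valuation. The per-viewing numbers are invented purely for illustration.)

```python
# Toy model: an artwork's value = sum of the value of the viewing experiences it produces.
# All numbers are hypothetical.

def artwork_value(num_viewers, value_per_viewing):
    """Linear model: total value scales with the number of experiences."""
    return num_viewers * value_per_viewing

# Naive linear ratio: a million viewers vs. ten thousand viewers of the same work.
print(artwork_value(1_000_000, 1.0) / artwork_value(10_000, 1.0))  # 100.0

# Countervailing factor from above: suppose the first 10,000 viewers are connoisseurs
# who get more out of it (1.5) while the remaining 990,000 get somewhat less (0.9).
connoisseurs_only = artwork_value(10_000, 1.5)
mass_audience = artwork_value(10_000, 1.5) + artwork_value(990_000, 0.9)
print(mass_audience / connoisseurs_only)  # ~60.4 rather than 100
```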
So, to be clear, I don't see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I see flaws in other systems. There are of course flaws in myself, as I illustrate below.
And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic, and you could almost say quantum, in how we are connected to other people...living with it is vital for happiness
I think it's important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally!

I'll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed. But other EAs weren't nearly as interested as I was. I would've argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we don't respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn't make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn't care). I ended up thinking deeply about the war - about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren't getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I've seen are like - holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help.

So, I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley's Heroes, which turned out to be a (probably, mostly) fraudulent organization. Not my best EA moment! From a CU perspective, I performed badly. I f**ked up. I should've been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. And I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions, I just didn't have access to them AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me). But emotionally I couldn't accept that. But you know, I have no doubt that it was myself that was flawed and not my moral system; CU's name be praised! Also, I don't really feel guilty about it, I just think "well, I'm human, I'll make some mistakes and no one's judging me anyway; hopefully I'll do better next time."
In sum: humans can't meet the ideals of (M)CU, but that doesn't mean (M)CU isn't the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.
Edit: P.S. a relevant bit of the Consequentialism FAQ:
5.6: Isn't utilitarianism hostile to music and art and nature and maybe love?
No. Some people seem to think this, but it doesn't make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.
There's a more comprehensive treatment of this objection in 7.8 below.
If art production is critical to EA's ability to maximize well-being and EA is failing to produce it, then this is a failure of EA to be utilitarian enough. Your criticism perhaps stems from the culture and notions of people who happen to subscribe to utilitarianism, not utilitarianism itself. Utilitarians are human, and thus capable of being in error as to what will do the most good.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc. You could say something like: the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).
I think a more apt target for your criticism would not be utilitarianism itself, but rather the cultures and mentalities of those who practice it.
I think we are just using two different definitions of utilitarian. I am talking about maximizing well-being… If that means adding more ice cream or art into agents' lives, then utilitarianism demands ice cream and art. Utilitarianism regards the goal… maximization of the net value of experience.
A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as "advancing the welfare of people within a state." Then, different political systems could be evaluated by how well they achieve that goal.
Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.
So if you're saying that the habits of EAs are not sustainable (and thus aren't doing the greatest good, ultimately), you're not criticizing utilitarianism. Rather, you're saying they are not being the best utilitarians they can be. You can't challenge utilitarianism by saying that utilitarians' choices don't produce the most good. Then you're just challenging choices made by them within a utilitarian lens.
If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there's no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and AFAIK the other common ethical systems would not endorse spreading utilitarianism, especially if it reduces net utility.
Now, "historically utilitarianism has led to less utility" does not strictly imply that in the future "belief in utilitarianism predictably causes the world to have less utility." But it is extremely suggestive, and more so if it looks overdetermined rather than due to a specific empirical miscalculation, error in judgement, or bad actor.
I'm personally pretty neutral on whether utilitarianism has been net negative. The case against is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case for, in my opinion, probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and well-poisoning of good ideas), though my guess is that there isn't much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell's distinction between utilitarianism as a criterion of rightness vs decision procedure is also relevant here.
Note that even if utilitarianism overall is a net-positive moral system for people to have, if it were the case that EAs specifically would be destructive with it, there's still a local case against it.
Itâs amusing how you argue against hardcore utilitarianism by indicating that factoring in an agentâs human needs is indispensable for maximizing impact. To the extent that being good to yourself is necessary for maximizing impact, a hardcore utilitarian would do so.
Utilitarianism is optimizing for whatever agent is operative⊠Humans or robots. Itâs just realizing that the experiences of other beings throughout space and time matter just as much as your own. There is nothing wrong with being extreme and impartial in your compassion for others, which is the essence of utilitarianism. To the extent you are lobbing criticisms of people not being effective because theyâre not taking care of themselves, it isnât a criticism of âhardcoreâ utilitarianism. Itâs a criticism of them failing to integrate the productivity benefits from taking care of themselves into the analysis.
Well yes your logic is perfect, but itâs a lot like the logic of communism...if humans did communism perfectly it would usher in world peace and Utopia...the problem is not ideal communism, itâs somehow it just doesnât fit humanity well. Yours is the exact same argument you would hear over and over when people still argued passionately for communism...âTheyâre just not doing it right!!â...after a while you realize, it just isnât the right thing no matter how lovely on paper. Eventually almost all of them let go of that idealism, but it doggedly held on a long time and Iâm sure that will be the case of many EAâs holding on way to long to utilitarianism.
Hardly anything really does fit us...the best path is to keep iterating reachable modifications wherever you are...I can see the benefits of ideal utilitarianism and I appreciate early EA embracing it with gusto...it got fantastic results, no way to argue with that. To me EA is one of the brightest lights in the world. But Iâve been steering movements and observing them for many decades and itâs clear to me that, as in the OP, EA is transitioning into a new phase or wave, and the point of resistance I come up against when I discuss there being more art in EA is the Utilitarian response of âwhy waste money on aestheticsâ, or I hear about stressed anxious EAâs and significant mental health needs...the only clear answer I see to these two problems is reform the Utilitarian part of EA, thatâs whatâs blocking it moving into the next era. You can run at a fast pace for a long time when youâre young...but eventually it just isnât sustainable. Thatâs my thesis...early EA was utilitarian awesome, but time moved on and now itâs not sustainable anymore.
Changing yourself is hard, Iâve done it a few times, usually it was forced on me. And I totally get this is not obvious to most in EA...itâs not popular to tell utilitarians, âdonât be utilitarianâ...but itâs true, you should not be so utilitarian...because thatâs for robots, but youâre human. Itâs time to move on to a more mature and sustainable path.
Well⊠Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves âstrong menâ and bloodshed, the people who lead a communist regime tend to be strongmen who rule with an iron grip (âfor the good of communismâ, they might say) and are willing to use murder to further their goals. Thanks to this it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isnât based on consequentialist reasoning. Itâs an exaggeration to say itâs based on South Park reasoning: 1. overthrow the bourgeoisie and the government so communists can be in charge, 2. ???, 3. utopia! But I donât think this is a big exaggeration.
Individuals, on the other hand, can believe in whatever moral system they feel like and follow its logic wherever it leads. Taking care of yourself (and even your friends/âfamily) not only perfectly fits within the logic of (consequentialist) utilitarianism, it is practical because its logic is consequentialist (which is always practical if done correctly). Unlike communism, we can simply do it (and in fact itâs kind of hard not to, itâs the natural human thing to do).
Whatâs weird about your argument is that you made no argument beyond âitâs like the logic of communismâ. No, different things are different, you canât just make an analogy and stop there (especially when criticizing logic that you yourself described as âperfectââwell gee, what hope does an analogy have against perfect logic?)
I think whatâs going on here is that youâre not used to consequentialist reasoning, and since the founders of EAs were consequentialists, and EA attracts, creates and retains consequentialists, you need to learn how consequentialists think if you want to be persuasive with them. I donât see aesthetics as wasteful; I routinely think about the aesthetics of everything I build as an engineer. But the reason is not something like âbeauty is goodâ, itâs a consequentialist reason (utilitarian or not) like âif this looks better, Iâm happierâ (my own happiness is one of my terminal goals) or âpeople are more likely to buy this product if it looks goodâ (fulfilling an instrumental goal) or âmy boss will be pleased with me if he thinks customers will like how it looksâ (instrumental goal). Also, as a consequentialist, aesthetics must be balanced against other thingsâwe spend much more time on the aesthetics of some things than other things because the cost-benefit analysis discounts aesthetics for lesser-used parts of the system.
You want to reform the utilitarian part, but itâs like telling Protestants to convert to Catholicism. Not only is it an extremely hard goal, but you wonât be successful unless you âget inside the mindâ of the people whose beliefs you want to change. Like, if you just explain to Protestants (who believe X) why Catholics believe the opposite of X, you wonât convince most of them that X is wrong. And the thing is, I think when you learn to think like a consequentialistânot a naive consequentialist* but a mature consequentialist who values deontological rules and virtues for consequentialist reasonsâat that point you realize that this is the best way of thinking, whether one is EA or not.
(* we all still remember SBF around here of course. He mightâve been a conman, but the scary part is that he may have thought of himself as an consequentialist utilitarian EA, in which case he was a naive consequentialist. For you, that might say something against utilitarianism, but for me it illustrates that nuance, care and maturity is required to do utilitarianism well.)
Yes I appreciate very much what youâre saying, Iâm learning much from this dialogue. I think what I said that didnât communicate well to you and Brad West isnât some kind of comparison of utilitarianism and communist thought...but rather how people defend their ideal when itâs failing, whatever it is...religion, etc. that, âTheyâre not doing it rightâ...âIf you did it right (as I see it) then it would produce much better stuffâ.
EA is uniquely bereft of art in comparison to all other categories of human endeavor: education, business, big tech, military, healthcare, social society, etc. So for EA thereâs been ten years of incredible activity and massive funding, but no art in sight...so whatever is causing that is a bug and not a feature. Maybe my thesis that utilitarianism is the culprit is wrong. Iâd be happy to abandon that thesis if I could find a better one.
But given that EA âattracts, creates and retains consequentialistsâ as you say, and that they are hopefully not the bad kind that doesnât work (naive) but the good kind that works (mature) then why the gaping hole in the center where the art should be? I think itâs not naive versus mature utilitarianism, itâs that utilitarianism is a mathematical algorithm and simply doesnât work for optimizing human living...itâs great for robots. And great for the first pioneering wave of EA blazing a new path...but ulitimately unsustainable for the future.
Eric Hoel does a far better job outlining the poison in utilitarianism that remains no matter how you dilute it or claim it to be naive or mature (but unlike him I am an Effective Altruist).
And of course I agree with you on the âitâs hard to tell one religion to be another religionâ, which I myself said in my reply post. In fact, I have a college degree in exactly thatâChristian Ministry with an emphasis in âmissionsâ where you go tell people in foreign countries to abandon their culture and religion and adopt yours...and amazingly, youâd be surprised at how well it works. Any religious group that does proselytizing usually gets decent results. I donât agree with doing that anymore with religion, but it is surprisingly effective...and so I donât mind telling a bunch of utilitarians to stop being utilitarians...on the other hand if I can figure out a different reason for the debilitating lack of art in EA and the anxious mental health issues connected to not saving enough lives guilt, Iâll gladly change tactics.
If you compare EA to all those other human endeavors I listed above, whatâs the point of differentiation? Why do even military organizations have tons of art compared to EA?
You seem to think if art was good for human optimization then consequentialists should have plenty, so why donât they around here?
Thanks for helping me think these things through.
Thanks for taking my comment in the spirit intended. As a noncentral EA itâs not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe donât have lots of art majors.
Speaking personally, Iâm an engineer and a (unpaid) writer. As such I want to play to my strengths and any time I spend on making art is time not spent using my valuable specialized skills⊠at least I started using AI art in my latest article about AI (well, duh). I write almost exclusively about things I think are very important, because that feeling of importance is usually what drives me to write. But the result has been that my audience has normally been very close to zero (even when writing on EA forum), which caused me to write much less and, when I do write, I tend to write on Twitter instead or in the comment areas of ACX. Okay I guess Iâm not really going anywhere with this line of thought, but itâs a painful fact that I sometimes feel like ranting about. But hereâs a couple of vaguely related hypotheses: (i) maybe there is some EA art but itâs not promoted well so we donât see it; (ii) EAs can imagine art being potentially valuable, but are extremely uncertain about how and when it should be used, and so donât fund it or put the time into it. EAs want to do âthe most impactful thing they canâ and itâs hard to believe art is it. However, you can argue that EA art is neglected (even though art is commonplace) and that certain ways of using art would be impactful, much as I argued that some of the most important climate change interventions are neglected (even though climate change interventions are commonplace). I would further argue that artists are famously inexpensive to hire, which can boost the benefit/âcost ratio (related: the most perplexing thing to me about EA is having hubs in places that are so expensive it would pain me to live there; I suggested Toledo which is inexpensive and near two major cities, earning no votes or comments. Story of my life, I swear, and Iâve been thinking of starting a blog called âNo one listens to meâ.)
I noticed that too, but I assumed that (for unknown reasons) it worked better for big shifts (pagan to Christian) than more modest ones. But I mentioned âProtestant to Catholicâ specifically because the former group was formed in opposition to the latter. I used to be Mormon; we had a whole doctrine about why our religion made more sense and was the True One, and itâs hard to imagine any other sect couldâve come along and changed my mind unless they could counter the exact rationales I had learned from my church. As I see it, mature consequentialist utilitarianism is a lot like this. Unless you seem to understand it very well, I will perceive your pushback against it as being the result of misunderstanding it.
So, if you say utilitarianism is only fit for robots, I just say: nope. You say: utilitarianism is a mathematical algorithm. I say: although it can be put into mathematical models, it can also be imprinted deeply in your mind, and (if youâre highly intelligent and rational) it may work better there than in a traditional computer program. This is because humans can more easily take many nuances into account in their minds than type those nuances into a program. Thus, while mental calculations are imprecise, they are richer in detail which can (with practice) lead to relatively good decisions (both relative to decisions suggested by a computer program that lacks important nuances, and relative to human decisions that are rooted in deontology, virtue ethics, conventional wisdom, popular ideology, or legal precedent).
I did add a caveat there about intelligence and rationality, because the strongest argument against utilitarianism that comes to mind is that it requires a lot of mental horsepower and discipline to be used well as a decision procedure. This is also why I value rules and virtues: an mathematically ideal consequentialist would have no need of them per se, but such a being cannot exist because it would require too much computational power. I think of rules and virtues as a way of computationally bounding otherwise intractable mental calculations, though they are also very useful for predicting public perception of oneâs actions (as most of the public primarily views morality through the lenses of rules and virtues). Related: superforecasters are human, and I donât think itâs a coincidence that lots of EAs like forecasting as a test of intelligence and rationality.
However, I think that consequentialist utilitarianism (CU) has value for people of all intelligence levels for judging which rules and virtues are good and which are not. For example, we can explain in CU terms why common rules such as âdonât stealâ and âdonât lieâ are usually justified, and by the same means it is hard to justify rules like âdonât masturbateâ or the Third Reichâs rule that only non-Jewish people of âGerman or kindred bloodâ could be citizens (except via strange axioms).
This makes it very valuable from a secular perspective: without CU, what other foundation is there to judge proposed rules or virtues? Most people, it seems to me, just go with the flow: whatever rules/âvirtues are promoted by trusted people are assumed to be good. This leads to people acting like lemmings, sometimes believing good things and other times bad things according to whatever is popular in their tribe/âgroup, since they have no foundational principle on which to judge (they do have principles promoted by other people, which, again, could be good or bad). While Christians say âGod is my rockâ, I say âthese two axioms are my bedrock, which led me to a mountain I call mature consequentialist utilitarianismâ. I could say much more on this but alas, this is a mere comment in a thread and writing takes too much time. But hereâs a story I love about Heartstone, the magic gemstone of morality.
For predictive decision-making, choosing actions via CU works better the more processing power you use (whether mental or silicon). Nevertheless, after arriving at a decision, it should always be possible to explain the decision to people without access to the same horsepower. We shouldnât say âMy giant brain determined this to be the right decision, via reasoning so advanced that your puny mind cannot comprehend it. Trust me.â It seems to me that anyone using CU should be able to explain (and defend) their decision in CU terms that donât require high intelligence to understand. However, (i) the audience cannot verify that the decision is correct without using at least as much computing power, they can only verify that the decision sounds reasonable, (ii) different people have different values which can correctly lead to disagreement about the right course of action, and (iii) there are always numerous ways that an audience can misunderstand what was said, even if it was said in plain and unambiguous language (I suspect this is because many people prefer other modes of thought, not because they canât think in a consequentialist manner.)
Now, just in case I sound a bit âroboticâ here, note that I like the way I am. Not because I like sounding like Spock or Data, but because there is a whole life journey spanning decades that led to where I am now, a journey where I compared different ways of being and found what seem to be the best, most useful and truth-centered principles from which to derive my beliefs and goals. (Plus Iâve always loved computers, so a computational framing comes naturally.)
I think a lot of EAs have an above-average level of empathy and sense of responsibility. My poth (hypothesis) is that these things are what caused them to join EA in the first place, and also caused them to have this anxiety about lives not saved and good not done. This poth leads me to predict that such a person will have had some anxiety from the first day they found out about the disease and starvation in Africa, even if joining EA managed to increase that anxiety further. For me personally, global poverty bothered me since I first learned about it, I have a deep yearning to improve the world that appeared 15+ years before I learned about EA, I donât feel like my anxiety increased after joining EA, and the analysis weâre talking about (in which there is a utilitarian justification not to feel bad about only giving 10% of our income) helps me not to feel too bad about the limits of my altruism, although I still want to give much more to fund direct work, mainly because I have little confidence in my ability to persuade other EAs about what I think needs to be done (only 31 karma including my own strong upvote? Yikes! đłđ±)
Is that true? Iâm not surprised if military personnel make a lot of art, but I donât expect it from the formal structures or leadership. But, if a military does spend money on art, I expect itâs a result of some people who advocated for art to sympathetic ears that controlled the purse strings, and that this worked either because they were persuasive or because people liked art. The same should work in EA if you find a framing that appeals to EAs. (which reminds me of the odd fact that although I identify strongly with common EA beliefs and principles, I have little confidence in my ability to persuade other EAs, as I am often downvoted or not upvoted. I cannot explain this.)
My guess is that itâs a combination of
the difficulty EAs have had seeing art as an impactful intervention (although I feel like it could be, e.g. as a way of attracting new EAs and improving EA mental health). Note: although EAs like theoretical models and RCTs demonstrating good cost/âbenefit, my sense is that EA leaders also understand (in a CU manner) that some interventions are valuable enough to support even when thereâs no solid theoretical/âscientific basis for them.
artists rarely becoming EAs (why? maybe selection bias in membership drives⊠maybe artists being turned off by EA vibes for some reason...)
EA being a young movement, so (i) lots of things still havenât been worked out and (ii) the smaller the movement is, the less likely that art is worthy of funding (the explanation for this assertion feels too complicated to briefly explain.)
something else I didnât think of (???)
Wow thanks for your long and thoughtful reply. I really do appreciate your thinking and Iâm glad CU is working for you and youâre happy with it...that is a good thing.
I do think youâve given me a little boost in my argument against CU unfortunately, though, in the idea that our brain just doesnât have enough compute. There was a post a while back from a well know EA about their long experience starting orgs and âdoing EA stuffâ and how the lesson theyâd taken from it all is that there are just too many unknown variables in life for anything we try to build and plan outcomes for to really work out how we hoped...itâs a lot of shots in the dark and sometimes you hit. That is similar to my experience as well...and the reason is we just donât have enough data nor enough compute to process it all...nor adequate points or spectrums of input. The thing that better fits in that kind of category is a robot who with an AI mind can do far more compute...but even they are challenged. So for me thatâs another good reason against CU optimizing well for humans.
And the other big thing I havenât mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness...I think the attraction of CU is that it adds to us the logic side that our inner life doesnât always have...and so the answer is to live with both together...to use CU thinking for the effective things it does, but also to realize where it is very ineffective toward human thriving...and so that may be similar to the differences you see between naive and mature CU. Maybe thatâs how we synthesize our two views.
How I would apply this to the Original Post here is that we should see âthe gaping hole where the art should beâ in EA as a form of evidence of a bug in EA that we should seek to fix. I personally hope as we turn this corner toward a third wave, we will include that on the list of priorities.
Well, okay. Iâve argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if youâre Christian you may be thinking âI have my Rockâ so you feel no need for another.
You could do this, but youâd be arguing axiomatically. A claim like âmy axioms are above those of utilitarians!â would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.
The most important thing to realize is that âthings with intrinsic valueâ is a choice that lies outside consequentialism. A consequentialist could indeed choose an axiom that âart is intrinsically valuableâ. Calling it âutilitarianâ feels like nonstandard terminology, but such value assignment seems utilitarian-adjacent unless you treat it merely as a virtue or rule rather than as a goal you seek after.
Note, however, that beauty doesnât exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universeâno people, no life, no souls/âGod/âheaven/âhell, and no chance life will ever arise. Just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, in the endless void, a billion light years beyond the light cone of this art that can never be seen, there should be one solitary civilization left alive, I argue that this civilizationâs art is infinitely more valuable. More pointedly, I would say that it is the experience of art that is valuable and not the art itself, or that art is instrumentally valuable, not intrinsically. Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art only seen by 10,000 peopleâthough one should take into account countervailing factors such as the fact that the first 10,000 who see it are more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes away time you might have spent seeing other art. e.g. for me I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking that, I still enjoy hearing nice music that isnât tailored as much to my tastes. Thus EA art is less valuable in a world that is already full of art. But EA art could be instrumentally valuable both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.
So, to be clear, I don't see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I do see flaws in other systems. There are, of course, flaws in myself, as I illustrate below.
I think it's important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally! I'll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed. But other EAs weren't nearly as interested as I was. I would've argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we don't respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn't make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn't care). I ended up thinking deeply about the war: about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren't getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I've seen are like, holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help. So I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley's Heroes, which turned out to be a (probably, mostly) fraudulent organization. Not my best EA moment!

From a CU perspective, I performed badly. I f**ked up. I should've been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. And I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions; I just didn't have access to them AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me). But emotionally I couldn't accept that. But you know, I have no doubt that it was myself that was flawed and not my moral system; CU's name be praised! 🙂 Also, I don't really feel guilty about it; I just think "well, I'm human, I'll make some mistakes, and no one's judging me anyway; hopefully I'll do better next time."
In sum: humans can't meet the ideals of (M)CU, but that doesn't mean (M)CU isn't the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.
Edit: P.S. a relevant bit of the Consequentialism FAQ:
If art production is critical to EA's ability to maximize well-being and EA is failing to produce it, then this is a failure of EA to be utilitarian enough. Your criticism perhaps stems from the culture and notions of people who happen to subscribe to utilitarianism, not from utilitarianism itself. Utilitarians are human, and thus capable of being in error as to what will do the most good.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God, etc. You could say something like: the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).
I think a more apt target for your criticism would not be utilitarianism itself, but rather the cultures and mentalities of those who practice it.
I think we are just using two different definitions of utilitarian. I am talking about maximizing well-being… If that means adding more ice cream or art into agents' lives, then utilitarianism demands ice cream and art. Utilitarianism regards the goal… maximization of the net value of experience.
A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as "advancing the welfare of people within a state." Then, different political systems could be evaluated by how well they achieve that goal.
Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.
So if you're saying that the habits of EAs are not sustainable (and thus aren't doing the greatest good, ultimately), you're not criticizing utilitarianism. Rather, you're saying they are not being the best utilitarians they can be. You can't challenge utilitarianism by saying that utilitarians' choices don't produce the most good. Then you're just challenging choices made by them within a utilitarian lens.
If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there's no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and afaik the other common ethical systems would not endorse spreading utilitarianism either, especially if it reduces net utility.
Now, "historically utilitarianism has led to less utility" does not strictly imply that, in the future, "belief in utilitarianism predictably causes the world to have less utility." But it is extremely suggestive, and more so if it looks overdetermined rather than due to a specific empirical miscalculation, error in judgement, or bad actor.
I'm personally pretty neutral on whether utilitarianism has been net negative. The case against is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case for, in my opinion, probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and well-poisoning of good ideas), though my guess is that there isn't much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell's distinction between utilitarianism as a criterion of rightness vs. a decision procedure is also relevant here.
And depending on details, there might indeed be a moral obligation to reduce utilitarianism.
Note that even if utilitarianism overall is a net-positive moral system for people to have, if it were the case that EAs specifically would be destructive with it, there's still a local case against it.