Wholesomeness and Effective Altruism
This is the second of a collection of three essays, ‘On Wholesomeness’. In the first essay I introduced the idea of wholesomeness as a criterion for choosing actions. This essay will explore the relationship between acting wholesomely and some different conceptions of effective altruism.
Tensions
Apparent tensions
Acting wholesomely feels relatively aligned with traditional commonsense notions of doing good. To the extent that EA is offering a new angle on doing good, shouldn’t we expect its priorities to clash with what being wholesome suggests? (It would be a suspicious convergence if not!)
Getting more concrete:
It feels wholesome to support our local communities, but EA suggests it would be more effective to support others far removed from us.
It doesn’t feel wholesome to reorient strategies around speculative sci-fi concerns. But this is what a large fraction of EA has done with AI stuff.
Surely there are tensions here?
Aside: acting wholesomely and commonsense morality
Although I’ve just highlighted that acting wholesomely often feels aligned with commonsense morality, I think it’s important to note that it certainly doesn’t equal commonsense morality. Wholesome action means attending to the whole of things one can understand, and that may include esoteric considerations which wouldn’t get a look-in under commonsense morality. The alignment is more one-sided: if commonsense morality doesn’t like something, there’s usually some reason for the dislike. Wholesomeness will seek not to dismiss these objections out of hand, but rather to avoid such actions unless the objections have been thoroughly understood and felt not to stand up.
The shut-up-and-multiply perspective
A particular perspective often associated with EA is the idea of taking expected value seriously, and choosing our actions on that basis. The catchphrase of this perspective might be “shut up and multiply!”.
Taken at face value, this perspective would recommend:
We put everything we can into an explicit model
We use this to determine what seems like the best option
We pursue that option
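To make that concrete, here’s a minimal sketch in Python of what the face-value procedure amounts to: model each option as a set of (probability, value) outcomes, score each option by its expected value, and pursue the one with the highest score. The options and numbers are invented purely for illustration, not taken from anywhere.

```python
# A toy "shut up and multiply" decision procedure: model each option as a
# list of (probability, value) outcomes, score by expected value, pick the max.
# All options and numbers here are made up for illustration.

options = {
    "support local community": [(1.0, 10)],
    "donate to a top global health charity": [(1.0, 100)],
    "speculative long-shot project": [(0.01, 20_000), (0.99, 0)],
}

def expected_value(outcomes):
    """Shut up and multiply: sum of probability times value."""
    return sum(p * v for p, v in outcomes)

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # -> "speculative long-shot project" (EV 200, vs 100 and 10)
```

Note that anything which didn’t make it into the explicit model gets weight zero; this is exactly the blindspot that the discussion of “straw EA” below picks up on.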
Deep tensions between wholesomeness and straw EA
There’s a kind of simplistic version of EA which tells you to work out what the most important things are and then focus on maximizing goodness there. This is compatible with using the shut-up-and-multiply perspective to work out what’s most important, but doesn’t require it.
I don’t think that this simplistic version of EA is the correct version of EA (precisely because it misses the benefits of wholesomeness; for another angle on its issues see EA is about maximization, and maximization is perilous). But I do think it’s a common perception of what EA principles say, perhaps especially among people who are keen to criticise EA[1]. For this reason I’ll label it “straw EA”.
There is a quite fundamental tension between acting wholesomely and straw EA:
Wholesomeness tells you to focus on the whole, and not let actions be dictated by impact on a few parts of things
Straw EA tells you to focus on the most important dimensions and maximize there — implicitly telling you to ignore everything else
Indeed when EA is introduced it is sometimes emphasised that we shouldn’t necessarily focus on helping those close to us, which could sound like an instruction to forget whom we’re close to
Wholesome EA
I don’t think that these apparent tensions are necessary. In this section I’ll describe a version of effective altruism, which I’ll call “wholesome EA”, that is deeply grounded in a desire to act wholesomely. Although the articulation is new, I don’t think that the thing I’m proposing here is fundamentally novel — I feel like I’ve seen some version of it implicitly espoused (and followed) by people for years.
My purpose in this essay is to show that there is a coherent unifying perspective which captures what’s good about acting wholesomely as well as what’s good about EA. I’ll give some examples of how it might have helped to avoid some kinds of errors that have arisen in EA. But I’ll leave discussion of its advantages and disadvantages, as a set of ideas to spread, until the third essay.
Finding EA within a wholesomeness-focused worldview
Suppose you are principally concerned with acting wholesomely. If you’re unfamiliar with EA, or with the shut-up-and-multiply perspective, you’re likely to choose quite different things than if acting from an EA perspective; e.g. you may put a lot of resources into your local community. (This is one concretization of the apparent tensions discussed above.)
But now suppose you read about, and spend time considering, EA and the shut-up-and-multiply perspective. What is wholesome depends on the whole system. After you’re aware of the case for the importance of helping those who need it most, it is unlikely to continue to feel wholesome to devote all your resources to your local community.
Indeed, on this picture the central thing that EA is doing is calling out the unwholesomeness of getting absorbed in the local and not caring about what’s most effective. This is a pain point, and calling it out — and building structures which better address it — is increasing global wholesomeness. The EA perspective is valuable because it changes our notion of what wholesome action entails — towards things that, at least when things are going right, are more truly wholesome and more likely to lead to a very good world.
Equipped with these new perspectives, you’re likely to look for ways to better understand the larger wholes, and to put resources towards bettering them in effective ways. But you’ll stay connected to a local desire for wholesomeness, not as something to be maximized, but as something to be upheld.
Let’s revisit the concrete tensions from above:
It’s wholesome to participate constructively in our local communities, and people acting on a wholesome EA worldview will continue to do this
e.g. while straw EA might present lines of reasoning like “effective charities will make so much better use of this money than the people serving us” and recommend giving very small or no tips in restaurants, wholesome EA will reject this as unwholesome
But after participating constructively in local communities, people acting on a wholesome EA worldview will often still have a lot of resources which are theirs to allocate as they choose, and will feel it’s wholesome to devote those to the most effective causes they know of
I think it’s straightforwardly wholesome to take ideas seriously, and not look away from potentially big issues just because they sound weird or are unpopular
There are ways of acting in light of this which I think are unwholesome — e.g. talking as though people who haven’t looked into these issues or don’t agree are stupid, or developing disdain for normal institutions; the wholesome EA perspective avoids these
(I do think that how to talk about issues which are not broadly recognized as issues is delicate and can be tricky to get right, and mistakes usually contain some trace of unwholesomeness — there is, after all, unwholesomeness everywhere — but this is less than the unwholesomeness that would come from refusing to think about things that seem like they may be the most important issues of our time)
Again, I don’t think my picture here is a stretch from the normal English sense of the word “wholesomely”. Rather it’s just part of taking the idea of acting wholesomely seriously. Here’s what ChatGPT had to say when I asked it:
Failing to act on the most important things can be seen as a failure to act wholesomely, especially when it reflects a neglect of moral responsibilities, a lack of concern for the well-being of others, or an absence of integrity in prioritizing what truly matters.
Finding wholesomeness within an EA worldview
In some ways the picture I just presented doesn’t have much content. It could be read as “those things you thought were obviously good anyway? EAs should still care about those”. Well, obviously. Except … the reason I’m writing this out is that while it may seem obvious that the final answer should involve caring about them, it’s not necessarily obvious how to subsume them into the EA worldview.
In the last section I implicitly proposed that the whole question, “how to subsume them into the EA worldview”, might be the wrong way up. It’s more natural to think of it as “how to subsume the EA worldview into acting wholesomely”, and this is also easier to answer.
However, some readers may be starting with an EA worldview. In that case we might need to construct arguments for the importance of wholesomeness as a principle for high-level decision-making. I think this is possible; it is part of the purpose (executed imperfectly) of these three essays. My tentative belief is that such arguments end up being a roundabout way of putting wholesomeness on top, but I have not yet argued for that.
Still, the preceding section demonstrates that adopting a worldview which puts wholesomeness at the top need not involve giving up on the heart of EA. I think it will achieve many or all of the things of central importance to the EA worldview, and will additionally have advantages (discussed across the three essays) which are valued by the EA worldview. And the EA worldview is unusually committed to recommending the actions which are robustly predicted to have good consequences, whatever they are (hence including shifts in worldview). I have, therefore, argued that wholesome EA is at least a plausible contender worldview to adopt, even if starting with a purely EA worldview which does not value wholesomeness for its own sake.
Central challenges of wholesome EA
In this section I’ll consider two central challenges that must be grappled with by people acting along wholesome EA lines. These challenges aren’t unique to wholesome EA; in fact, versions of them can be framed for anyone trying to act wholesomely, or trying to do good. But a wholesome EA perspective makes them especially salient — in a way that I think is good. Since anyone acting must grapple with these challenges and make tradeoffs, it seems better to make those choices consciously than implicitly or without noticing that there are tradeoffs.
Wholesomeness vs expedience
Often people would prefer, all else equal, for things to be as wholesome as possible, but feel that all else isn’t equal: making things more wholesome takes effort, and that’s effort that could be put to other good purposes. Navigating this properly means handling two related balancing acts:
1. When to prioritize moving forward faster vs addressing more known pieces of unwholesomeness
2. When to prioritize moving forward faster vs searching for unknown aspects of unwholesomeness
Of these, #1 is the more straightforward: if you have the different considerations in front of you, you can hold them up and ask “what is the more wholesome thing to do?”. Sometimes this will mean moving forward and accepting the remaining issues (since your sense of what is wholesome should definitely be tracking the importance of moving faster); sometimes it will mean slowing down to deal with them. Ideally you accept that you are making tradeoffs, falling into neither the failure mode of perfectionism nor that of pretending problems aren’t there. Of course we all make many decisions of this type, and we frequently misjudge at the margin exactly where to draw the line, but that’s a kind of challenge we just have to face and try to improve on.
I think that #2 can be a bit more gnarly. In principle it’s the same kind of decision, but you’re weighing up the known costs of slowing down versus looking for unknown unknowns. This is a subtle kind of judgement call — made harder by the fact that in the moment it’s often not even recognised as a judgement call. I think people can fall into multiple traps:
When single-minded about their objective, people can see it as a wasteful distraction to investigate issues that don’t seem likely to help with the core goal (so they under-explore)
Feeling like it’s crucial to understand all possible issues, people can be reluctant to move forward until they feel that they’ve got a comprehensive grasp of things (so they over-explore)
Oddly enough I think that EAs may be unusually vulnerable to both of these traps.[2] A relatively normal response to the issue is to invest a modest amount of time in searching for issues and then move forwards. But EAs care a lot about doing the right thing, and about optimizing. They’re unusually susceptible to talking themselves into single-mindedness about their objectives, and they’re also unusually susceptible to putting effort into trying to optimize everything, even when that doesn’t make sense. After soaking in EA ideas, the kind of half-arsed search that someone might normally do doesn’t intuitively feel like the right approach — but I think for deep pragmatic reasons it often is.
Global vs local wholesomeness as the focus of attention
We only have so much attention to go around. In the pursuit of wholesomeness, how should we best allocate it between the global and the local? By “local” I don’t just mean “local communities”, but also the work we’re doing.
We can imagine that we’re parts of systems larger than ourselves which are collectively trying to make good things happen:
In seeking to contribute as well as we can, we could focus more on being a good role-player, learning how to be an excellent cog in the system we’re embedded in, without too much regard for parts of the system far from us.
This works well if we’re already in the right position, and if making good decisions doesn’t require understanding how our work will be used elsewhere. It works less well if we’re giving up on opportunities to change role or to shift the larger-scale behaviour of the system.
Alternatively we might focus on being a good strategist, building the best models we can of the big picture and what directions it would be best if it moved in, without fretting too much about our local interactions or how we’re going to contribute.
This may be epistemically useful for people in the system to spend time doing, but it significantly limits the individual’s ability to coordinate with and contribute well to local systems around them.
Of course both of these extremes are somewhat unwholesome. The ideal must be some balance, where we pay more attention to local details than to details elsewhere in the system, but still maintain some awareness of the big picture and our role in it.
Where does this balance lie? That’s a subtle question, and the correct answer may vary significantly with the details of the person choosing and their situation. My aim here is not to provide the answer for people, but to point out the existence of the question and the necessity of making tradeoffs. If people realise that they’re making decisions on this, whether consciously or not, I think it will be easier to take a bit of time to reflect and consider whether the current balance feels wholesome or needs adjustment.
EA mistakes as failures of wholesomeness
With my definition of wholesomeness, one might think that just about any mistake is unwholesome — “the system just wasn’t putting enough weight on [other parts of the whole]”.
Nonetheless I think there’s a useful distinction between:
Errors of wholesomeness, where different attitudes towards parts of the whole might predictably have avoided the error
e.g. single-minded pursuit of an outcome; treating some issue as toxic in ways that block good judgements; refusal to consider certain classes of effect
Mere errors of assessing the whole, which fail to make the judgements that are truly best for the whole, but not because of any predictable blindspot
I think that a good fraction of serious mistakes in EA have been errors of wholesomeness, and that if we had systematically better attitudes there, that might make an important difference. (To some extent the claim here is that errors of wholesomeness are more blameworthy than other errors, precisely because they are potentially foreseeable and avertable.)
I think that EA has been especially vulnerable to errors of this type, for reasons gestured at when we looked at the tensions between wholesomeness and straw EA. As a set of ideas, the value of EA comes in the important novel perspectives it offers. These can point to the crucial importance of things otherwise overlooked. Little wonder that people getting swept up in this have sometimes lost touch with all the normal (boring and non-crucial) reasons why normal (boring) things matter, especially if nobody has done the work to translate their importance into an EA ontology.
Even if you grant that many serious historical mistakes have had this character (which I won’t try to justify in the body of the essay as I’m trying to inhabit the general perspective rather than be too anchored to things in 2024), this isn’t enough to demonstrate that EA should pay more attention to wholesomeness. But it’s enough to demonstrate that there would be some concrete benefits to doing so. I’ll go deeper on the general question of whether it’s worthwhile in the final essay.
[1] Mathematically-inclined readers might also be interested in Garrabrant’s sequence of posts on geometric rationality, which explores some gentler alternatives to shut-up-and-multiply, and feels thematically related to what I’m saying here.
[2] I’m not confident about this; at least there are EA-mindset mechanisms which lead to each of the traps, but perhaps the general population has one or both of them at higher prevalence for other reasons.
Examples of EA errors as failures of wholesomeness
In this comment I’ll share a few examples of the kind of thing I mean by failures of wholesomeness. I don’t mean to over-index on these examples; I actually feel like a decent majority of what I wish EA had been doing differently relates to this wholesomeness stuff. However, I’m choosing examples that are particularly easy to talk about — around FTX and around mistakes I’ve made — because I have good visibility of them, and in order not to put other people on the spot. Although I’m using these examples to illustrate my points, my beliefs don’t hinge too much on the particulars of these cases. (But the fact that the “failures of wholesomeness” frame can provide insight on a variety of different types of error does increase the degree to which I think there’s a deep and helpful insight here.)
Fraud at FTX
To the extent that the key people at FTX were motivated by EA reasons, it looks like a catastrophic failure of wholesomeness — most likely supported by a strong desire for expedience, and by a distorted picture in which people’s gut sense of what was good was dominated by the terms for which they had explicit models of impact on EA-relevant areas. It is uncomfortable to think that people could have caused this harm while believing they were doing good, but I find that it has some plausibility. It is hard to imagine that they would have made the same mistakes if they had explicitly held “be wholesome” as a major desideratum in their decision-making.
EA relationship to FTX
Assume that we don’t get to intervene to change SBF’s behaviour. I still think that EA would have had a healthier relationship with FTX if it had held wholesomeness as a core virtue. I think many people had some feeling of unwholesomeness associated with FTX, even if they couldn’t point to all of the issues. Focusing on this might have helped EA to keep FTX at more distance, not to extol SBF so much just for doing a great job at making a core metric ($ to be donated) go up, etc. It could have gone a long way towards reducing inappropriate trust if people felt that their degree of trust in other individuals or organizations should vary not just with whether they espouse EA principles, but with how wholesomely they act in general.
My relationship to attraction
I had an unhealthy relationship to attraction, and took actions which caused harm. (I might now say that I related to my attraction as unwholesome — arguably a mistake in itself, but compounded because I treated that unwholesomeness as toxic and refused to think about it. This blinded me to a lot of what was going on for other people, which led to unwholesome actions.)
Though I now think my actions were wrong, at some level I felt at the time like I was acting rightly. But (though I never explicitly thought in these terms) I do not think I would have felt like I was acting wholesomely. So if wholesomeness had been closer to a core part of my identity I might have avoided the harms — even without getting to magically intervene to fix my mistaken beliefs.
(Of course this isn’t precisely an EA error, as I wasn’t regarding these actions as in pursuit of EA — but it’s still very much an error where I’m interested in how I could have avoided it via a different high-level orientation.)
Wytham Abbey
Although I still think that the Wytham Abbey project was wholesome in its essence, in retrospect I think that I was prioritizing expedience over wholesomeness in choosing to move forward quickly and within the EV umbrella. I think that the more wholesome thing to do would have been, up front, to establish a new charity with appropriate governance structures. This would have been more inconvenient, and slowed things down — but everything would have been more solid, more auditable in its correctness. Given the scale of the project and its potential to attract public scrutiny, having a distinct brand that was completely separate from “the Centre for Effective Altruism” would have been a real benefit.
I knew at the time that that wasn’t entirely the wholesome way to proceed. I can remember feeling “you know, it would be good to sort out governance properly — but this isn’t urgent, so maybe let’s move on and revisit this later”. Of course there were real tradeoffs there, and I’m less certain than for the other points that there was a real error here; but I think I was a bit too far in the direction of wanting expedience, and of expecting that we’d be able to iron out small unwholesomenesses later. Leaning further towards caring about wholesomeness might have led to more correct actions.
The more I read of these essays the less I agree with this. On my subjective authority as a native English speaker, your usage seems pretty far from the normal sense to me. I think what you’re gesturing at is a reasonable concept but I think it’s quite confusing to call it “wholesome”.
As some evidence, I kept finding myself having to reinterpret sentences to use your meaning rather than what I would consider the more normal meaning. For example, “What is wholesome depends on the whole system.” This is IMO kind of nonsensical in normal English.
I’m guessing that the word is just used differently in different contexts or circles? Your comment made me wonder how much I was just stuck in my own head about this. So I asked ChatGPT about the sentence you’re labelling as nonsensical, and it said:
Of course I guess that ChatGPT is pretty good at picking up on meanings which are known anywhere, so this is more evidence that I’m aligning with one existing usage of the word than that all native English speakers will understand it that way (and you’re providing helpful evidence against the latter claim).
The same could be said about e.g. many fake aphorisms people come up with. Something can function to make you pause for thought, but still be nonsensical.
It’s also obvious that ChatGPT is bullshitting, because such a short sentence is almost by definition not “comprehensive”.
OK, fair complaint.
Another data point that this is how some other people understand the word is this comment by Gordon S Worley on LessWrong:
I think that’s just a minority of people retroactively imagining an additional meaning to the word. The ‘whole’ in wholesome is in contrast to being injured, not in contrast to something being partial. So you get: uninjured → healthy → beneficial → morally good. Nothing to do with examining parts vs wholes.
(‘Wholesome’ was a word (‘hailasam’) before English was even its own language, when whole/hail primarily meant being healthy. So it pretty much bypasses the idea of ‘leaving nothing out’. It’s like saying that a brainstorming session has to be some sort of violent, disturbing process because it contains the word ‘storm’ in it. Indeed there’s a completely separate meaning for ‘brainstorm’ which is more like this—a moment of mental confusion essentially, which is basically the opposite of a brainstorming session.)
I appreciate the etymological details, and feel a bit embarrassed that I hadn’t looked into that already.
I guess I’d describe what’s going on as:
The original word meant “healthy”
I’m largely using it to mean “healthy” in the sense of “healthy for the systems we’re embedded in” (which I think is a pretty normal usage)
I’m adding a flavour of “attending to the wholeness” (inspired by Christopher Alexander), which includes both “attending to all the parts” (new) as well as “attending to making things fit with existing parts” (essentially an existing meaning, as this is part of healthy)
This is vibe-wise supported by the presence of the string “whole” as part of “wholesome”
This makes it easier for me (and I guess others) to conceive of and remember this extra sense
However, it’s etymologically just a coincidence
Does that seem fair?
Executive summary: Acting wholesomely feels aligned with commonsense morality but has tensions with some EA principles like “shut up and multiply”. However, a “wholesome EA” perspective can incorporate the value of EA while avoiding unwelcome implications.
Key points:
Wholesome action considers the whole system rather than maximizing narrow metrics. This contrasts with a “straw EA” fixation on the most important things.
Wholesome EA subsumes EA priorities into a larger concern for constructive participation in communities. It avoids unwholesome implications like neglecting loved ones.
Central challenges are balancing wholesomeness and expedience, and allocating attention between global and local concerns.
Many EA mistakes reflect failures of wholesomeness like single-minded over-focus or refusal to consider certain effects. A wholesome EA could help avoid such errors.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Having complained about the summary on the last one, I thought I should say that this summary seems pretty decent :)