Cross-posting from my substack, where I ask: What would an “Alt-EA” beneficentrist movement look like?
Three Kinds of Critics
Some people just aren’t very altruistic, and so may quietly dislike Effective Altruism for promoting values that conflict with their interests. (It’s easy to see how wealthy academics might be better off with a moral ideology that prioritizes verbiage over material outcomes, for example.) One doesn’t often hear this perspective explicitly voiced, but—human nature being what it is—I expect it must be out there.
Others may be broadly enthusiastic about the idea of Effective Altruism, but have some concerns about the movement as it actually stands. From here one might offer friendly/internal critiques of EA: "Here's how you might do better by your own lights!" And my sense is that good-faith critiques of this sort tend to get a very positive reception on the EA forum. (Indeed, there's now a $100k incentive for criticism of EA and its current priorities.)
Finally, a third class of critics claims to agree with the beneficent values of effective altruism, but regards the actual EA movement as hopelessly misguided, ineffective, cultish, a mere smokescreen for political complacency, or what have you. (These sorts often use sneer quotes to speak of the "Effective" "Altruism" movement.) I find this final group more puzzling. As Jeff McMahan has noted, "the philosophical critics of effective altruism tend to express their objections in a mocking and disdainful manner… suggestive of bad faith."
One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency. No need to really do anything beneficent so long as you give it lip-service, and insist that the rest is a “collective responsibility” best left to the state to take care of (just don’t hold your breath…). I feel like these critics are discouraging real-life beneficence, and thereby doing real harm.
Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way. If a large number of genuinely beneficent people believed that actually-existing-EA was going about this all wrong, I’m surprised that they haven’t set up an alternative movement that better pursues these goals while avoiding the shortcomings they associate with traditional EA. (Perhaps they’d prefer different branding. That’s fine; I’m not concerned here with the label, but with the underlying values and ideas.)
What might an Alt-EA movement look like?
I’d genuinely love to hear from critics what they think a better alternative might look like. (I think it’s now widely acknowledged that early EA was too narrowly focused on doing good with high certainty—as evidenced through RCTs or the like—perhaps in reaction to the aid skepticism that seemed like the major barrier to uptake at the time. But EA is now much more open to diverse approaches and uncertain prospects so long as a decent case can be made for their expected value being high.)
Maybe the alternative would involve a greater political focus, with local community organizing being a major cause priority (as an implicit form of community-building)? Maybe it would avoid utilitarian/cosmopolitan rhetoric, and focus more on meeting the median voter where they are—with appeals to more local and emotive values such as solidarity—with an eye to encouraging many small nudges towards a better world? Maybe it would be more optimistic about the likely outcomes of a political “revolution”, and less optimistic about technocratic interventions? I’m not too sure what the epistemic basis for any of this would be, but perhaps one could lean hard into “self-effacingness” and insist that globally better results can be achieved by not aiming too directly at this goal, along with being guided more by hope than by evidence?
Might it then turn out that already-existing popular political movements can be viewed as alternative (albeit highly indirect) implementations of beneficentrism after all? I’m dubious—it seems awfully fishy to just insist that one’s favoured form of not carefully aiming at the general good should somehow be expected to actually have the effect of best promoting the general good. While it clearly wouldn’t be optimal to make every single decision by appeal to explicit cost-benefit analysis, it seems crazily implausible that (in realistic circumstances) it somehow maximizes expected utility to never employ direct utilitarian reasoning. It’s notable that the utilitarian philosophers who have thought most about this issue end up advocating for a multi-level approach (using explicit utilitarian reasoning in unusual or unexpected high-stakes situations—e.g. pandemic policy—and during “calm, reflective moments” to help guide our choice of everyday heuristics, strategies, and virtues, for example).
But I’d be curious if others—especially those who sneer at actually-existing EA—are more inclined to defend the optimality of existing political movements. Or if they have an entirely different conception of what Alt-EA should look like?
Moral Sincerity
One obvious possibility is that those hostile to EA aren’t truly sympathetic to beneficentrism at all, and really just have worse values. I’d be happy to see that hypothesis refuted. I think it’d be especially exciting to see an entirely new Alt-EA ecosystem spring up around those other beneficentrists who sincerely pursue the general good in a different way, or with a different rhetorical/ideological framing, that maybe appeals better to a different audience than traditional EA does. (So long as this alternative movement has good epistemics and doesn’t seem likely to be positively counterproductive and bad for the world, that is!)
Given the risk of paying empty lip-service to good values, I think it’s worth making the challenge explicit: if not EA, how do you move beyond cheap talk and take your values seriously—promoting them in a scope-sensitive, goal-directed, outcome-oriented way?
I find it so frustrating that the hostile critics don’t even seem to be interested in this question! Whatever your values are, there are so many ways that you could more effectively promote them through donations, direct work, and advocacy (that is explicitly directed towards encouraging more donations and direct work for the best causes). So even if EA is somehow misguided, I think it could still do the world a great service by encouraging more people to actually (and effectively) do more good: to achieve the EA aim, even if they think that the existing EA movement is (for whatever reason) failing in its ambitions.
I really think the great enemy here is not competing values or approaches so much as failing to act (sufficiently) on values at all. Of course, we’re all driven by a variety of motivations, many no doubt less lofty than we would normally like to think. The extent to which our professed values are “sincere” is probably best understood as a matter of degree, rather than a sharp binary distinction between sincere akratics (who don’t always manage to live up to their ambitious values) and outright hypocrites (who don’t genuinely hold the professed values at all). No one with ambitious values always manages to live up to them, but I wouldn’t want fear of being labelled a “hypocrite” to disincentivize having ambitious values at all. (There are worse things in the world than hypocrisy!)
So I’m trying to find a way to frame my point without using the H-word—I grant that we’re all a messy mix of motivations, heavily influenced by the contingent circumstances in which we find ourselves. And, let’s face it, life can be hard—even those in privileged material circumstances aren’t always in a mental space to be able to do more than just get through the day. I want to explicitly grant all that.
But if some social movements or moral ideologies do more to bring our actions in line with our (ambitious) expressed values, then that seems good, important, and worth encouraging. Good social norms can make it much easier for us to do good things. And it seems to me that EA is nearly unique in this regard. It just seems remarkably rare for people to treat their values seriously in the way that EA invites us to.
And so, while I guess non-EAs wouldn’t be thrilled to be charged with failing to take their values seriously, and I certainly don’t mean to be gratuitously offensive, I hope that pointing out this disturbingly common disconnect might help to make it less common. It would be great if, in order to avoid this objection, more non-EAs worked to make their own groups and practices more morally ambitious and goal-directed. It would be great to see more embrace an ethos that was oriented more towards promoting good outcomes and less towards expressive symbolism. It would be great, in short, for others to achieve what EA at least tries to achieve.
I think this parenthetical misses what is actually hard about forming solidarity out of good intentions, which is that disagreements may run so deep that it may feel mutually negative-sum, or like the alt team necessarily has to lose in order for us to win. I'm not saying it's definitely like this, but it's kind of worst-case/securitarian thinking to prepare your model for that kind of scenario.
I have some anecdotes.
A guy at my last job bounced off EA quickly because he didn't like a conversation he had with one of us about mental health. He felt like mental health was obviously the number one cause area, and thought the fact that for us it's only vaguely in the top 10-15, if that, was a signal that we were totally borked. I was gravely disappointed that he didn't reason more like "the reason they're not serious about mental health is that they haven't met me yet, I'd better post my arguments on the forum" or "wow, someone should really do something about that, it might as well be me" and found an org. I encouraged him to do both of these things, but that wasn't his mindset at all. I think this is what missed opportunities for alt-EA look like: people have their pet criticisms but fail to take themselves seriously.
I was talking with one of my oldest friends, not an EA whatsoever at this point (she eventually grokked the idea that 1 in 900 mosquito nets saves a life and signed up for the newsletter, still is far from card-carrying, but this was prior to any of that anyway), about the popularity of climate change. It seems like few beliefs are more conventional right now than "climate change really bad", and I asked her why, anecdotally, every single person who's told me they don't want to have kids because of climate change (not because of the broader GCR conversation, but strictly because of climate change) was failing to do energy science or related engineering, and heck I'd even settle for policy theories of change or serious activism. She said, and this is a point for intellectual diversity, because I don't think I would've encountered this if I only talked to EAs, "no, that's a militaristic 'draft' mindset. If everyone has to fight, then what is there left to fight for?", and broadly defended people's entitlement to believe there are problems that they're not personally fixing. This, plausibly, explains a cluster of the memespace around what we interpret as missed opportunities to start alt-EA movements! Is the mentality of observing broken stuff and deciding to fix it unusually soldiery? Can we slip some cash to a viral marketing expert to instill that mentality in people, without associating it with EA? Is this plausibly an actual crux separating the alt-EAs we'd like to see from actually-existing critics?
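(A quick sanity check on that 1-in-900 figure, for readers who haven't seen it before. The roughly $5-per-net price is my own assumption, not a number from this thread:

$$\$5\ \text{per net} \times 900\ \text{nets per life} \approx \$4{,}500\ \text{per life saved},$$

which is in the same ballpark as GiveWell-style cost-effectiveness estimates for bednet distribution.)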
One more comment:
I think EA has "a borg property", i.e. like the Borg, the entity/civilization from Star Trek that could assimilate anything (a trope expressing a fear of homogeny that some critics have called an affectation of the western side of the Cold War). I think EA is nimble, a minimal set of premises that admits lots of different stuff and adapts, and I think it is genuine about its enjoyment of criticism. But this means that it literally eats everyone above a certain quality bar (which is good). There's an old saying, "who exactly is a rationalist? Simply someone who disagrees with Eliezer Yudkowsky", which I think sums up a lot about our culture.

The difficult thing about separating a critic (someone who helps you find a path through action space that deletes their complaint) from a complainer (someone who's the opposite of that) is that, while you have to protect your attention from complainers to a nontrivial degree, you may accidentally block a high-quality adversary, because what seems like a complaint may actually be a criticism that's just really, really hard to address, and you don't know the difference. Trashing your progress and going back to the drawing board is painful; we should expect cognitive biases to make it feel even more unpleasant, or to tip the scale against doing it! "So you're saying I have to throw out bourgeois economics and arm the malaria patients so they can fight imperialism?" may appear like a hostile interaction to you while also being the critic's earnest attempt to help you be more morally correct with respect to their empirical beliefs. We have, as a tradition, heuristics for honing our sense of whose epistemics we trust, whose beliefs are most true, and so on, but they're not infallible. This only gets worse when you remember that if you're serious about intellectual diversity, you have to actually tolerate very different norms. We can't stay in our comfort zone, discourse-norms-wise, even if we think our norms of discourse are superior, if we're serious about actual intellectual diversity.
TLDR, a tepid defense of admitting more things that seem like complaints into the overton window of proper criticisms.
I found this comment really interesting and helpful. Thank you!
Before EA, I think there were at least two such movements:
a particular subset of the animal welfare movement that cared about effectiveness, e.g., focusing on factory farming over other animal welfare issues explicitly because it’s the biggest source of harm
AI safety
Both are now broadly considered to be part of the EA movement.
Also cost-effectiveness analyses in general, of which only a subset is in EA.
I agree this is common; it's what I most often confronted in college at Cornell. Oh, I should actually just be focused on living sustainably, not being racist, and participating in democracy, and this will be an optimally ethical life? Convenient if true!
I have several friends who are members of Direct Action Everywhere. I think DXE, as I'm exposed to it, does present the sort of alt-EA that you are asking about. I think that many DXE members could non-hypocritically comment that EA is complacent / that EAs are generally more complacent people than themselves.
While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic.
I do think that you can interpret DXE as a general-good, "beneficentrist" org, given that, if you are not longtermism-pilled, it is IMO reasonable to say that animal welfare is the highest moral priority, and I think this is their actual belief. It's an org for people to do the most important thing as they see it, not for them to just do a thing.
RE: Complacency:
The problem is that you can also convince them about many many things.
Unfortunately, an issue with orgs that draw on ideological tones, like "social movement" organizations, is almost constant churn and doubt over probably well-understood ideas, like resource allocation, and over internal institutions, like long-term planning, that other orgs solved long ago.
On the other hand, they constantly indulge things that seem objectively bad, like ignoring evidence against their theories of change, and spending enormous amounts of time on politics and abstract objects that seem unproductive and even overshadow EA's excesses.
It may be prejudice, but having been inside and seen several organizations of various classes, this looks overdetermined for dysfunction once these orgs reach any scale.
Again, at risk of bias, it's hard not to indulge my personal suspicion that these intensely chaotic environments select for self-replication and media attention, with the following results:
The reason they exist, or at least the reason we hear about these particular orgs, is their ability to be aggressive
Their ability to focus and gather resources is limited
The aggressive orgs are selected for over more functional, slower orgs, crippling the ecosystem for strong social organizations.
The leaders and cultures arising from them are suspect culturally and "epistemically"
Sorry, I am not sure I follow this post. I am not really commenting on how much DXE should grow; I'm not involved. However, if I was looking for those "moral optimizers" outside of EA that are surprisingly hard to find, I think that one place you can find them is DXE. It's an existence proof—there are IMO sincere critics, as the OP discusses.
If I were going to discuss whether DXE should grow, I would just try to list what they have accomplished and do some estimates of the costs. Heuristics about types of organization, the quality of the cultures involved, etc., would be of lower interest to me.
The parent comment wasn't really a reply to you, and in some sense neither is this comment (but it's not intended to exclude or talk past you either).
Basically, I am observing the mini-activism being done by you, which is one instance of a broader class of what I see as activity and related agendas trying to steer EA in a certain way. Although they have wildly different object-level aims, what these people have in common is trying to steer EA using models, beliefs, and patterns from left social movements.
My base model ideology is basically coastal liberal, so I'm not opposed to your end goal (modulo the issues of very different beliefs, like timelines, values of sentient entities, and how actual execution/tractability/competence affects the end result). In fact, I suspect my goals are almost identical to yours.
It’s rather that I believe:
Many of these activists are lemons, and they won't be able to execute on their goals, for a variety of reasons, not least of which is a lack of understanding of the people and institutions they criticize and want to change.
The viability here is not even close. To calibrate, even if they were empowered by 1000%, they would probably still fail and result in tumult.
More substantively, I think a reasonable interpretation is that the “establishment in power”, as you might say, are perfectly aware of everything in this comment. For virtuous reasons, they won’t accept this activism, and have to react by shutting down a lot of progress for fear of dilution and tumult.
I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
In addition to this bycatch, the ideas and language these activists are using are resulting in a collision on material issues (e.g. "decentralization") that they don't actually know how to solve but others do, and now it's less tenable for the viable people to voice these issues.
It’s a worse form of crowding out, a sort of Gresham’s law, but it’s further counterproductive in that it’s increasing the pressure, empowering further bad activism, which is pathological.
This is counterproductive and blocks substantive progress; if I'm correct above, it really hurts many issues we care about. There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn't a concern.
I believe I’ve studied various movements/orgs/ideologies for this specific reason to understand and resolve these scenarios.
Instead of just saying the above (which I just now did), my comment was setting up the background, with (maybe not quite) object-level discussion of some social movements, sort of to interrogate how this would play out and how to think about addressing this.
I think this discussion would have to be several layers less removed from the object level in order to contain insight.
Your explicit claim seems to be that fear of leftism / leftist activist practices is responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating slower than they would if they did not have this fear. Your beliefs about the magnitude of this slowdown are unclear. (Do you think growth has been halved? Tenthed?)
You seem to have strong priors that this would be true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community building initiatives over the past 5 years that have tried to get funding which were rejected, the EA Hotel and some other thing for training AI safety researchers, and the reasons for rejection were both specific and completely removed from anything you have discussed.
--
I chose the most contentful and specific part of your writing to react to IMO. I think your commentary would be helped by containing more content per word (above zero?)
I repeat that my comments are for onlookers, or to lay out pieces of a broader argument and interrogate responses, and won't be understood right now.
I encourage you to consider my claim that my goals are aligned to yours and then consider the more generous version of my views.
Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
To get to evidence about the "leftist patterns of activism" concerns:
I don’t think you will hear this explicitly, for the very reasons you demonstrated in your reply. Explicitly excluding an ideology or laying naked the uselessness of activism looks and feels really bad and will be reacted to with immense hostility, even if it is not the direct issue and the real issue is distinct and motivated principledly.
Instead, the word here is "dilution", which is a major concern that you will hear voiced explicitly in public, and even then it's often voiced reluctantly. I think that most people who have this concern fear activism, or fear strong underlying beliefs that aren't aligned with EA.
I think there are very few examples of outright grifting or wholesale mimicry, and if examined most incidents involve views of the world EAs find misguided or counterproductive.
I have spent years near or around environmental and conservation movements; I appear dyed in the wool and would pass easily. Models from these experiences strongly indicate that the problematic behaviour uses these patterns (which afflict those movements too), for example because these people tell me so outright.
There are several instances (not necessarily the people referenced above) of people who are talented, good communicators, and have spent time in community building. These people have been rejected without explanation or remedial instructions, despite outlining specific plans to do work and communicate EA material. It seems like one should just fund dozens of these nascent people if the concern is merely that they are just OK. It doesn't seem costly to hire an "OK" community builder, but it seems extremely costly to risk funding people who will replicate themselves and entrench behaviour focused on building constituencies around prosaic or non-EA causes, which I believe describes leftist activism well.
To be clear, as before, I’m not saying these rejected people are activists or misguided. The concern is bycatch from these filters.
Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit they involve similar patterns of accusations of centralization, censorship, and dismissal by an out-of-touch, self-interested central authority on causes no one cares about.
I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? For you to change yours? Would I engage in less "leftist micro-activism"? Would I decide DXE is probably net harmful instead of net positive? Would I start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural/ideological association.
--
I agree that the concerns around “dilution” are evidence of the phenomenon you are discussing.
It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.
--
TLDR:
Yes, this seems to follow the format of your entire thesis
Agrippa is engaging in, or promoting, X (X is not particularly specified in Charles's comments, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster's)
X or some subset of X is often involved in the toxic and incompetent culture of toxic and incompetent leftist activism
Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded less things for fear of it) so Agrippa should not engage in or promote X
At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
[ This comment is addressing Agrippa and not related to my other comments/beliefs about leftist activism ]
This reply is generous and thoughtful of you.
Yes, you are exactly right in your thoughts here.
The truth is that I didn't mean to write about you, Sapphire, or DXE at all. As you noticed, there are, in fact, limited or no object-level issues related to you in my comment chain.
This is deliberate. I guess I picked you to start this chain for this very reason. As you say:
As mentioned, I was/am in these circles (whatever that means). I don't really have the heart to attack the work and object-level issues of someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
For you, that’s not a concern, because I’m not even talking about the issues you care about. I also think your issues have different emotional character and are more abstract (30M of funding to a defecting AI safety org).
Another motivation of mine that is more (less?) principled is that I believe you and Sapphire are picking an unreasonable fight with Michael St Jules, in this comment chain.
I think he was talking about specialization ("This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations"), and I thought you ignored this reasonable explanation in order to try to pin down some excessive deference or favoring of concentration of power (and you and Sapphire may not understand his beliefs about the specific funders well, as this is cause-area dependent).
Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on the AI safety object-level issues that I think you care about. I also believe he is a "moderate" who doesn't want concentration of thought or power.
This made me annoyed (it does sort of resemble some kinds of leftist activism), and I sort of trolled you with patterns I thought "rhymed" with what you did.
This is just bad writing on my part. I meant "here" to mean in EA or in EA discussion, not referring to your behavior, strategy, or comments.
>At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
This is really thoughtful, self-aware, and genuinely impressive. It is generous of you to think about, and gives me too much credit.
I appreciate the praise! Very cool.
I don’t agree with your analysis of the comment chain.
These assertions/assumptions aren't true. He didn't limit his commentary (which was a reply/rebuttal to Sapphire) to animal welfare. And even if he had, it would be irrelevant that he'd done so, given that animal welfare is Sapphire's dominant cause area. In fact, his response re: Rethink (corrected by Sapphire) was misleading! So I'm not sure how this reading is supported.
I am also not really sure how this reading is supported.
Tangentially: As a matter of fact I think that EA has been quite negative for animal welfare because in large part CEA is a group of longtermists co-opting efforts to organize effective animal welfare and then neglecting it. I am a longtermist too but I think that the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage animal welfare as a cause area about equally as much as longtermism, excluding donations.
There is really not a shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than me. Discussion that I find valuable is overwhelmingly specific, clear, object-level. Heuristics are fine but should be clearly relevant and strong. Etcetera. Not doing so is responsible for a ton of noise, and the noise is even noisier if it’s in a reply setting and superficially resembles conversation.
For some evidence of this, here is what one of the founders of Extinction Rebellion (Roger Hallam, who got cancelled or something, I don't know) wrote about infighting:
...
...
Again, this is a hardcore former leader of XR (who got cancelled himself at one point), picking very basic fights over ideologies and primitive decisions like governance and management (and I think he got deposed or something because of it, but it's all just a big soup).
I’m sure there’s every permutation of this “left” vs “right” fighting going on constantly.
The point is that I’m skeptical that these orgs and cultures are a positive example for anything besides self-replication.
DXE Bay is not very decentralized. It's run by the five people in 'Core Leadership'. The leadership is elected democratically. Though there is a bit of complexity, since Wayne is influential but not formally part of the leadership.
Leadership being replaced over time is not something to lament. I would strongly prefer more, uhhhh, 'churn' in EA's leadership. I endorse the current leadership quite a bit, and am glad that several previous 'Core' members lost their elections.
note: I haven’t been very involved in DXE since I left California. Its really quite concentrated in the Bay.
I think this is a fairly common/prominent concern in left circles e.g. The Tyranny of Structurelessness.
I wouldn’t really consider DXE particularly horizontalist? Paging @sapphire
I’m also not sure in what sense these quotes would be evidence of anything about DXE
A few thoughts:
I think while there may be no competing movements that have the community aspect of EA, there are lots of individuals (and orgs) out there who do charitable giving in an impact-driven/rational way, or take well-paid positions with a view to using the income for good, without branding it earning-to-give. Some might do this quietly. Some of these individuals might well agree with core EA ideas, and may have learnt from books like Doing Good Better. You can do all of this without being a movement. If a critic thinks EA is a cult, why would they respond by forming a competing cult?
EA has also changed over time; it looks very different today than it did 5 years ago. It may be a good exercise to look at whether the criticisms that people formulate for EA today would have also applied to EA 5 years ago. A good Alt-EA movement might look like whatever EA was before longtermism and AI x-risk seemingly overpowered other areas of concern. How would the 2017 EA movement compete with the 2022 EA movement?
Thirdly, it’s pretty difficult to compete since EA hit the jackpot. In places like hiring talent, or funding students, there are limited resources that communities or concern areas compete over. If the EA community has this much more money, they suck the air from adjacent areas like near-term AI safety or AI ethics. Why would you work on alignment of not super intelligent but widely deployed ML if you can make three times as much training cool large language models next door? And for studentship funding, being EA-aligned will make an enormous difference to your funding prospects compared to other students who might work on the same thing but don’t go to EAglobal each year. I think this is where a lot of frustration originates.
Finally, it’s very common to point out that EA is open to good-faith criticism. There is indeed often very polite and thoughtful engagement on this forum, but I am not sure how easy it is to actually make people update their pre-existing beliefs on specific points.
I’ve read one alternative approach that is well written and made in good faith: Bruce Wydick’s book “Shrewd Samaritan”.
It’s a Christian perspective on doing good, and arrives at many conclusions that are similar to effective altruism. The main difference is an emphasis on “flourishing” in a more holistic way than what is typically done by a narrowly-focused effective charity like AMF. Wydick relates this to the Hebrew concept of Shalom, that is, holistic peace and wellbeing and blessing.
In practical terms, this means that Wydick more strongly (compared to, say, GiveWell) recommends interventions that focus on more than one aspect of wellbeing. For example, child sponsorships or graduation approaches, where poor people get an asset (cash or a cow or similar) plus the ability to save (e.g., a bank account) plus training.
I believe that these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them. These programs are more complex to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that “cash + saving + training > cash only” is intuitive to me, and so this might be an area where GiveWell/EA is a bit biased toward stuff that is more easily measurable.
A bit more generally: I think we can look at religions as a set of Alt-EA movements.
Most religions have strong prescriptions and incentives for their members to do good. Many of them also advocate for donating a part of one’s income.
All these religions also have members that think hard about how to do the most good in a cost-effective way. Here, “good” follows the definition of the religion and might include aspects such as bringing people closer to God. However, it is usually correlated with EA notions of utility or wellbeing or freedom from suffering. And indeed one can find faith-based organizations with large positive effects: For example, AMF could not distribute its bednets without local partner organizations, and in that list are many faith-based ones like IMA or World Vision.
I’m not claiming that the effect of religion overall is robustly positive—that’s a very difficult question to answer—but that EA-like intentions, and sometimes actions, can be found in many religious people and organizations.
Yeah, I had wondered about this, as certain religious subcommunities seem the main precedents for moral ambitiousness. But of course there’s also an awful lot of parochialism and explicit demonization of outgroups inherent in many religious communities. (Evangelical Christianity in the US does not seem accurately characterized as driven by universal beneficence, for example!) Given the immense size of major religions, I’d be wary of attributing beneficentrism to religious institutions as a whole on the basis of what “can be found” amongst some (arguably non-representative) members.
But yes, I think at least some highly-specified religious sub-communities could be a good place to look here. (And I’d guess that’s precisely where “EA for Christians” outreach is most successful.)
I came to the basic idea of EA, long before I found the movement, from a Christian perspective. So I think there’s certainly the basis for it in a lot of religions. But I think at that point I was more devout than most Christians, even most of those who go to church every Sunday. This is probably a key factor.
I’m not sure how seriously most people take any of their goals, even the selfish ones. Lack of commitment is a hell of a thing, and even more so when mental effort and uncertainty are required. It kind of astounds me how often people say they want something and then don’t follow through at all on even minimal efforts. A friend wanted a job in my field, so I introduced him to a connection in his area. He never met with her. Other friends have run for office, but then not bothered talking to any voters. A relative repeats the same financial mistakes over and over and over again despite my attempts to help her with financial planning and her swearing up and down each time that next time will be different.
And all of these personal goals are a lot more straightforward to sort out than “how do I do the most good I can do?”. I could figure out a plan for all of these examples in an afternoon at most, and after years of effort I still don’t know how to be a maximally effective altruist. Most people, when they can’t round uncertainty off to “yes” or “no”, seem to have this idea that it’s uncertain so all actions are the same. I recently had a conversation with an acquaintance who accused me of “only thinking in black and white” because I believe with a high degree of confidence that donating to AMF is a better choice than randomly paying for groceries for the person behind you in line, “because maybe they need it and maybe the kindness will ripple through the world and have other effects”. And several other people witnessing this debate agreed with him!
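(To put rough numbers on why this doesn't seem like a close call to me. These are assumed figures: the ~$4,500-per-life cost is in the ballpark of published GiveWell-style estimates for AMF, and the $50 gift size is arbitrary:

$$\frac{\$50}{\$4{,}500\ \text{per life}} \approx 0.011\ \text{expected lives saved},$$

versus a $50 grocery surprise whose ripple effects, while real, are diffuse and hard to trace. Uncertainty over the exact figures doesn't make the two options equally good.)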
So in addition to altruism, I think key personality traits that would be necessary for someone to be even an alt-EA are an abnormally high level of goal-commitment, and an unusually high level of comfort making decisions under uncertainty.
Overall, would you recommend reading the book?
Whether you’d enjoy the book and benefit from it depends strongly on your background, I think.
To me, this was a good read because I learned about a broad range of interventions for helping people—graduation programs and child sponsorships being probably the most notable examples. The book really changed my mind on child sponsorships. I had thought of them as a rather high-overhead intervention that was popular because it appeals to emotion to get donors’ money… but now I think they can be cost-effective when done well.
That said, if your goal is to learn about various effective interventions (beyond the few that GiveWell writes about), then a good and free resource would be the book The Life You Can Save.
The second reason to recommend the book is its good discussion on “flourishing”, that is, a holistic view of health, wellbeing, and prosperity. Finally, a third reason to read it is to get a Christian perspective on the subject, or give the book to Christian friends.
Thank you for this article, full of nuance.
I think what makes effective altruism unique is that it is trying without preconceptions to work out how to do the most good. Beneficentric people may help neighbours, or civic groups, or charities, or religions, or pressure groups, or political parties, but these different approaches are not ranked by effectiveness.
There have always been some saints, but it is a new idea to try to be an impartial moral maximiser, working through an information-hungry social movement.