This is a classic: https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff
I wrote something about campaign contributions in federal US elections earlier this year. I could be wrong, but based on my (non-expert) survey of the campaign finance literature, it doesn’t seem like donating to political campaigns has a very substantial impact on election outcomes (most of the time). The main takeaway is that spending and success are correlated, but the former doesn’t cause the latter. Spending is simply a useful heuristic for the size/traction/etc. of a campaign.
This is very similar to the comment I was going to make.
I admit that it has crossed my mind that even a moderate EA lifestyle is unusually demanding, especially in the long term, and therefore could make finding a long-term partner more difficult. However, I do resonate with that last bit – encouraging inter-EA dating also seems culty and insular to me, and I’d like to think that most of us could integrate EA (as a project and set of values) into our lives in a way that allows us to have other interests, values, friends, and so on (i.e., our lives don’t have to entirely revolve around our EA-esque commitments!). I don’t see why an EA and a non-EA who were romantically compatible couldn’t find comfortable ways to compromise on lifestyle questions – after all, plenty of frugal people find love, and plenty of vegan people find love, so who’s to say a frugal vegan couldn’t find love?
There are two different angles on this question. One is whether the level of response within EA has been appropriate; the second is whether the level of response outside of EA (i.e., by society at large) has been appropriate.
I really don’t know about the first one. People outside of EA radically underestimate the scale of ongoing moral catastrophes, but once you take those into account, it’s not clear to me how to compare—as one example—the suffering produced by factory farming to the suffering produced by a bad response to coronavirus in developed countries (replace “suffering” with “negative effects” or something else if “suffering” isn’t the locus of your moral concern). My guess is that many of the best EA causes should still be the primary focus of EAs, as non-EAs are counterfactually unlikely to be motivated by them. I do think, however, that at the very beginning of the coronavirus timeline (January to early March), the massive EA focus on coronavirus was by and large appropriate, given how nonchalant most of society seemed to be about it.
Now for the second one—has the response of society been appropriate? I’m also under-informed here, but my very unoriginal answer is that the response to the coronavirus has been appropriate if you consider it proportional, not to the deadliness of the disease, but to (1) the infectivity of the disease and (2) the corresponding inability of the healthcare system to handle a large number of infections. You wrote:
I read the news, too, but there’s something about the level of response to coronavirus given the very moderate deadliness— especially within EA— that just does not add up to me.
And it seems like you’re probably not accounting for (1) and (2). It does not seem like a particularly deadly disease (when compared to other, more dangerous pathogens), but it is very easily spread, which is where the worry comes from.
Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:
1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)
2) The normative ideals that deal with interpersonal relationships are, as you mentioned, not the only normative ideals on offer. And while the ones that deal with interpersonal relationships may deserve a special weight, it’s still not clear how to weigh them relative to other normative ideals. Some of these other normative ideals may actually be bolstered by updating more in favor of following some kind of consequentialism. For example, consider the below quote from Alienation, Consequentialism, and the Demands of Morality by Peter Railton, which deeply resonated with me when I first read it:
Individuals who will not or cannot allow questions to arise about what they are doing from a broader perspective are in an important way cut off from their society and the larger world. They may not be troubled by this in any very direct way, but even so they may fail to experience that powerful sense of purpose and meaning that comes from seeing oneself as part of something larger and more enduring than oneself or one’s intimate circle. The search for such a sense of purpose and meaning seems to me ubiquitous — surely much of the impulse to religion, to ethnic or regional identification (most strikingly, in the ‘rediscovery’ of such identities), or to institutional loyalty stems from this desire to see ourselves as part of a more general, lasting and worthwhile scheme of things. This presumably is part of what is meant by saying that secularization has led to a sense of meaninglessness, or that the decline of traditional communities and societies has meant an increase in anomie.
This was basically going to be my response—but to expand on it, in a slightly different direction, I would say that, although maybe we shouldn’t be more concerned about biorisk, young EAs who are interested in biorisk should update in favor of pursuing a career in/getting involved with biorisk. My two reasons for this are:
1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near-future.
2) EAs will still be unusually invested in lower-probability, higher-risk problems (like GCBRs) than non-EAs are.
(1) means talented EAs will have more access to potentially high-impact career options in this area, and (2) means EAs may have a higher counterfactual impact than non-EAs by getting involved.
Some (Rough) Thoughts on the Value of Campaign Contributions
Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):
My first inclination is something like “find the average output of the field per unit time, then find the average growth rate of the field, and then calculate the ‘extra’ output you’d get with a higher growth rate.” In other words: (1) what is the field currently doing of value? (2) how much more value would the field produce if it did whatever it’s currently doing faster?
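To make that concrete, here’s a minimal sketch of the calculation with entirely made-up numbers (the output units, growth rates, and 20-year horizon are assumptions for illustration, not estimates for any real field):

```python
# A minimal sketch of the growth-rate comparison described above.
# All numbers (output units, growth rates, horizon) are made up for illustration.

def cumulative_output(initial_output: float, growth_rate: float, years: int) -> float:
    """Total output over `years`, assuming output compounds at `growth_rate` per year."""
    return sum(initial_output * (1 + growth_rate) ** t for t in range(years))

baseline = cumulative_output(initial_output=100, growth_rate=0.02, years=20)
boosted = cumulative_output(initial_output=100, growth_rate=0.03, years=20)

print(f"Extra output from the faster-growing field: {boosted - baseline:.1f} units")
```

The hard part, of course, is estimating the inputs, not doing the arithmetic.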
It would be interesting to see someone do a quantitative analysis of the history of progress in some particular field. However, because so much intellectual progress has happened in the last ~300 years by so few people (relatively speaking), my guess is we might not have enough data in many cases.
The more something like the “great man theory” applies to a field (i.e. the more stochastic progress is), the more of a problem you have with this model. [Had an example here, removed it because I no longer think it’s appropriate.]
With regard to that latter question (also your second set-up), I wonder how reliably we could apply heuristics for determining the EV of particular contributions (i.e. how much value do we usually get from papers in field Y with ~X citations?).
I dug up a few other places 80,000 Hours mentions law careers, but I couldn’t find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:
In their profile on US AI Policy, one of their recommended graduate programs is a “prestigious law JD from Yale or Harvard, or possibly another top 6 law school.”
In this article for people with existing experience in a particular field, they write “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”
It’s also mentioned in this article that Congress has a lot of HLS graduates.
You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious—what’s your EA “origin story”? (How did you find out about effective altruism, how did you first become involved, etc.)
I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while. So take my subsequent criticism of it with that in mind! I apologize in advance if I’m totally missing the point.
I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I’m including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can’t quite make that feel emotionally true. We can’t help but invidiously compare ourselves with successful people like “Carl” (using the name as a label here, not saying we really do this when we look at Carl Shulman), even though we consciously would admit that the feeling doesn’t make much sense.
I’ve read a number of relevant discussions, and I still don’t think anyone has satisfactorily dealt with this problem. But I’ll say that, for now, I think we should separate questions about the moral integrity of our actions (how we should define the goodness/badness of our actions) and those about how we should think about ourselves as people (whether we’re good/bad people). They’re related, but there might not be an easy mapping from one to the other. For instance, I think it’s very conceivable that a “Dorothea” may be a better person than a “Carl”, but a “Carl” does more good than a “Dorothea.” And, perhaps, while we should strive to do as much good as possible, our self-worth should track the kind of people we are much more closely than how much good we do.
This is fair. I was trying to salvage his argument without running into the problems mentioned in the above comment, but if he means “aim” objectively, then it’s tautologically true that people aim to be morally average, and if he means “aim” subjectively, then it contradicts the claim that most people subjectively aim to be slightly above average (which is what he seems to say in the B+ section).
The options are: (1) his central claim is uninteresting, (2) his central claim is wrong, or (3) I’m misunderstanding his central claim. I normally would feel like I should play it safe and default to (3), but it’s probably (2).
This was a good comment and very clarifying. I agree with most of what you say about the evidence – Schwitzgebel seems to be misinterpreting the evidence (and I think I was also initially).
Just to be extra charitable to Schwitzgebel, however, I think we can assume his central claim is basically intelligible (even if it’s not supported by the evidence), and he’s just using some words in an inconsistent way. Some of the confusion in your comment may be caused by this inconsistency.
In most of his piece, by “aiming to be mediocre”, Schwitzgebel means that people’s behavior regresses to the actual moral middle of a reference class, even though they believe the moral middle is even lower. Imagine there’s a target where the bullseye is 5 feet above the ground, but some archer’s eyesight is off so they think it’s 3 feet above the ground. You could say that subjectively they’re aiming for the target, but objectively they’re aiming below the target. When you write:
If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.
You’re understanding “aim” in the subjective sense, whereas Schwitzgebel usually understands it in the objective sense. Someone might believe themselves to be better than average (they believe they’re aiming at the target), but are objectively aiming for mediocrity (they’re actually aiming below the target).
The problem is that he starts using “aim” in the subjective sense in the “aiming for a B+” section. It is literally not possible that a person is both aiming for a B+ and aiming for a C+. It is, however, possible that they are subjectively aiming for a B+, but objectively aiming for a C+.
Not to be pedantic, but
“People behave in a morally mediocre way” and “People regard themselves as morally mediocre” are two different types of claims. I take Schwitzgebel to be claiming the former, and I think he agrees with you that people regard themselves as slightly above average (e.g., section 6, titled “Aiming for a B+”).
He also agrees with you that the evidence is unsatisfactory in many ways (see section 4, titled “The Gap Between the Evidence Above and the Thesis That Most People Aim for Moral Mediocrity”). Granted, he doesn’t make the specific point that you do, but I think it’s pretty safe to grant his underlying assumption: people adjust towards the behavior of their peers (i.e., they regress towards the mean). It could be true that people are influenced in other ways (if they see others behaving poorly, they want to behave better), but I don’t think the evidence points towards that.
[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel
This is not to say that she couldn’t, or that she might not use this as an excuse to avoid doing what she thinks is necessary and to excuse doing what is convenient, but to say that we should have compassion for those who may find they agree with EA but cannot immediately make the changes they would like to due to life conditions, and we should not judge them as less good EAs even if they are less able to contribute to EA missions than if they were a different person in a different world that doesn’t exist.
This is great, and I’d like to add some follow-up comments in light of it.
My main point was really that passion is a contingent, rather than an intrinsic, thing. If you’re into X instead of Y, that could be because you invested more time in X, not because you “fundamentally” don’t find Y interesting. This may seem uplifting to some EAs: it means that many people have vastly more potential to do good than they might have originally thought!
But I agree that there’s something about the “human experience” that my explanation is missing. This is because “contingent” doesn’t directly imply “fungible” or “interchangeable” – people (usually) can’t fluidly change what they’re interested in or passionate about, even if those interests or passions stem from “contingent” factors. I think, as a result, I described Sue’s case in a slightly unfair and judgmental way (in a way that’s probably not totally healthy, individually or as a community). Real people are subject to all sorts of cognitive and emotional constraints that the original post does not properly recognize.
On a personal note – this post was (on some level) an attempt to rationalize a decision I’m currently going through in my own life. I’m a recent college graduate trying to decide whether to apply to graduate programs in philosophy, or to do something else. I kind of feel like Sue – maybe I could do something in philosophy, but maybe I could do something even more significant elsewhere, if only I invested as much time elsewhere as I have in philosophy. I know my interest in philosophy is contingent, in a sense, but I wonder how fungible it is.
I add this personal note in part to say that I can empathize with the kind of EAs you describe.
Scott Alexander has a very interesting response to this post on reddit: see here.