My (Lazy) Longtermism FAQ

In the wake of the What We Owe the Future media frenzy, there have been lots of questions and takes from outsiders that I feel haven’t been collected and responded to in a satisfyingly comprehensive way yet. In that spirit, I’ve been thinking for a couple of months about writing an FAQ-style response to common criticisms that I can point people to when the subject comes up. Unfortunately, I’ve been horrifically busy this semester and haven’t made any progress. I then realized that I have already written responses to many of these points, roughly like those I would give in such an FAQ, in the form of longish comments.

I then had a very lazy idea: what if I compiled these comments into an FAQ-ish format, grouped by which criticism they relate to? I’m not sure how much value it will provide to others, since it’s kind of a weird, awkward thing, but I wanted to have something with my own takes on this stuff that I can point people to, so I figure this will serve for now. Even if it doesn’t provide value to anyone else, this seemed like the right place to host it. I may or may not edit this into something more like a standard FAQ when I have the time.

Feel free to suggest your own questions in the comments, and I may respond and/or add them. Also, this document doesn’t just contain counters to criticisms; it also includes criticisms I agree with in some way, as you will see for some of them. I think there is plenty worth criticizing in this movement, much as I love it all things considered.

These opinions are purely my own, and I think lots of other EAs will disagree with plenty of what I say in each answer. The comments also tend to be replies to specific posts, which makes them read a bit oddly out of context, but I chose them because I think their core points stand on their own.

Without further ado, here’s a collection of my takes on these various issues:

Q1: Isn’t longtermism or EA secretly just hardcore utilitarianism?

https://forum.effectivealtruism.org/posts/PZ6pEaNkzAg62ze69/ea-criticism-contest-why-i-am-not-an-effective-altruist?commentId=g8keo44YdYacMTwyn

“I think being a non-utilitarian EA is less like calling yourself a Christian while not believing in the divinity of Jesus, and more like calling yourself a Republican while not believing in the divinity of Jesus. It’s true that utilitarianism is overrepresented among EAs, including influential ones, and that most of their favored causes are ones utilitarians like, but it is my impression that most EAs are not utilitarians and almost none of them think utilitarianism is just what EA is.

Given this, the post reads to me sort of like ‘I’m a pro-life, free-market-loving Buddhist, but Christianity is wrong, therefore I can’t be a Republican’.

This makes the rest of the post less compelling to me, to be honest. Debates about high-level moral philosophy are interesting but unlikely to be settled in one blog post (even just the debate over pure aggregation is extremely complicated, and you seem to take a very dismissive attitude towards it), and the connection to EA as a movement made in the post seems too dubious to justify it. The piece seems like a good explanation of why you aren’t a utilitarian, but I take it that wasn’t your aim.”

https://forum.effectivealtruism.org/posts/PZ6pEaNkzAg62ze69/ea-criticism-contest-why-i-am-not-an-effective-altruist?commentId=wSDcBYiydsaoenRHF

“To be honest, I did feel like it came off this way to me as well. The majority of the piece feels like an essay on why you think utilitarianism sucks, and this post itself frames it as a criticism of EA’s ‘utilitarian core’. I sort of remember the point about EA just being ordinary do-gooding once you strip this away as feeling like a side note, though I can reread it when I get a chance in case I missed something.

To address the point though, I’m not sure it works either, and I feel like the rest of your piece undermines it. Lots of things EA focuses on, like animal welfare and AI safety, are weird or at least weird combinations, as are plenty of its ways of thinking about and approaching questions. These are consistent with utilitarianism, but they aren’t specifically tied to it; indeed, you seem drawn to some of them yourself, and no one is going to accuse you of being a utilitarian after reading this. I have to imagine that the belief that something valuable and unique would be left behind if you removed utilitarianism from EA is at least partly behind your suggestion that we ‘dilute the poison’ all the way out. If we already have ‘diluted the poison’ out, I’m not sure what’s left to argue.

The point about how the founders of the movement have generally been utilitarians or utilitarian-sympathetic doesn’t strike me as enough to make your point either[1]. If you mean that the movement is utilitarian at its core in the sense that utilitarianism motivated many of its founders, this is a good point. If you mean that it has a utilitarian core in the sense that it is “poisoned” by the types of implications of utilitarianism you are worried about, this doesn’t seem enough to get you there. These senses just seem crucially different to me. I also think it proves far too much to cite the influence of Famine, Affluence and Morality: non-utilitarian liberals regularly cite On Liberty, and non-utilitarian vegans regularly cite Animal Liberation. Good moral philosophers generally don’t justify their points from first principles, but rather from the minimum premises necessary to get agreement on whatever specific point they’re arguing.

  1. I also think it’s overstated. Singer is certainly a utilitarian, but MacAskill overtly does not identify as one, even though he is sympathetic to the theory and, I think, has plurality credence in it relative to other similarly specific theories; Ord, I believe, is the same; Bostrom overtly does not identify with it; Parfit moved around a bunch over his career, but by the time of EA I believe he was either a prioritarian or a “triple theorist”, as he called it; Yudkowsky is a key example of yours, but from his other writing he seems like a pluralist consequentialist at most to me. It’s true that, as your piece points out, he defends pure aggregation, but so do tons of deontologists these days, because it turns out that when you get specific about your alternative, it becomes very hard not to be a pure aggregationist.”

https://forum.effectivealtruism.org/posts/nTybQwrnyRMenasCc/?commentId=ksAA8nJDuoqg5pkPZ

“‘If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.

The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out over hundreds of years by philosophers from the opposing schools of deontological ethics, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.

The push to popularize long-termism is not an attempt to solve these long-standing intellectual debates, but to make an end run around it. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times.’

This strikes me as a very common class of confusion. I have seen many EAs say that what they hope for out of ‘What We Owe the Future’ is that it will act as a sort of ‘Animal Liberation for future people’. You don’t see a ton of people saying something like ‘caring about animals seems nice and all, but you have to view this book in context. Secretly being pro-animal liberation is about being a utilitarian sentientist with an equal consideration of equal interests welfarist approach, that awards secondary rights like life based on personhood’. This would seem either like a blatant failure of reading comprehension, or a sort of ethical paranoia that can’t picture any reason someone would argue for an ethical position that didn’t come with their entire fundamental moral theory tacked on.

On the one hand, I think pieces like this are making a more forgivable mistake, because the basic version of the premise just doesn’t look controversial enough to be all that MacAskill is actually hoping for. Indeed, I personally think the comparison isn’t fantastic, in that MacAskill probably hopes the book will have more influence by inspiring further action and discussion than by changing minds on the fundamental issue (which, again, is less controversial, and which he spends less time on in the book).

On the other hand, he has been at special pains to emphasize, in his book, interviews, and secondary writings, that he is highly uncertain about first-order moral views, and is specifically and only arguing for longtermism as a coalition around these broad issues and ways of making moral decisions on the margins. Someone like MacAskill, who is specifically arguing for a period in which we hold off on irreversible changes as long as possible in order to get these moral discussions right, really doesn’t fit the bill of someone trying to ‘make an end run around’ these issues.”

Q2: Isn’t it really weird that so many EAs are worried about sci-fi AI risks?

https://forum.effectivealtruism.org/posts/hLbWWuDr3EbeQqrmg/reasons-for-my-negative-feelings-towards-the-ai-risk?commentId=vBqveKxzmiSM9Kz4F

“I can understand many of these points, though I disagree with most of them. I think the speculativeness point worries me most though, and I see it pretty frequently. I totally agree that AI risks are currently very uncertain and speculative, but I guess I think the relevance of this comes down to a few points:

  1. Is it highly plausible that when AI as smart as or smarter than humans arrives, this will be a huge, world-changing threat?

  2. Around how long do we need to address this threat properly?

  3. How soon before this threat materializes do we think our understanding of the risks will cross your threshold of rigor?

You might disagree on any of this, but for my own part I think it is fairly intuitive, when you think about it, that the answers to these are ‘yes’, ‘decades at least’, and ‘years at most’ respectively. Taken together, this means that the speculativeness objection will by default sleepwalk us into the worst outcomes of this risk, and that we should really start taking this risk as seriously as we ever plan to while it is still uncertain and speculative.

I think this on its own doesn’t settle whether it is a good cause area right now: alien invasion, the expansion of the sun, and the heat death of the universe all look like similarly big and hard problems, but they are arguably less urgent, since we expect them much further in the future. A final assumption needed to worry about AI risks now, which you seem to disagree with, is that this is coming pretty darn soon.

I want to emphasize this as much as possible: this is super unclear, and all of the arguments about when this is coming are sort of pretty terrible, but all of the most systematic, least terrible ones I’m aware of converge on ‘around a century or sooner, probably sooner, possibly much sooner’. These include the semi-informative priors report, Ajeya Cotra’s biological anchors report (which Cotra herself thinks estimates too late an arrival date), expert surveys, and Metaculus.

Again, all of this could very easily be wrong, but I don’t see a good enough reason to default to that assumption, so I think it just is the case that not only should we take this risk as seriously as we ever plan to while it’s still speculative, but we should take it that seriously as soon as possible. I would recommend reading Holden Karnofsky’s Most Important Century series for a more spelled-out version of similar points, especially about timelines, if you’re interested, but that’s my basic view on this issue and how to react to the speculativeness.”

https://forum.effectivealtruism.org/posts/hLbWWuDr3EbeQqrmg/reasons-for-my-negative-feelings-towards-the-ai-risk?commentId=mkWoWFhrNvCs9767J

“On the standard ‘importance, tractability, neglectedness’ framework, I agree that tractability is AI risk’s worst feature, if that’s what you mean. I think there is some consensus on this amongst people worried about the issue, as stated in 80k’s recently updated profile on it:

‘Making progress on preventing an AI-related catastrophe seems hard, but there are a lot of avenues for more research and the field is very young. So we think it’s moderately tractable, though we’re highly uncertain — again, assessments of the tractability of making AI safe vary enormously.’

I think these other two aspects, importance and neglectedness, just matter a great deal, and it would be a bad idea to disqualify cause areas just for moderately weak tractability. In terms of importance, transformative AI seems like it could easily be the most powerful technology we’ve ever made, for roughly the same reasons that humans are the most transformative ‘technology’ on Earth right now. But even if you think this is overrated, consider the relatively meager funds and tiny field as it exists today. I think many people who find the risk a bit out there would at least agree with you that it’s ‘worth some thought and research’, but because the kind of thinking about doing good on the margin, and the willingness to take weird-sounding ideas seriously, found in EA is so rare, practically no one else is ensuring that there is some thought and research. The field would, arguably, almost entirely dry up if EA stopped routing resources and people towards it.

Again though, I think maybe some of the disagreement is bound up in the ‘some risk’ idea. My vague impression, and correct me if this doesn’t describe you, is that people who are weirded out by EA working on this as a cause area think it’s a bit like EA getting people, right now, to work on risks from alien invasions (and then a big question is: why isn’t it?), whereas people like me who are worried about it think it is closer to working on risks from alien invasions after NASA has discovered an alien spaceship parked five light-years away from us. Much would still be very uncertain: the risks, the timelines, what we might be able to do to help, what sorts of things these aliens would be able to do or want to do. But I think it would still look crazy if almost no one was looking into it, and I would be very wary of telling one of the only groups that was trying to look into it that they should let someone else handle it.

If you would like, I would be happy to chat more about this, either by DM, email, or voice/video call. I’m probably not the most qualified person, since I’m not in the field, but in a way that might give you a better sense of why the typical EA who is worried about this is worried. I guess I would like to make this an open invitation for anyone this post resonates with. Feel absolutely no pressure to, though, and if you prefer I could just link some resources I think are helpful.

I’m just in the awkward position of being both very worried about this risk and very worried about how EA talking about this risk might put potential EAs off. I think it would be a real shame if you felt unwelcome or uncomfortable in the movement because you disagree about this risk, and if there’s something I can do to persuade you that those of us who are worried are at least worth sharing the movement with, I would like to try to do that.”

Q3: Isn’t longtermism focusing on future generations at the expense of those in the present?

https://forum.effectivealtruism.org/posts/8Swy2TCLBHwWA2Rga/caring-about-the-future-doesn-t-mean-ignoring-the-present?commentId=w5PZuKkdD4ktqrntT

“I like this piece, but I think it misses an opportunity to comment more broadly on the dynamic at work. My own impression can be glossed roughly this way: most money goes to the here and now, while most careers go to the future (not an overwhelming majority in either case, though, and FTX may have changed the funding balance). This makes sense based on talent versus funding gaps, and it means the two don’t really need to compete much at all; indeed, many of the same people contribute to both in different ways.”

Q4: Why do lots of EAs support the idea that making more people is better? Why not prefer just improving lives with no regard to numbers, or taking the average?

https://forum.effectivealtruism.org/posts/BLcyqjiXaKg7BCSxj/confused-about-making-people-happy-vs-making-happy-people?commentId=45PWgbkkT3nYFaDyY

“I think this is actually a central question that is relatively unresolved among philosophers, but it is my impression that philosophers in general, and EAs in particular, lean in the ‘making happy people’ direction. I think of there as being roughly three types of reason for this. One is that views of the ‘making people happy’ variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e. something like A>B>C>A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives. This avoids some problems, but leaves you with something very structurally weird, or even absurd to some. I think Larry Temkin has a good quote about it, something like ‘I will have the chocolate ice cream, unless you have vanilla, in which case I will have strawberry’.

The second reason is the non-identity problem, formalized by Derek Parfit. Basically, the issue this raises is that almost all of our decisions that impact the longer-term future in some way also change who gets born, so a standard person-affecting view seems to allow us to do almost anything to future generations. Use up all their resources, bury radioactive waste, you name it.

The third maybe connects more directly to why EAs in particular often reject these views. Most EAs subscribe to a sort of universalist, beneficent ethics that seems to imply that if something is genuinely good for someone, then that something is good in a more impersonal sense that tugs on ethics for everyone. For those of us who live lives worth living, are glad we were born, and don’t want to die, it seems clear that existence is good for us. If this is the case, it seems like it presents a reason for action to anyone who can affect it, if we accept this sort of universal form of ethics. Therefore, it seems like we are left with three choices: we can say that our existence actually is good for us, and so it is also good for others to bring it about; we can say that it is not good for others to bring it about, and therefore it is not actually good for us after all; or we can deny that ethics has this omnibenevolent quality. To many EAs, the first choice is clearly best.

I think here is where a standard person-affecting view might counter that it cares about all reasons that actually exist, and that if you aren’t born, you don’t actually exist, and so a universal ethics on this timeline cannot care about you either. The issue is that, without some better narrowing, this argument seems to prove too much. All ethics is about choosing between possible worlds, so just saying that a good only exists in one possible world doesn’t seem like it will help us in making decisions between these worlds. Arguably the most complete spelling out of a view like this looks sort of like ‘we should achieve a world in which no reasons for this world not to exist are present, and nothing beyond this equilibrium matters in the same way’. I actually think some variation of this argument is sometimes used by negative utilitarians and people with similar views. A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn’t brought about, so it doesn’t provide reasons to that world in the same way. Equilibrium is already adequately reached when no one is badly off.

This is coherent, but again it proves much more than most people want to about what ethics should actually look like, so going down that route seems to require some extra work.”

https://forum.effectivealtruism.org/posts/vjDLRBaEWMmrHhrtv/the-standard-person-affecting-view-doesn-t-solve-the?commentId=rSLkjk2M2LzhMgYZC

“I was pretty aggravated by this part of the review; it’s my impression that Alexander wasn’t even endorsing the person-affecting view, but rather some sort of averagism (which admittedly does outright escape the repugnant conclusion). The issue is that I think he’s misunderstanding the petition on the repugnant conclusion. The authors were not endorsing the statement ‘the repugnant conclusion is correct’ (though some signatories believe it) but rather ‘a theory’s implying the repugnant conclusion is not a reason to reject it outright’. One of the main motivators of this is that, not as a matter of speculation but as a matter of provable necessity, any formal view in this area has some implication people don’t like. He sort of alludes to this with the eyeball-pecking asides, but I don’t think he internalizes the spirit of it properly. You don’t reject repugnancy in population ethics by picking a theory that doesn’t imply this particular conclusion; you do it by not endorsing your favored theory under its worst edge cases, whatever that theory is.

Given this, there just doesn’t seem to be any reason to take the principled step he does towards averagism, and arguably averagism is the theory that is least on the table on its principled merits. I am not aware of anyone in the field who endorses the average view, and I was recently at an EA NYC talk on population ethics with Timothy Campbell in which he basically said outright that the field has, for philosophy, an unusually strong consensus that the average view is simply off the table. Averagism can purchase the non-existence of worthwhile lives at the cost of some of the happiness of those who exist in both scenarios. The value people contribute to the world is highly extrinsic to any particular person’s welfare, to the point where whether a life is good or bad on net can have no relation to whether that life is good or bad for anyone in the world, whether the person themselves or any of the other people who were already there. Its repugnant implications seem to run deeper than just the standard extremes of principled consistency.”

Q5: What about (miscellaneous things in the New Yorker piece)?

https://forum.effectivealtruism.org/posts/iJqmj42pCdDbvZhHB/cover-story-on-ea-in-time-magazine?commentId=3paX4nmyEchcjvEAw

“I don’t know what Josh thinks the flaws are, but since I agree that this one is more flawed, I can speak a bit for myself at least. I think most of what I saw as flawed came from isolated moments, in particular criticisms the author raised that seemed to me to have fairly clear counterpoints he didn’t bring up (at other times he managed to do this quite well). A few that stand out to me, off the top of my head:

’Cremer said, of Bankman-Fried, ‘Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he’s the one with all the money. He’s good at crypto so he must be good at public policy . . . what?!’’

The 80,000 Hours podcast is about many things, but principally and originally it is about effective career paths. Earning to give is recommended less these days, but they’ve only had one other interview with someone who earned to give that I can recall, and SBF is by far the most successful example of the path to date. Another thing the podcast is about is the state of EA opportunities and organizations. Learning about the priorities of one of the biggest new forces in the field, like FTX, seems clearly worthwhile for that. The point about the episode being three hours long is also misleading, since that is a very typical length for 80k episodes.

‘Longtermism is invariably a phenomenon of its time: in the nineteen-seventies, sophisticated fans of ‘Soylent Green’ feared a population explosion; in the era of ‘The Matrix,’ people are prone to agonize about A.I.’

This point strikes me as very ad hoc. AI is one of the oldest sci-fi tropes out there, and in order to find a recent, particularly influential example they had to go back to a movie over 20 years old that looks almost nothing like the risks people worry about with AI today. Meanwhile, the example of the population explosion is cherry-picked to be a case of sci-fi worry that seems misguided in retrospect. Why doesn’t he talk about the era of ‘Dr. Strangelove’ and ‘WarGames’? And immediately after this,

‘In the week I spent in Oxford, I heard almost nothing about the month-old war in Ukraine. I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.’

Some people probably do take comfort in this, but generally those are people, like the author, who aren’t that viscerally worried about the risk. Others have very serious mental health problems from worrying about AI doom. I’ve had problems like this to some degree; others have had it so bad that they have had to leave the movement entirely, and indeed criticize it from the complete opposite direction.

I am not saying that people who academically or performatively believe in AI risks, and can seek refuge in this, don’t exist. I’m also not saying the author had to do yet more research and turn up solid evidence that the picture he is giving is incomplete. But when you start describing the belief that everything and everyone you love may soon be destroyed as a comforting coping mechanism, I think you should bring at least a little skepticism to the table. It is possible that this just reflects the fact that you find a different real-world problem emotionally devastating at the moment, that thinking about a risk you don’t personally take seriously is a distraction for you, and that you failed your empathy roll this time.

A deeper issue might be the lack of discussion of the talent constraint on many top cause areas in the context of controversies over spending on community building, which is arguably the key consideration much of the debate turns on. The increased spending on community building (which still isn’t even close to most of the spending) seems more uncomplicatedly bad if you miss this dimension.

Again though, this piece goes through a ton of points, mostly quite well, and can’t be expected to land perfectly everywhere, so I’m pretty willing to forgive problems like these when I run into them. They are just the sorts of things that made me think this was more flawed than the other pieces.”

Q6: EAs don’t respond to a lot of these criticisms very prominently as a rule. What’s the deal?

https://forum.effectivealtruism.org/posts/FtDAtzGqAgBh9x6DA/going-too-meta-and-avoiding-controversy-carries-risks-too-a?commentId=8P2QrQoGpE4Hc5h3L

“I disagree with this pretty strongly, and have been worried about this type of view in particular quite a bit recently. It seems as though a standard EA media strategy is, if someone publishes a hit piece on us somewhere, whether obscure or prominent, to just ignore it and ‘respond’ by presenting EA ideas better elsewhere. This is a way of being positive rather than negative in interactions, and of avoiding signal-boosting bad criticisms. I don’t know how to explain how I have such a different impression, or why so many smart people seem to disagree with me, but this looks to me like an intuitively terrible, obvious mistake.

I don’t know how to explain why it feels so clear to me that, if someone is searching around and finding arguments that EA is a robot cult, or secretly run by evil billionaires, or some other harsh, misleading critique, and nothing they find in favor of EA written for a mainstream audience even acknowledges these critics, instead just presenting some seemingly innocuous face of EA, then the net takeaway will tend towards ‘EA is a sinister group all of these people have been trying to blow the whistle on’. Basically all normal social movements have their harsh critics, and even if they don’t always respond well to them, they almost all respond to them as publicly as possible.

The excuse that the criticisms are so bad that they don’t deserve the signal (which, to be clear, isn’t one this particular post is making) also leads me to think this norm encourages bad epistemics and provides a fully general excuse. I tend to think that bad criticisms of something obscure like EA are generally quite easy for EAs to write persuasive debunking pieces about, so a public criticism is either bad enough that publicly responding is worth the signal boost you give the original piece, or good enough that it deserves the signal. Surely some portion of criticisms are neither, hard to argue against persuasively but still bad, but we shouldn’t orient the movement’s entire media strategy around those. I wholeheartedly agree with this comment:

https://forum.effectivealtruism.org/posts/kageSSDLSMpuwkPKK/response-to-recent-criticisms-of-longtermism-1?commentId=WycArpwah9aveNrZs

If some EA ever had the opportunity to write a high-quality response like Avital’s, or, to be blunt, almost any okay response, to the Torres piece in Aeon or Current Affairs, or for that matter to the WSJ’s recent hit piece, I think it would be a really, really good idea to do so; the EA Forum is not a good enough media strategy on its own. ACX is easy mode for this: Alexander himself is sympathetic to EA, so his main text isn’t going to be a hit piece; the harsher points in the comments are ones people can respond to directly; and he will even directly signal-boost the best of these counter-criticisms, as he did. I am very scared for the EA movement if even this looks like a scary amount of daylight.

This is something I’ve become so concerned about that I’ve been strongly considering posting an edited trialogue I had with some other EAs in an EA chat, where we tried to get to the bottom of these disagreements (though I’ve been too busy recently), but I just wanted to use this comment as a brief opportunity to register this concern a bit in advance as well. If I am wrong, please convince me; I would be happy to be dissuaded of this, but it is a very strong intuition of mine that this strategy does not end well for either our community health or public perception.”

Q7: Isn’t this all an overly demanding mental health nightmare?

https://forum.effectivealtruism.org/posts/JBAPssaYMMRfNqYt7/michael-nielsen-s-notes-on-effective-altruism?commentId=RgSBrdFZsjcJXWG9j

“I think this has gotten better, but not as much better as you would hope considering how long EAs have known this is a problem, how much they have discussed it being a problem, and how many resources have gone into trying to address it. I think there’s actually a bit of an unfortunate fallacy here, that it isn’t really an issue anymore because EA has gone through the motions to address it and had at least some degree of success; see Sasha Chapin’s relevant thoughts:

https://web.archive.org/web/20220405152524/https://sashachapin.substack.com/p/your-intelligent-conscientious-in?s=r

Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and an excessively conscientious personality. Some of it is probably due to the ‘by-catch’ phenomenon the anon below discusses, which comes with applying expected-value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). Some of it is this other, deeper tension that I think Nielsen is getting at:

Many people in Effective Altruism (I don’t think most, but many, including some of the most influential) believe in a standard of morality that is too demanding for it to be realistic for real people to reach. Given the prevalence of actualist over possibilist reasoning in EA ethics, and just not being totally naive about human psychology, pretty much everyone who believes this is on board with compartmentalizing do-gooding or do-besting from the rest of their life. Unfortunately, the trouble runs deeper than this, because once you buy an argument that letting yourself have this is what will be best for doing good overall, you are already seriously risking undermining the psychological benefits.

Whenever you do something for yourself, there is a voice in the back of your head asking if you are really so morally weak that this particular thing is necessary. Even if you overcome this voice, there is a worse voice that instrumentalizes the things you do for yourself. Buying ice cream? This is now your ‘anti-burnout ice cream’. Worse, have a kid (if, like in Nielsen’s example, you think this isn’t part of your best set of altruistic decisions), and this is now your ‘anti-burnout kid’.

It’s very hard to get around this one. Nielsen’s preferred solution would clearly be that people just not buy this very demanding theory of morality at all, because he thinks that it is wrong. That said, he doesn’t really argue for this, and for those of us who actually do think that the demanding ideal of morality happens to be correct, it isn’t an open avenue.

The best solution, as far as I can tell, is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind that you internalize largely on an academic level, and maybe take out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. Again though, the trickiness of this is, I think, a real part of why some of this problem persists, and I think Nielsen nails this part.”

Q8: Isn’t the lesson of history that ideologically ambitious movements like this are dangerous and bad?

https://forum.effectivealtruism.org/posts/JBAPssaYMMRfNqYt7/michael-nielsen-s-notes-on-effective-altruism?commentId=iAFokx2YztZxuAigg

“These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.

The point about ideologies being a minefield, with the Nazis as an example, particularly stands out to me. I pattern-match this to the parts of harsher critiques that go something like ‘look at where your precious ideology leads when taken to an extreme; this place is terrible!’ Generally, the substantive mistake these make is casting EA as ideologically purist and ignoring the centrality of projects like moral uncertainty and worldview diversification, as well as the limited willingness of EAs to bite bullets even when they in principle endorse much of the background logic (see Pascal’s Mugging and Ajeya Cotra’s train to crazy town).

By not telling us what terrible things we believe, but merely implying that we are at risk of believing terrible things, this piece is less unflattering, but it is on shakier ground. It involves the same mistake about EA’s ideological purism, but on top of this it has to defend this other, higher-level claim rather than pointing at concrete implications.

Was the problem with the Nazis really that they were too ideologically pure? I find it very doubtful. The philosophers of the time who were attracted to them, like Heidegger, were generally weird humanistic philosophers with little interest in the kinds of purism that come from analytic ethics. Meanwhile, most philosophers closer to this type of ideological purity (Russell, Carnap) despised the Nazis from the beginning. The background philosophy itself largely drew from misreadings of people like Nietzsche and Hegel, popular antisemitic sentiment, and plain old historical conspiracy theories. Even at the time, intellectual critiques of the Nazis often looked more like ‘they were mundane and looking for meaning from charismatic, powerful men’ (Arendt) or ‘they aestheticized politics’ (Benjamin) than ‘they took some particular coherent vision of doing good too far’.

The truth is that the lesson of history isn’t really ‘moral atrocity is caused by ideological consistency’. Occasionally atrocities are initiated by ideologically consistent people, but they have also been carried out casually by people who were quite normal for their time, or by crazy ideologues who didn’t have a very clear, coherent vision at all. The problem with the Nazis, quite simply, is that they were very, very badly wrong. We can’t avoid making the mistakes they did from the inside by pattern-matching aspects of our logic onto them in ways that really aren’t historically vindicated; we have to avoid moral atrocity by finding more reliable ways of not winding up being very wrong.”

(I fear that I’m verging on denying the antecedent by linking this comment for this question, but I think it is still relevant to the worry to a decent extent. My somewhat more direct answer is that I think people with the ideologically purest views most like EA’s have historically mostly had a positive-to-innocuous impact if you actually look at it. I think people who have this worry are mostly thinking of fictional evidence that vindicates it, like science fiction and thought experiments, rather than history, but mostly history is full of more Hitlers than Thanoses.)

Q9: You guys look kind of like a cult. Are you a cult?

https://forum.effectivealtruism.org/posts/TcKbXwBX7YJ4NfgGL/a-hypothesis-for-why-some-people-mistake-ea-for-a-cult?commentId=sDCEuKCiEAKSB6FLd

“One theory that I’m fond of, both because it has some explanatory power and because, unlike other theories about this with explanatory power, it is useful to keep in mind and not based as directly on misconceptions, goes like this:

-A social group that has a high cost of exit can afford to raise the cost of staying. That is, if it would be very bad for you to leave a group you are part of, the group can more successfully pressure you to be more conformist, work harder in service of it, and tolerate weird hierarchies.

-What distinguishes a cult, or at least one of the most important things that distinguishes it, is that it is a social group that manually raises the cost of leaving in order to also raise the cost of staying. For instance, it relocates people, makes them cut off other relationships, etc.

-Effective Altruism does not manually raise the cost of leaving for this purpose, and neither have I seen it really raise the cost of staying. Even more than in most social groups I have been part of, being critical of the movement, having ideas that run counter to central dogmas, and being heavily involved in other, competing social groups are all tolerated or even encouraged. However,

-The cost of leaving for many Effective Altruists is high, much of this self-inflicted. Effective Altruists like to live with other Effective Altruists, make mostly Effective Altruist close friends, enter romantic relationships with other Effective Altruists, work at Effective Altruist organizations, and believe idiosyncratic ideas mostly found within Effective Altruism. Some of this is out of a desire to do good; speaking from experience, much of it is because we are weirdos who are most comfortable hanging out with people who are similar types of weirdos to us, and who have a hard time with social interactions in general. Therefore,

-People looking in sometimes see the things from point four, the things that contribute to the high cost of leaving, and even if they can’t put what’s cultish about it into words, they are worried about possible cultishness, and don’t know the stuff in point three viscerally enough to be dissuaded of this impression. Furthermore, even if EA isn’t a cult, point four is still important, because it increases the risk of cultishness creeping up on us.

Overall, I’m not sure what to do with this. I guess be especially vigilant, and maybe work a little harder to have as much of a life as possible outside of Effective Altruism. Anyway, that’s my take.”

Q10: If longtermist causes like extinction risks really do look good even if you just look at the short term, why bother with all this promotion of the idea of “longtermism” at all anyway?

https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/?commentId=5xGgS798KxwjqMoFE

“I’m not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions (longtermism takes you to weird places on sufficient reflection, while AI risk looks terrifying just from compiling expert opinions), but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn’t seem like an obviously winning move. The move that seems both best for hedging this and just the most honest is being upfront about your views on both the philosophical and the empirical questions, and assuming that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.”

And finally, there are some questions that I didn’t really comment on in the past that I decided were worth spending a little time writing up original responses to (there were others I skipped because I think responding adequately would take too long, but which I would like to address if I make a more finalized version).

Q11: Are EAs only worried about AI risks because of billionaires corrupting the movement?

The proximate cause of Effective Altruist interest in AI is a transhumanist mailing list from the early 2000s. From there, Eliezer Yudkowsky seeded the worry within the Bay Area scene of EA, while Nick Bostrom seeded it within the Oxford scene. The idea that this was then mostly amplified within EA by billionaire interest is also implausible.

Lots of Dustin Moskovitz’s and Sam Bankman-Fried’s money has gone into various projects to work on AI, but Moskovitz has been very standoffish about his money; functionally, his role was just driving a dump truck of money up to Holden Karnofsky (who at the time was pretty uninterested in AI) and walking away. Bankman-Fried is arguably more hands-on, but he also showed up on the scene very recently, after concern about AI had been gaining prominence within the movement for nearly a decade.

Lots of people think of Elon Musk in this context as well because he’s so famous, but his involvement with longtermism has been flirtatious at most. His main contribution was co-founding OpenAI, which is one of the most controversial AI labs within the AI safety community.

Q12: But isn’t worry about AI safety just about promoting AI hype?

This is one of the most confusing talking points I’ve seen. It’s like saying that people worried about nuclear apocalypse are only raising the alarm to build hype for weapons manufacturers. Actually, it is weirder than that, because at least that involves the bombs doing what they are supposed to do. AI safety worries are, by and large, about the concern that we will make AI that sucks in the only way that matters, and that is good only in the way that will make this suck even more.

More typical worries about AI, like those about social control (which are also extremely valid concerns!), fit a hype narrative better, for the same reason as the nuclear bomb example: in such a scenario, AI is at least an effective tool for its wielder. In all three cases, however, it is very bad to dismissively make this assumption about the activism.

As far as I can tell, the only reason anyone takes this seriously is the aforementioned worry that people are only concerned about AI safety because of billionaires, plus the idea that this is the only explanation for why some billionaires promote AI safety (others, including some who make money off of AI, like Mark Zuckerberg, are very dismissive). For the reasons I already covered, however, I don’t find this connection historically plausible anyway.

Q13: Aren’t billionaires shaping and corrupting EA in other ways?

For reasons already mentioned, I don’t think this is plausible in the case of Dustin Moskovitz (not involved enough with how funds are distributed), Sam Bankman-Fried (too recent), or Elon Musk (doesn’t contribute enough). Miscellaneous other billionaires have given various amounts to various EA causes, but this is the case for lots of activist movements; hell, if miscellaneous controversial contributions from Elon Musk are damning, that’s damning for climate change activism as well.

I think it is true that most of EA’s funds come from billionaire money at this point, which has problems all its own, but this is because of Dustin Moskovitz and Sam Bankman-Fried, whom I’ve already covered. A related suspicion is to wonder why billionaires are overrepresented within Effective Altruism (which I think is probably true). My own guess is that people who want to give to some charity but don’t have a specific one in mind are just generally overrepresented within Effective Altruism, because it’s what you stumble upon when searching for reasons to give one place rather than another. You could be cynical about it and say that it’s all for PR reasons, or more generous and say it’s what any normal person who has more money than they could ever imagine spending on themselves would do, but either way I don’t think it’s that controversial to say that lots of billionaires decide to give to charity regardless of whether they really have a plan for where to give in advance.

Another related concern is that EA hasn’t been directly corrupted by billionaire bribery, but that the creeping influence of this money will change how EAs approach issues in subtle ways anyway. I think this sort of suspicion is well worth keeping in mind, but it is another double standard. Rich people also give tons of money to universities, but people rarely show that much suspicion toward the work of academics in general as a result.

Q14: People say that longtermism only means treating future beings as a key moral priority, but that doesn’t seem very controversial. Shouldn’t we just treat the word as referring to the specific ideology it picks out in practice?

In practice, I think there are roughly three ways the word longtermism might be used. One is the way critics tend to use it, another is the way proponents tend to use it, and the last is the way that draws a line around a real movement.

The version proponents use is the one given in the question; the way critics use it is basically as a synonym for bullet-biting total utilitarianism applied to the future. Many longtermists lean total utilitarian but aren’t that bullet-biting in practice. Other longtermists have different views, though; for instance, there is a non-trivial subgroup of more pessimistic or cautious longtermists I’ve had a decent amount of exposure to. They tend to have an anti-frustrationist or “suffering-focused” ethics and lean towards something like a person-affecting view. They tend to prefer reducing s-risks over x-risks, and are often especially suspicious of space colonization.

One thing that unites this group with the more totalist, “grand futures” longtermists is the literal definition longtermists give: considering future beings a key moral priority. The “movement” grouping, what actually describes people who self-identify as longtermists as a rule, is basically just people who apply Effective Altruism to the future.

I don’t especially mind this last type of definition, but it seems like “Effective Altruists who are looking at the future” is adequate without needing its own separate name. The idea of treating future beings as a key moral priority strikes me as having more use for its own name, since it points to a unique possible coalition. For this reason I do sort of prefer this usage, even if it doesn’t seem to be catching on much in practice.

(Edit 10/24/22: mostly cosmetic changes to improve readability, and swapping out a dead link)