I can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website). I personally caused several of the mistakes that you list, and I’m sorry for that.
Often my take on these cases is more like “it’s bad that we called this thing ‘EA’”, rather than “it’s bad that we did this thing”. E.g. I think that the first round of EA Grants made some good grants (e.g. to LessWrong 2.0), but that it would have been better to use a non-EA brand for it. I think that calling things “EA” means that there’s a higher standard of representativeness, which we sometimes failed to meet.
I do want to note that all of the things you list took place around 2017-2018[1], and our work and plans have changed since then. For instance, CBG evaluation criteria are no longer as you state, EA Grants changed a lot after the first round and was closed down around 2019, the EA Handbook is different, and effectivealtruism.org has a new design.
If you have comments about our current work, then please give us (anonymous) feedback!
As I noted in an earlier comment, I want CEA to be promoting the principles of effective altruism. We have been careful not to bake cause area preferences into our metrics, and instead to focus on whether people are thinking carefully and open-mindedly about how to help others the most.
Where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).
I would love someone to do a proper survey of everyone (trying to avoid one’s own personal networks) who has spent >1 year thinking about cause prioritization with a scope-sensitive and open-minded lens. I’ve tried to commission someone to do this a couple of times but it hasn’t worked out. If someone did this, it would help to shape our content, so I’d be happy to offer some advice and could likely find funding. If anyone is interested, let me know!
I acknowledge that the effectivealtruism.org design was the same in 2021 as it was in 2017, but this was mostly because we didn’t have much capacity to update the site at all, so I think the thing you’re complaining about was mostly a fact about 2017.
I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).
I would love someone to do a proper survey of everyone (trying to avoid one’s own personal networks) who has spent >1 year thinking about cause prioritization with a scope-sensitive and open-minded lens. I’ve tried to commission someone to do this a couple of times but it hasn’t worked out. If someone did this, it would help to shape our content, so I’d be happy to offer some advice and could likely find funding. If anyone is interested, let me know!
Thank you for wanting to be principled about such an important issue. However (speaking as someone who is both very strongly longtermist and a believer in the importance of cause prioritization), a core problem with the “neutrality”/expert-views framing of this comment is selection bias. We would naively expect people who spend a lot of time on cause prioritization to systematically overrate (relative to the broader community) both the non-obviousness of the most important causes, and their esotericism.
Put another way, if you were a thoughtful, altruistic person who heard about EA in 2013 and your first instinct was to start what would become Wave or earn-to-give for global poverty, you’d be systematically less represented in such a survey.
Now, I happen to think focusing a lot on cause prioritization is correct: I think ethics is hard, in many weird and surprising ways. But I don’t think I can (justifiably) get this from expert appeal/deference; it all comes down to specific beliefs I have about the world and how hard it is, and to some degree to making specific bets that my own epistemology isn’t completely screwed up (because if it were, I probably can’t have much of an impact anyway).
Analogously, I also think we should update approximately not at all on the existence of God if we see surveys that philosophers of religion are much more likely to believe in God than other philosophers, or if ethicists are more likely to be deontologists than utilitarians.
I agree that all sorts of selection biases are going to be at play in this sort of project: the methodology would be a minefield and I don’t have all the answers.
I agree that there’s going to be a selection bias towards people who think cause prio is hard. Honestly, I guess I also believe that ethics is hard, so I was basically assuming that worldview. But maybe this is a very contentious position? I’d be interested to hear from anyone who thinks that cause prio is just really easy.
More generally, I agree that I/CEA can’t just defer our way out of this problem or other problems: you always need to choose the experts or the methodology or whatever. But, partly because ethics seems hard to me, I feel better about something like what I proposed, rather than just going with our staff’s best guess (when we mostly haven’t engaged deeply with all of the arguments).
I agree that there’s going to be a selection bias towards people who think cause prio is hard.
To be more explicit, there’s also a selection bias towards esotericism: how much you think most of the work is “done for you” by the rest of the world (e.g. in development economics or moral philosophy), versus how much you need to come up with the frameworks yourself.
As a side note, I think there’s an analogous selection bias within longtermism, where many of our best and brightest people end up doing technical alignment, making it harder to think clearly about other longtermist issues (including issues directly related to making the development of transformative AI go well, like understanding the AI strategic landscape and AI safety recruitment strategy).
I do want to note that all of the things you list took place around 2017-2018, and our work and plans have changed since then.
My observations about 80k, GPI, and CFAR are all ongoing (though they originated earlier). I also think there are plenty of post-2018 examples related to CEA’s work, such as the Introductory Fellowship content Michael noted (not to mention the unexplained downvoting he got for doing so), Domassoglia’s observations about the most recent EAG and EAGx (James hits on similar themes), and the late 2019 event that was framed as a “Leader’s Forum” but was actually “some people who CEA staff think it would be useful to get together for a few days” (your words), with those people skewing heavily longtermist. I think all of these things “contribute to the view that EA essentially is longtermism/AI Safety” (though of course longtermism could be “right”, in which case these would all be positive developments).
Where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).
I would love someone to do a proper survey of everyone (trying to avoid one’s own personal networks) who has spent >1 year thinking about cause prioritization with a scope-sensitive and open-minded lens. I’ve tried to commission someone to do this a couple of times but it hasn’t worked out. If someone did this, it would help to shape our content, so I’d be happy to offer some advice and could likely find funding. If anyone is interested, let me know!
I agree with Linch’s concern about the selection bias this might entail: “a core problem with the “neutrality”/expert-views framing of this comment is selection bias. We would naively expect people who spend a lot of time on cause prioritization to systematically overrate (relative to the broader community) both the non-obviousness of the most important causes, and their esotericism.”
Also related to selection bias: most of the opportunities and incentives to work on cause prioritization have been at places like GPI or Forethought Foundation that use a longtermist lens, making it difficult to find an unbiased set of experts. I’m not sure how to get around this issue. Even trying to capture the views of the EA community as a whole (at the expense of deferring to experts) would be problematic to the extent “mistakes” have shaped the composition of the community by making EA more attractive to longtermists and less attractive to neartermists.
I appreciate that CEA is looking to “outsource” cause prioritization in some way. I just have concerns about how this will work in practice, as it strikes me as a very difficult thing to do well.
I also strongly share this worry about selection effects. There are additional challenges to those mentioned already: the more EA looks like an answer, rather than a question, the more inclined anyone who doesn’t share that answer is simply to ‘exit’, rather than ‘voice’, leading to an increasing skew over time of what putative experts believe. A related issue is that, if you want to work on animal welfare or global development you can do that without participating in EA, which is much harder if you want to work on longtermism.
Further, it’s a sort of double counting if you consider people as experts because they work in a particular organisation when they would only realistically be hired if they had a certain worldview. If FHI hired 100 more staff, and they were polled, I’m not sure we should update our view on what the expert consensus is any more than I should become more certain of the day’s events by reading different copies of the same newspaper. (I mean no offence to FHI or its staff, by the way, it’s just a salient example).
I can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website)… Often my take on these cases is more like “it’s bad that we called this thing ‘EA’”, rather than “it’s bad that we did this thing”… I think that calling things “EA” means that there’s a higher standard of representativeness, which we sometimes failed to meet.
I do want to note that all of the things you list took place around 2017-2018, and our work and plans have changed since then. For instance… the EA Handbook is different.
The EA Handbook is different, but as far as I can tell the mistakes made with the Handbook 2.0 were repeated for the 3rd edition.
CEA describing those “mistakes” around the Handbook 2.0:
“it emphasized our longtermist view of cause prioritization, contained little information about why many EAs prioritize global health and animal advocacy, and focused on risks from AI to a much greater extent than any other cause. This caused some community members to feel that CEA was dismissive of the causes they valued. We think that this project ostensibly represented EA thinking as a whole, but actually represented the views of some of CEA’s staff, which was a mistake. We think we should either have changed the content, or have presented the content in a way that made clear what it was meant to represent.”
CEA acknowledges it was a mistake for the 2nd edition to exclude the views of large portions of the community while framing the content as representative of EA. But the 3rd edition does the exact same thing!
As Michael relates, he observed to CEA staff that the EA Introductory Fellowship curriculum was heavily skewed toward longtermist content, and was told that it had been created without much/any input from non-longtermists. Since the Intro Fellowship curriculum is identical to the EA Handbook 3.0 material, that means non-longtermists had minimal input on the Handbook.
Despite that, the Handbook 3.0 and the Intro Fellowship curriculum (and for that matter the In-Depth EA Program, which includes topics on biorisk and AI but nothing on animals or poverty) are clearly framed as EA materials, which you say should be held to “a higher standard of representativeness”, rather than simply reflecting CEA’s views. So I struggle to see how the Handbook 3.0 (and other content) isn’t simply repeating the mistakes of the second edition; it feels like we’re right back where we were four years ago. Arguably a worse place, since at least the Handbook 2.0 was updated to clarify that CEA selected the content and other community members might disagree.
I realize CEA posted on the Forum soliciting suggestions on what should be included in the 3rd edition and asking for feedback on an initial sequence on motivation (which doesn’t seem to have made it into the final handbook). But from Michael’s anecdote, it doesn’t sound like CEA reached out to critics of the 2nd edition or the animal or poverty communities. I would have expected those steps to be taken given the criticism surrounding the 2nd edition, CEA’s response to that criticism and its characterization of how it addressed its mistakes (“we took this [community] feedback into account when we developed the latest version of the handbook”), and how the 3rd edition is still framed as “EA” vs. “CEA’s take on EA”.
Hey, I’ve just messaged the people directly involved to double check, but my memory is that we did check in with some non-longtermists, including previous critics (as well as asking more broadly for input, as you note). (I’m not sure exactly what causes the disconnect between this and what Aaron is saying, but Aaron was not the person leading this project.) In any case, we’re working on another update, and I’ll make sure to run that version by some critics/non-longtermists.
Also, per other bits of my reply, we’re aiming to be ~70-80% longtermist, and I think that the intro curriculum is consistent with that. (We are not aiming to give equal weight to all cause areas, or to represent the views of everyone who fills out the EA survey.)
Since the content is aiming to represent the range of expert opinion in EA, since we encourage people to reflect on the readings and form their own views, and since we asked the community for input into it, I think that calling it the “EA Handbook” is more appropriate than it was for the previous edition.
I don’t recall seeing the ~70-80% number mentioned before in previous posts but I may have missed it.
I’m curious to know what the numbers are for the other cause areas and to see the reasoning for each laid out transparently in a separate post.
I think that CEA’s cause prioritisation is the closest thing the community has to an EA ‘parliament’, and for that process to have legitimacy it should be presented openly and be subject to critique.
Agree! This decision has huge implications for the entire community, and should be made explicitly and transparently.
Thanks for following up regarding who was consulted on the Fellowship content.
And it’s nice to know you’re planning to run the upcoming update by some critics. Proactively seeking out critical opinions seems quite important, as I suspect many critics won’t respond to general requests for feedback due to a concern that they’ll be ignored. Michael noted that concern; I’ve personally been discouraged from offering feedback because of it (I’ve engaged with this thread to help people understand the context and history of the current state of EA cause prioritization, not because I really expect CEA to meaningfully change its content/behavior); and I can’t imagine we’re alone in this.
I’ve engaged with this thread to help people understand the context and history of the current state of EA cause prioritization, not because I really expect CEA to meaningfully change its content/behavior
Fwiw, my model of CEA is approximately that it doesn’t want to look like it’s ignoring differing opinions but that, nevertheless, it isn’t super fussed about integrating them or changing what it does.
This is my view of CEA as an organisation, not of its staff as individuals. Basically, every CEA staff member I’ve ever met (including Max D) has been a really lovely, thoughtful individual.
I agree with your takes on CEA as an organization and as individuals (including Max).
Personally, I’d have a more positive view of CEA the organization if it were more transparent about its strategy around cause prioritization and representativeness (even if I disagree with the strategy) vs. trying to make it look like they are more representative than they are. E.g. Max has made it pretty clear in these comments that poverty and animal welfare aren’t high priorities, but you wouldn’t know that from reading CEA’s strategy page where the very first sentence states: “CEA’s overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.”
It’s possibly worth flagging that these are (sadly) quite long-running issues. I wrote an EA Forum post now 5 years ago on the ‘marketing gap’, the tension between what EA organisations present EA as being about and what those organisations believe it should be about, and arguing they should be more ‘morally inclusive’. By ‘morally inclusive’, I mean welcoming and representing the various different ways of doing the most good that thoughtful, dedicated individuals have proposed.
This gap has since closed a bit, although not always in the way I hoped for, i.e. greater transparency and inclusiveness. As two examples, GWWC has been spun off from CEA, rebooted, and now does seem to be cause neutral. 80k is much more openly longtermist.
I recognise this is a challenging issue, but I still think the right solution to this is for the more central EA organisations to actually try hard to be morally inclusive. I’ve been really impressed at how well GWWC seem to be doing this. I think it’s worth doing this for the same reasons I gave in that (now ancient) blogpost: it reduces groupthink, increases movement size, and reduces infighting. If people truly felt like EA was morally inclusive, I don’t think this post, or any of these comments (including this one), would have been written.
Thanks for sharing that post! Very well thought out and prescient, just unfortunate (through no fault of yours) that it’s still quite timely.
Well, now that GiveWell has already put in the years of vetting work, we can reliably have a pretty large impact on global poverty just by channeling however many millions to AMF and similar charities. And I guess we don’t really need to do much more than that.
While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.
I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn’t reply. I also had my version reviewed by a dozen test readers (at least three readers for each section), who provided additional feedback on all of the material.
I incorporated many of the suggestions I received, though at this point I don’t remember which came from Michael, Sella, or other readers. I also made many changes on my own.
It’s reasonable to argue that I should have reached out to even more people, or incorporated more of the feedback I received. But I, and the other people who worked on this at CEA, were very aware of representativeness concerns. And I think the 3rd edition was a lot more balanced than the 2nd edition. I’d break down the sections as follows:
“The Effectiveness Mindset”, “Differences in Impact”, and “Expanding Our Compassion” are about EA philosophy with a near-term focus (most of the pieces use examples from near-term causes, and the “More to Explore” sections share a bunch of material specifically focused on animal welfare and global development).
“Longtermism” and “Existential Risk” are about longtermism and X-risk in general.
“Emerging Technologies” covers AI and biorisk specifically.
These topics get more specific detail than animal welfare and global development do if you look at the required reading alone. This is a real imbalance, but seems minor compared to the imbalance in the 2nd edition. For example, the 3rd edition doesn’t set aside a large chunk of the only global health + development essay for “why you might not want to work in this area”.
“What might we be missing?” covers a range of critical arguments, including many against longtermism!
Michael Plant seems not to have noticed the longtermism critiques in his comment, though they include “Pascal’s Mugging” in the “Essentials” section and a bunch of other relevant material in the “More to Explore” section.
“Putting it into practice” is focused on career choice and links mostly to 80K resources, which does give it a longtermist tilt. But it also links to a bunch of resources on finding careers in neartermist spaces, and if someone wanted to work on e.g. global health, I think they’d still find much to value among those links.
I wouldn’t be surprised if this section became much more balanced over time as more material becomes available from Probably Good (and other career orgs focused on specific areas).
In the end, you have three “neartermist” sections, four “longtermist” sections (if you count career choice), and one “neutral” section (critiques and counter-critiques that span the gamut of common focus areas).
Thanks for sharing this history and your perspective Aaron.
I agree that 1) the problems with the 3rd edition were less severe than those with the 2nd edition (though I’d say that’s a very low bar to clear) and 2) the 3rd edition looks more representative if you weigh the “more to explore” sections equally with “the essentials” (though IMO it’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than content linked to at the bottom of the “further reading” section.)
I disagree with your characterization of “The Effectiveness Mindset”, “Differences in Impact”, and “Expanding Our Compassion” as neartermist content in a way that’s comparable to how subsequent sections are longtermist content. The early sections include some content that is clearly neartermist (e.g. “The case against speciesism”, and “The moral imperative toward cost-effectiveness in global health”). But much, maybe most, of the “essential” reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.
By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” section is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.
I also disagree that the “What might we be missing?” section places much emphasis on longtermist critiques (outside of the “more to explore” section, which I don’t think carries much weight as mentioned earlier). “Pascal’s mugging” is relevant to, but not specific to, longtermism, and “The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se, it more argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research. I find it telling that “Objections to EA” (framed as a bit of a laundry list) doesn’t include anything about longtermism and that as far as I can tell no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really really hard to influence the far future so we should be skeptical of our ability to do so).
Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole. While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated). I’d feel better about the process if, for example, you’d posted in poverty and animal focused Facebook groups and offered to pay people (like the test readers were paid) to weigh in on whether the handbook represented their cause appropriately.
I’ll read any reply to this and make sure CEA sees it, but I don’t plan to respond further myself, as I’m no longer working on this project.
Thanks for the response. I agree with some of your points and disagree with others.
To preface this, I wouldn’t make a claim like “the 3rd edition was representative for X definition of the word” or “I was satisfied with the Handbook when we published it” (I left CEA with 19 pages of notes on changes I was considering). There’s plenty of good criticism that one could make of it, from almost any perspective.
It’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than content linked to at the bottom of the “further reading” section.
I agree.
But much, maybe most, of the “essential” reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.
Many of these have ideas that can be applied to either perspective. But the actual things they discuss are mostly near-term causes.
“On Fringe Ideas” focuses on wild animal welfare.
“We are in triage” ends with a discussion of global development (an area where the triage metaphor makes far more intuitive sense than it does for longtermist areas).
“Radical Empathy” is almost entirely focused on specific neartermist causes.
“Can one person make a difference” features three people who made a big difference — two doctors and Petrov. Long-term impact gets a brief shout-out at the end, but the impact of each person is measured by how many lives they saved in their own time (or through to the present day).
This is different from e.g. detailed pieces describing causes like malaria prevention or vitamin supplementation. I think that’s a real gap in the Handbook, and worth addressing.
But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).
However, I may be biased here by my teaching experience. In the two introductory fellowships I’ve facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.
By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” section is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.
I agree that the reading in these sections is more focused. Nonetheless, I still feel like there’s a decent balance, for reasons that aren’t obvious from the content alone:
Most people have a better intuitive sense for neartermist causes and ideas. I found that longtermism (and AI specifically) required more explanation and discussion before people understood them, relative to the causes and ideas mentioned in the first three weeks. Population ethics alone took up most of a week.
“Longtermist” causes sometimes aren’t. I still don’t quite understand how we decided to add pandemic prevention to the “longtermist” bucket. When that issue came up, people were intensely interested and found the subject relevant to their own lives/the lives of people they knew.
I wouldn’t be surprised if many people in EA (including people in my intro fellowships) saw many of Toby Ord’s “policy and research ideas” as competitive with AMF just for saving people alive today.
I assume there are also people who would see AMF as competitive with many longtermist orgs in terms of improving the future, but I’d guess they aren’t nearly as common.
“Pascal’s mugging” is relevant to, but not specific to, longtermism
I don’t think I’ve seen Pascal’s Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand for where people have applied the idea to a neartermist cause?
“The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se, it more argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research.
I agree. I wouldn’t think of that piece as critical of longtermism.
As far as I can tell, no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really really hard to influence the far future so we should be skeptical of our ability to do so).
I haven’t gone back to check all the material, but I assume you’re correct. I think it would be useful to add more content on this point.
This is another case where my experience as a facilitator warps my perspective; I think both of my groups discussed this, so it didn’t occur to me that it wasn’t an “official” topic.
Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole.
I agree. That wasn’t the purpose of selecting test readers; I mentioned them only because some of them happened to make useful suggestions on this front.
While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated).
I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, the other seemed excited/interested but never wound up sending anything.
A 50% useful-response rate isn’t bad, and makes me wish I’d sent more of those emails. My excuse is the dumb-but-true “I was busy, and this was one project among many”.
(As an aside, if someone wanted to draft a near-term-focused version of the Handbook, I think they’d have a very good shot at getting a grant.)
I’d feel better about the process if, for example, you’d posted in poverty and animal focused Facebook groups and offered to pay people (like the test readers were paid) to weigh in on whether the handbook represented their cause appropriately.
I’d probably have asked “what else should we include?” rather than “is this current stuff good?”, but I agree with this in spirit.
(As another aside, if you specifically have ideas for material you’d like to see included, I’d be happy to pass them along to CEA — or you could contact someone like Max or Lizka.)
But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).
However, I may be biased here by my teaching experience. In the two introductory fellowships I’ve facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.
That’s helpful anecdata about your teaching experience. I’d love to see a more rigorous and thorough study of how participants respond to the fellowships to see how representative your experience is.
I don’t think I’ve seen Pascal’s Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand for where people have applied the idea to a neartermist cause?
I’m pretty sure I’ve heard it used in the context of a scenario questioning whether torture is justified to stop the threat of a dirty bomb that’s about to go off in a city.
I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, the other seemed excited/interested but never wound up sending anything.
A 50% useful-response rate isn’t bad, and makes me wish I’d sent more of those emails. My excuse is the dumb-but-true “I was busy, and this was one project among many”.
That’s a good excuse :) I misinterpreted Michael’s previous comment as saying his feedback didn’t get incorporated at all. This process seems better than I’d realized (though still short of what I’d have liked to see after the negative reaction to the 2nd edition).
if you specifically have ideas for material you’d like to see included, I’d be happy to pass them along to CEA — or you could contact someone like Max or Lizka.
GiveWell’s Giving 101 would be a great fit for global poverty. For animal welfare content, I’d suggest making the first chapter of Animal Liberation part of the essential content (or at least further reading), rather than part of the “more to explore” content. But my meta-suggestion would be to ask people who specialize in doing poverty/animal outreach for suggestions.
EA Grants changed a lot after the first round and was closed down around 2019
Did subsequent rounds of EA Grants give non-trivial amounts to animal welfare and/or global poverty? What percentage of funding did these cause areas receive, and how much went to longtermist causes? Only the first round of grants was made public.
I did more research and talked to more people, and I think that 60% is closer to the right skew here. (I also think that there are some other important considerations that I don’t go into here.) I still endorse the other parts of this comment (apart from the 70-80% bit).