Can you give an example of a time when you believe that the EA community got the wrong answer to an important question as a result of not following your advice here, and how we could have gotten the right answer by following it?
Links aren’t working.
Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?
Does your recommendation account for the staff-time costs of doing anything other than whatever an org’s current setup is? Orgs like CEA have stated that this is why they don’t do financial-optimization things like this.
I don’t think there was necessarily anything wrong with it, I’d just encourage future organizers to consider more explicitly what the goal is and how to achieve it.
No one on the team knew the donor, though he had donated to EA causes in the past and was acquainted with relevant people at CEA. We offered him VIP tickets and then he put $2,000 in the pay-what-you-want box in our online ticketing system. I think it was primarily thought of as defraying conference costs, and indeed we came in less than $2,000 under budget.
The organizers included Matt Reardon (OP and lead organizer) from Harvard Law School, Jen Eason and Vanessa Ruales from Harvard College, Juan Gil from MIT, Rebecca Baron from Tufts, and myself (no institutional affiliation).
When writing this postmortem, we actually did devote a section of it to a discussion of how the content was received, including individual presentations. Because most of the speakers were invited guests, this section will not be made public. I can share a few overall conclusions.
Overall, reception of the content in aggregate was positive. Some attendees were surprised by, and in a few cases critical of, the proportion of it devoted to animal welfare. This was not by design; most of the conference organizers are interested in animal welfare, but not more so than in other EA focus areas. Rather, it was determined primarily by the availability of speakers (most notably keynote speaker Bruce Friedrich). A few talks were also criticized by some attendees for being overly technical or of narrow interest.
Most of the panels were moderated by members of the organizing team; I think it would have been better to have these be moderated by people with deeper knowledge of the respective topics.
The anti-debate was an interesting idea whose specific workings we largely made up ad hoc. I’d like to see it tried again, but only after further refinement of the format and clarity on how exactly it is supposed to work.
I don’t think nobody delved into the Cool Earth numbers because they assumed a bunch of smart people had already done it. I think nobody delved into the Cool Earth numbers because it wasn’t worth their time, because climate change charities generally aren’t competitive with the standard EA donation opportunities, so the question is only relevant if you’ve decided for non-EA reasons that you’re going to focus on climate change. (Indeed, if I understand correctly the Founders Pledge report was written primarily for non-EA donors who’d decided this.)
Whatever’s been going on with global poverty and AI risk, I think it’s probably a different problem.
(And yes, Doing Good Better was part of what I was referring to with respect to nuance getting lost in popularizations. It’s that problem specifically that I claim is difficult, not the more general problem of groupthink within EA.)
I don’t think I would call this hubris. We all knew that the Cool Earth recommendation was low-confidence. But what else were we going to do? To paraphrase Scott Alexander from another recent community controversy, our probability distribution was wide but centered around Cool Earth.
I do think that that nuance occasionally got lost when doing outreach to people not already very informed about EA, but that’s a different problem. We haven’t solved it, but I feel like that’s because it’s hard, not because nobody’s thought about it.
(One could also argue that outreach to mainstream audiences about EA shouldn’t discuss climate change at all, given its place in the movement, but the temptation to make those mainstream audiences more receptive by talking about something they already care about is strong.)
I suspect that it was widely recognized for quite some time that GWWC’s analysis of Cool Earth was outdated enough not to be trustworthy. People donated to Cool Earth anyway because it was the only climate-change charity that we had any particular reason to believe was better than others. This, of course, has changed with the Founders Pledge report, and as such I predict that EA interest in Cool Earth will fade with time.
I looked a little to try to figure out why the criticisms of Cool Earth don’t also apply to the Coalition for Rainforest Nations. It sounds like the primary reason is that CfRN influences nationwide policy, so the loggers can be displaced only to a different country, which is inconvenient enough that most would give up.
Also, the cases for contraception and female education as climate-change interventions seem much, much more speculative than the case for rainforest conservation, so much so that their respective cost-effectiveness numbers probably ought not to be directly compared.
GiveWell doesn’t directly use literal DALYs in their current cost-effectiveness estimates. They have a research page on them; the linked blog posts were originally published a long time ago, but were updated relatively recently, so they presumably still stand by them. See also this more recent post.
GiveWell’s cost-effectiveness spreadsheet includes a tab on moral weights. You can make a copy of it, change the numbers to represent your preferred views on population ethics, and see what this does to the results.
I think the big problem with the narrow focus is that newbie EAs, especially if they’re students, tend to get saturated with the message that the way to do good with your life is to go to 80,000 Hours and follow their career advice. Indeed, CEA’s official advice for local group leaders says to heavily emphasize this. And they get this message relatively early in the sales funnel, long before they’ve gone through anything that would filter out the majority who aren’t good candidates for 80,000 Hours’s top priority paths. So it ought not to surprise anyone that a huge fraction of them come away demoralized.
There’s an obvious sense in which this is still the impact-maximizing approach, in that the global utilitarian cost of demoralizing a bunch of people who weren’t going to change the world anyway is likely outweighed by the benefit of getting even one person who needed that extra push to start working on a priority program. But it still leaves a bad taste in my mouth. I feel as though, if EA is going to choose to be a community (as opposed to just a thing that some individuals happen to do), then it has at least some kind of responsibility to take care of its own, separate from its mission to maximize aggregate global utility. And there’s a sense in which setting up expectations that most of us can’t live up to constitutes a systematic failure to do that.
(Incidentally, I think most local group leaders don’t want to send their members through the gauntlet like this. But even if they realize that there’s a problem, it’s still the accepted thing to do and they don’t have any better ideas. EAs want to be doing something impactful, or else they wouldn’t be EAs, and there aren’t a lot of great alternative activities that groups of nonspecialists can do, especially now that fundraising for GiveWell top charities has (rightly) gone out of fashion.)
I suspect that it is a bad idea to publicly advocate this (though using it is fine). I’m not worried so much about moral licensing; rather, I think the amount of money being moved in this way is so tiny, relative to the amount of attention required in order to move it, that in a genuinely impact-focused discussion of possible ways to do good it would not even come up. I fear that bringing it up in association with EA gives a misleading impression of what the EA approach to prioritization looks like.
Is that form supposed to be accessible to people outside CEA? Right now it’s not.
Prior work on this topic [PDF]
All of the endnote links are broken.
Is the nomination form supposed to have contact information? I just nominated a potential speaker who I’m connected to, but realized that you may have no way to get in touch with me.
So assuming you don’t win, are you allowed to post your essay on your own blog? Or would this undermine CEA’s ability to cannibalize bits of it?