I work as an engineer, donate 10% of my income, and occasionally enjoy doing independent research. I’m most interested in farmed animal welfare and the nitty-gritty details of global health and development work. In 2022, I was a co-winner of the GiveWell Change Our Mind Contest.
I’m not a lawyer, but my understanding is that even informal agreements against headhunting other EA organizations’ employees would likely violate US antitrust law.
I share your sentiment, but I do think the choice of who to center in those messages matters. To use a sort of absurd example, if QUALY the lightbulb was caught robbing a bank, it would come across as tone deaf to say “I would like to emphasize my wishes of wellness and care for QUALY, their bank robber crew, and the many others impacted by this.”
There’s a lot we don’t know at this point. Was FTX’s downfall the result of bad luck? Poor judgment? Bad practices? Especially without knowing more, I’d be wary of messages that center care for Sam and the FTX team over care for the retail investors and customers who may end up facing the worst hardship as a result of these events.
Also, I found the lack of discussion of animal welfare frustrating. That’s one of the three big cause areas within EA (or one of four if you count community building)!
I think there’s a good chance this basic point is right, but I’m not sure your takeaway from the Samotsvety forecast is correct? I think the 3-100 hours lost in expectation is based on the current information about risk. The Samotsvety forecast is that conditional on a nuclear weapon being used in Ukraine, there is a ~2% chance of London being nuked, and I think the mean estimate of expected hours of life lost if one stays in London in that case is ~2,000. That’s a substantial number of lost hours, and I can see it being rational to get to a safer location if those are the stakes.
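To make the arithmetic explicit (this is my own back-of-the-envelope reading of the numbers above, not Samotsvety’s decomposition):

\[
\mathbb{E}[\text{hours lost}] = P(\text{scenario}) \times \mathbb{E}[\text{hours lost} \mid \text{scenario}]
\]

For the unconditional 3-100 hour figure to be consistent with a ~2,000-hour conditional loss, the implied probability of the scenario in which that conditional figure applies would need to be somewhere between 3/2000 ≈ 0.15% and 100/2000 = 5%.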
Very exciting to see this rolled out! I love the new recommendations page, and I’m thrilled that GWWC is taking the “evaluate the evaluators” mission seriously. The one thing I’m confused by is the new GWWC funds. Don’t EA Funds already serve as the natural choice for donors who want high-impact giving opportunities within a particular cause area and don’t want to worry about manually updating their selections as recommendations change? Having a duplicate set of funds within Effective Ventures seems like it will add overhead and confusion without providing a clear benefit.

In trying to think through the potential benefits, I do see how having the GWWC funds would make it possible to stop recommending certain EA Funds in future years if you were to find issues with their grantmaking. But it seems like those kinds of issues could also be addressed by the EA Funds making changes in response to the GWWC research team’s findings. Having two sets of competing funds trying to do the same thing within EV looks like a potentially poor use of resources unless there’s a clear justification for keeping them separate.
My apologies if this question has already been addressed elsewhere; I tried to look back through the previous announcement and AMA but may have missed some discussions.
This is pretty astounding. It’s consistent with all the other recent progress AI/ML systems have been making in playing competitive and cooperative strategy games, as well as in using language, but it’s still a really impressive outcome. My sense is that this is the kind of result you’d tend to see in a world with shorter rather than longer timelines.
As for my personal feelings on the matter, I think they’d best be summed up by this image.
I also watched the video and was also pleasantly surprised by how fair it ended up feeling.
For what it’s worth, I didn’t find the EA and systemic change section to be that interesting, but that might just be because it’s a critique I’ve spent time reading about previously. My guess is that most other forum readers won’t find much new in that section relative to existing discussions of the issue.

Thorn also doesn’t mention anything about tradeoffs or opportunity costs in making that critique, which makes it feel like the section is really missing something. For practical purposes, the systemic change argument she’s making requires arguing that it’s worth letting a substantial number of people die from preventable diseases (plus letting a substantial number of people suffer from lack of mental healthcare, letting a substantial number of animals be subject to terrible conditions on factory farms, etc.) in the short run in order to bring about systemic change that will do more to save and improve lives in the long run. It’s possible that’s right, but making that case requires a clear understanding of what those opportunity costs are and a justification of why they would be worth accepting.
Here’s a Twitter thread by Nathan Young with some further discussion and links to Manifold markets about the sale’s impact on the FTX Future Fund.
I think this is perfectly good to try, but I’m personally skeptical that it will end up being especially useful. My sense is that right now, there isn’t a shortage of frontpage content on the forum. Rather, there seems to often be a shortage of deep reading, engagement, and discussion when someone writes a long object-level post. I would be interested to see initiatives aimed at fostering that kind of deeper engagement with content, rather than at trying to get more frontpage posts.
It’s awesome that you’ve put this together, as I think this is really valuable information. Honestly, what surprises me most here is how similar all four organizations’ numbers are across most of the items involved.
As you pointed out, however, using the highest possible value for HLI’s value of extending a life by a year definitely undersells how different HLI is from the others. I think it would be better to explicitly show both endpoints of the range HLI considers, which includes negative values on the low end. Without that, I worry that readers not already highly familiar with HLI’s work would come away with an inaccurate impression of HLI’s views.
(Note that this comment is quick and not super well thought out. I hope to research and think about it more deeply at some point in the future, and maybe write it up in a better form).
As with many articles critical of EA, this article spends a while arguing against the early EA focus on earning to give:
To that end, I heard an EA-sympathetic graduate student explaining to a law student that she shouldn’t be a public defender, because it would be morally more beneficial for her to work at a large corporate law firm and donate most of her salary to an anti-malaria charity. The argument he made was that if she didn’t become a public defender, someone else would fill the post, but if she didn’t take the position as a Wall Street lawyer, the person who did take it probably wouldn’t donate their income to charity, thus by taking the public defender job instead of the Wall Street job she was essentially murdering the people whose lives she could have saved by donating a Wall Street income to charity.
...
MacAskill wrote a moral philosophy paper arguing that even if we “suppose that the typical petrochemical company harms others by adding to the overall production of CO2 and thereby speeding up anthropogenic climate change” (a thing we do not need to “suppose”), if working for one would be “more lucrative” than any other career, “thereby enabling [a person] to donate more” then “the fact that she would be working for a company that harms others through producing CO2” wouldn’t be “a reason against her pursuing that career” since it “only makes others worse off if more CO2 is produced as a result of her working in that job than as a result of her replacement working in that job.” (You can of course see here the basic outlines of an EA argument in favor of becoming a concentration camp guard, if doing so was lucrative and someone else would take the job if you didn’t. But MacAskill says that concentration camp guards are “reprehensible” while it is merely “morally controversial” to take jobs like working for the fossil fuel industry, the arms industry, or making money “speculating on wheat, thereby increasing price volatility and disrupting the livelihoods of the global poor.” It remains unclear how one draws the line between “reprehensibly” causing other people’s deaths and merely “controversially” causing them.)
It’s a little frustrating to me that EA orgs and public figures have basically conceded this argument and tend to shy away from actively defending earning to give as a standard EA path. I think the utilitarian argument the quoted graduate student was making is basically correct (with the need to properly account for one’s career decision marginally impacting salaries in one’s field, and for whether one is likely to be a more effective worker than the person one is displacing). On the flip side, I don’t think the deontological argument NJR is making holds up that well under scrutiny. Current Affairs is a print magazine; printing and mailing thousands of copies every month contributes to resource use and climate change. NJR is presumably okay with this because he thinks the benefits of educating and informing his readership exceed the harms of his resource use. In the same way, working in a job that produces some harms can be okay if the net benefits of donating one’s income substantially outweigh those harms. This gets even starker when you actually think through the human scale of it all. Imagine having to tell ten thousand parents that the reason their kids won’t get anti-malaria pills this year is that your working as a stock trader would violate the categorical imperative. It sounds absurd, but that’s the kind of thing we’re talking about here.
Something that I do think NJR and I would agree on is that it’s really screwed up that the world is in this situation to start with. There’s something deeply unjust about a random American lawyer getting to decide whether people die from malaria based on their career and donation decisions. But we can’t wave a magic wand and change that overnight, and choosing to focus only on efforts to create systemic change means not getting lifesaving medicine to a ton of people who need it right now. I wish critics engaged more deeply with those really hard tradeoffs, and that EAs did a better job of articulating them. Trying to sidestep the conversation about earning to give undersells the moral challenge and the stakes we’re dealing with.
This is fantastic! Thanks for publishing this update, as well as for all the work you’ve done over the past two years. I’ve been very impressed at how well SWP has done at securing commitments and creating partnerships within the industry. It seems like you all have a very strong and potentially very cost-effective plan for next steps, and I’m excited to dig further into the latest research updates.
A few questions:
The BOTEC you published a few months ago estimated a cost-effectiveness of about 4000 shrimp stunned per dollar per year. Can you talk about some of the factors that led to the updated estimate being 1500? Is the 1500 shrimp/$/year number accounting for more of the overhead costs associated with the stunners program?
The ASC consultation document uses the acronym UoC a bunch. Can you explain what that means?
Is there a video of the panel discussion from the Global Shrimp Forum?
I also continue to be surprised that there hasn’t been more effort within the alt protein space on cultivated or plant-based shrimp paste. As you noted in the alternative shrimp report, shrimp paste seems like not only a huge market, but also one of the easier animal products to replicate from a taste and texture perspective. That might be a really promising area for other orgs to focus on.
Can you give examples of EAs harshly punishing visible failures that weren’t matters of genuine unethical conduct? I can think of some pretty big visible failures that didn’t lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility): for example, Evidence Action discovering that No Lean Season didn’t work and terminating it, or GiveDirectly’s recent fraud problems after suspending some of their standard processes to get money out quickly in a war zone. Maybe people have different standards for failure in longtermist/meta EA work?
This is really fascinating and in-depth work, and seems very valuable. You might want to consider working for GiveWell or Open Phil given your skillset and clear passion for this topic!
I did want to comment about one particular item you mentioned. You said:
New Incentives includes an ‘adjustment towards skeptical prior’, while no other charity does
I think this is not actually correct? Deworming charities have a “replicability adjustment for deworming” applied to the cost-effectiveness number (see e.g. line 11 of the Deworm the World sheet), which is arrived at via a similar kind of Bayesian framework.
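For readers less familiar with this kind of adjustment, here’s a minimal sketch of the standard normal-normal shrinkage formula that such Bayesian frameworks are typically built on (my illustration of the general technique, not GiveWell’s actual model):

\[
\mu_{\text{post}} = \frac{\sigma_e^2 \, \mu_0 + \sigma_0^2 \, \hat{x}}{\sigma_0^2 + \sigma_e^2}
\]

where \(\mu_0\) and \(\sigma_0^2\) are the mean and variance of the skeptical prior, and \(\hat{x}\) and \(\sigma_e^2\) are the study estimate and its sampling variance. The noisier the estimate relative to the prior, the further the posterior is pulled back toward the prior, which is what an “adjustment towards skeptical prior” or “replicability adjustment” accomplishes.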
I upvoted this despite not personally agreeing with it because I think it’s a good demonstration of the reputational risks associated with this strategy.
Thanks for putting this together! I’m skeptical of putting too much weight on the conclusions (just given how much uncertainty there is), but I think this is a valuable addition to the conversation on this subject.
It’s worth noting that MacAskill touched on this topic in chapter 9 of WWOTF, using neuron counts as a proxy for moral weight. He comes to a very different conclusion, which makes sense given that neuron count-based moral weights for basically all animals are much lower than the RP moral weights:
To capture the importance of differences in capacity for wellbeing, we could, as a very rough heuristic, weight animals’ interests by the number of neurons they have. The motivating thought behind weighting by neurons is that, since we know that conscious experience of pain is the result of activity in certain neurons in the brain, then it should not matter more that the neurons are divided up among four hundred chickens rather than present in one human. If we do this, then a beetle with 50,000 neurons would have very little capacity for wellbeing; honeybees, with 960,000 neurons, would count a little more; chickens, with 200 million neurons, count a lot more; and humans, with over 80 billion neurons, count the most. This gives a very different picture than looking solely at numbers of animals: by neuron count, humans outweigh all farmed animals (including farmed fish) by a factor of thirty to one. This was very surprising to me; before looking into this, I hadn’t appreciated just how great the difference in brain size is between human beings and nonhuman animals.
If, however, we allow neuron count as a rough proxy, we get the conclusion that the total weighted interests of farmed land animals are fairly small compared to that of humans, though their wellbeing is decisively negative.
This does not yet resolve whether the welfare of humans and farmed animals combined is negative. Even though, in totality, farmed animals may have fewer neurons, the vast majority of farmed animals (chicken and fish) live lives full of intense suffering, which could well outweigh total human wellbeing. If the intensity of the suffering of chickens and fish is at least forty times the intensity of average human happiness, then the combined wellbeing of humans and farmed animals is negative.
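To spell out the arithmetic in that last step (my reconstruction of the comparison, not MacAskill’s own presentation): if humans’ neuron-weighted interests total about thirty times those of farmed animals, and the intensity of farmed-animal suffering is \(k\) times the intensity of average human happiness, then combined wellbeing is roughly

\[
W_{\text{human}} - \frac{k}{30} \, W_{\text{human}},
\]

which turns negative once \(k\) exceeds thirty. MacAskill’s threshold of forty rather than thirty presumably reflects chickens and fish making up most, but not all, of the farmed-animal neuron total.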
Are you willing to share your underlying source code? I might be interested in adding uncertainty to the analysis.
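To clarify what I mean by adding uncertainty: something like replacing the point estimates with distributions and propagating them via Monte Carlo. A generic sketch with made-up inputs (the parameter names and ranges here are purely illustrative, not from your actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs: distributions in place of point estimates.
cost_per_unit = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)   # dollars
effect_per_unit = rng.normal(loc=0.5, scale=0.15, size=n)            # impact units

# Propagate the uncertainty through the model.
cost_effectiveness = effect_per_unit / cost_per_unit

# Report an interval rather than a single number.
low, median, high = np.percentile(cost_effectiveness, [5, 50, 95])
print(f"5th pct: {low:.2f}, median: {median:.2f}, 95th pct: {high:.2f}")
```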
This is a great list of resources! One thing I’d add is that the effective animal advocacy space is pretty seriously funding constrained right now, and I don’t see any signs that the situation is likely to change in the next few years. For that reason, I think it’s worth calling out earning to give as a potentially uniquely promising path to impact. Animal Advocacy Careers had a good post on ETG for animals a few months ago.
Strong upvoted, I found this very interesting and I expect that quite a few people will find it helpful.
I don’t know if it’s been posted here before, but Scott Alexander has a detailed writeup on depression treatment that people may also find useful, including the order in which he often has his patients try medications.
I’m so grateful to everyone who wrote submissions for the EA criticism and red-teaming contest! I was really blown away by the number and quality of submissions.
In particular, I was super impressed by Froolow’s submission (someone needs to pay him whatever it costs to get him to come work for an EA org full-time!) and the work by the Happier Lives Institute.
I think Ozy Brennan’s response to this section was very good. To quote the relevant section (though I would encourage readers to read the whole piece, which also includes some footnotes):