I work as an engineer, donate 10% of my income, and occasionally enjoy doing independent research. I’m most interested in farmed animal welfare and the nitty-gritty details of global health and development work. In 2022, I was a co-winner of the GiveWell Change Our Mind Contest.
MHR
Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs
Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity
Improving EA Communication Surrounding Disability
The Case for Funding New Long-Term Randomized Controlled Trials of Deworming
Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish
Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House
An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis
I’m not a lawyer, but my understanding is that even informal agreements against headhunting other EA organizations’ employees would likely violate US antitrust law.
I share your sentiment, but I do think the choice of whom to center in those messages matters. To use a somewhat absurd example, if QUALY the lightbulb were caught robbing a bank, it would come across as tone-deaf to say “I would like to emphasize my wishes of wellness and care for QUALY, their bank-robbing crew, and the many others impacted by this.”
There’s a lot we don’t know at this point. Was FTX’s downfall the result of bad luck? Poor judgement? Bad practices? Especially without knowing more, I’d be wary of messages that center care for Sam and the FTX team over care for the retail investors and customers who may end up facing the worst hardship as a result of these events.
Also, I found the lack of discussion of animal welfare frustrating. That’s one of the three big cause areas within EA (or one of four if you count community building)!
I think there’s a good chance this basic point is right, but I’m not sure your takeaway from the Samotsvety forecast is correct. I think the 3–100 hours lost in expectation is based on current information about the risk. The Samotsvety forecast is that, conditional on a nuclear weapon being used in Ukraine, there is a ~2% chance of London being nuked. Their mean estimate of expected hours of life lost for someone who stays in London in that case is ~2000. That’s a substantial number of lost hours, and I can see it being rational to get to a safer location if those are the stakes.
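A minimal sketch of the expected-value arithmetic in that comment. The ~2% conditional probability and ~2000 expected hours are the figures quoted above; the probability that a nuclear weapon is used in Ukraine at all (`p_use`) is a hypothetical placeholder for illustration, not a Samotsvety number:

```python
# Figures from the comment above:
p_london_given_use = 0.02  # ~2% chance London is nuked, given nuclear use in Ukraine
hours_if_use = 2000        # mean expected hours of life lost if staying, given use

# Hypothetical placeholder (NOT from the forecast): overall chance a
# nuclear weapon is used in Ukraine at all.
p_use = 0.05

# Unconditional expected hours lost by staying in London right now:
unconditional_hours = p_use * hours_if_use
print(unconditional_hours)  # 100.0 -- at the top of the cited 3-100 hour range

# But conditional on a weapon being used, the stakes jump to ~2000 hours,
# which is why "leave if a weapon is used" can be rational even when the
# unconditional expected loss looks small.
```

The point of the sketch is just that the 3–100 hour figure and the ~2000 hour figure answer different questions: one is an unconditional expectation, the other is conditional on nuclear use.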
Very exciting to see this rolled out! I love the new recommendations page, and I’m thrilled that GWWC is taking the “evaluate the evaluators” mission seriously.

The one thing I’m confused by is the new GWWC funds. Don’t EA Funds already serve as the natural choice for donors who want high-impact giving opportunities within a particular cause area and don’t want to manually update their selections as recommendations change? Having a duplicate set of funds within Effective Ventures seems likely to add overhead and confusion without providing a clear benefit.

In trying to think through the potential benefits, I do see how having the GWWC funds would make it possible to stop recommending certain EA Funds in future years if you were to find issues with their grantmaking. But it seems like those kinds of issues could also be addressed by the EA Funds making changes in response to the GWWC research team’s findings. Unless there’s a clear justification for keeping them separate, having two competing sets of funds trying to do the same thing within EV seems like a potentially poor use of resources.
My apologies if this question has already been addressed elsewhere; I tried to look back through the previous announcement and AMA but may have missed some discussions.
This is pretty astounding. It seems consistent with all the other recent progress AI/ML systems have been making in playing competitive and cooperative strategy games, as well as in using language, but it’s still really impressive. My sense is that this is the kind of result you’d tend to see in a world with shorter rather than longer timelines.
As for my personal feelings on the matter, I think they’d best be summed up by this image.
I also watched the video and was also pleasantly surprised by how fair it ended up feeling.
For what it’s worth, I didn’t find the EA and systemic change section that interesting, but that might just be because it’s a critique I’ve spent time reading about previously. My guess is that most other forum readers won’t find much new in that section relative to existing discussions of the issue. Thorn also doesn’t mention tradeoffs or opportunity costs in making that critique, which makes it feel like it’s missing something important. For practical purposes, the systemic change argument she’s making requires arguing that it’s worth letting a substantial number of people die from preventable diseases (plus letting a substantial number of people suffer from lack of mental healthcare, letting a substantial number of animals be subject to terrible conditions on factory farms, etc.) in the short run in order to bring about systemic change that will do more to save and improve lives in the long run. It’s possible that’s right, but making that case requires a clear understanding of what those opportunity costs are and a justification for why they would be worth accepting.
Here’s a Twitter thread by Nathan Young with some further discussion and links to Manifold markets about the sale’s impact on the FTX Future Fund.
I think this is perfectly good to try, but I’m personally skeptical that it will end up being especially useful. My sense is that right now, there isn’t a shortage of frontpage content on the forum. Rather, there seems to often be a shortage of deep reading, engagement, and discussion when someone writes a long object-level post. I would be interested to see initiatives aimed at fostering that kind of deeper engagement with content, rather than at trying to get more frontpage posts.
It’s awesome that you’ve put this together, as I think this is really valuable information. Honestly, what surprises me most here is how similar all four organizations’ numbers are across most of the items involved.
As you pointed out, however, your use of the highest value in the range HLI considers for extending a life by a year definitely undersells how different HLI is from the others. I think it would be better to explicitly show both endpoints of HLI’s range, which includes negative values at the low end. Without that, I worry that readers who aren’t already familiar with HLI’s work would come away with an incorrect impression of HLI’s views.
(Note that this comment is quick and not super well thought out. I hope to research and think about it more deeply at some point in the future, and maybe write it up in a better form).
As with many articles critical of EA, this article spends a while arguing against the early EA focus on earning to give:
To that end, I heard an EA-sympathetic graduate student explaining to a law student that she shouldn’t be a public defender, because it would be morally more beneficial for her to work at a large corporate law firm and donate most of her salary to an anti-malaria charity. The argument he made was that if she didn’t become a public defender, someone else would fill the post, but if she didn’t take the position as a Wall Street lawyer, the person who did take it probably wouldn’t donate their income to charity, thus by taking the public defender job instead of the Wall Street job she was essentially murdering the people whose lives she could have saved by donating a Wall Street income to charity.1
...
MacAskill wrote a moral philosophy paper arguing that even if we “suppose that the typical petrochemical company harms others by adding to the overall production of CO2 and thereby speeding up anthropogenic climate change” (a thing we do not need to “suppose”), if working for one would be “more lucrative” than any other career, “thereby enabling [a person] to donate more” then “the fact that she would be working for a company that harms others through producing CO2” wouldn’t be “a reason against her pursuing that career” since it “only makes others worse off if more CO2 is produced as a result of her working in that job than as a result of her replacement working in that job.” (You can of course see here the basic outlines of an EA argument in favor of becoming a concentration camp guard, if doing so was lucrative and someone else would take the job if you didn’t. But MacAskill says that concentration camp guards are “reprehensible” while it is merely “morally controversial” to take jobs like working for the fossil fuel industry, the arms industry, or making money “speculating on wheat, thereby increasing price volatility and disrupting the livelihoods of the global poor.” It remains unclear how one draws the line between “reprehensibly” causing other people’s deaths and merely “controversially” causing them.)4
It’s a little frustrating to me that EA orgs and public figures have basically conceded this argument and tend to shy away from actively defending earning to give as a standard EA path. I think the utilitarian argument the quoted graduate student was making is basically correct (with the caveat that one needs to properly account for one’s career decision marginally affecting salaries in the field, and for whether one is likely to be a more effective worker than the person one displaces).

On the flip side, I don’t think the deontological argument NJR is making holds up well under scrutiny. Current Affairs is a print magazine; printing and mailing thousands of copies every month contributes to resource usage and climate change. NJR is presumably okay with this because he thinks the benefits of educating and informing his readership exceed the harms of that resource usage. In the same way, working in a job that produces some harms can be okay if the net benefits of donating one’s income substantially outweigh those harms. This gets even more stark when you try to actually think through the human scale of it all. Imagine having to tell ten thousand parents that the reason their kids won’t get anti-malaria pills this year is that your working as a stock trader would have violated the categorical imperative. It sounds absurd, but that’s the kind of thing we’re talking about here.
Something I do think NJR and I would agree on is that it’s really screwed up that the world is in this situation to start with. There’s something deeply unjust about a random American lawyer getting to decide whether people die from malaria based on their career and donation decisions. But we can’t wave a magic wand and change that overnight. And choosing to focus only on efforts to create systemic change means not getting lifesaving medicine to a ton of people who need it right now. I wish critics engaged more deeply with those really hard tradeoffs, and that EAs did a better job of articulating them. Trying to sidestep the conversation about earning to give really undersells the moral challenge and stakes we’re dealing with.
This is fantastic! Thanks for publishing this update, as well as for all the work you’ve done over the past two years. I’ve been very impressed at how well SWP has done at securing commitments and creating partnerships within the industry. It seems like you all have a very strong and potentially very cost-effective plan for next steps, and I’m excited to dig further into the latest research updates.
A few questions:
The BOTEC you published a few months ago estimated a cost-effectiveness of about 4000 shrimp stunned per dollar per year. Can you talk about some of the factors that led to the updated estimate of 1500? Does the 1500 shrimp/$/year figure account for more of the overhead costs associated with the stunners program?
The ASC consultation document uses the acronym UoC a bunch. Can you explain what that means?
Is there a video of the panel discussion from the Global Shrimp Forum?
I also continue to be surprised that there hasn’t been more effort within the alt protein space on cultivated or plant-based shrimp paste. As you noted in the alternative shrimp report, shrimp paste seems like not only a huge market, but also one of the easier animal products to replicate from a taste and texture perspective. That might be a really promising area for other orgs to focus on.
I think Ozy Brennan’s response to this section was very good. To quote the relevant section (though I would encourage readers to read the whole piece, which also includes some footnotes):