Hm, I’m not sure how I would have read this if it had been your original wording, but in context it still feels like an effort to slightly spin my claims to make them more convenient for your critique. So for now I’m just gonna reference back to my original post—the language therein (including the title) is what I currently endorse.
I’m concerned about that dynamic too and think it’s important to keep in mind, especially in the general case of researchers’ intuitions tending to bias their work, even when attempting objectivity. However, I’m also concerned about the dismissal of results like RP’s welfare ranges on the basis of speculation about the researchers’ priors and/or the counterintuitive conclusions, rather than on the merits of the analyses themselves.
Thanks, Jeff! This helps a lot, though ideally a summary of my conclusions would acknowledge the tentativeness/uncertainty thereof, as I aim to do in the post (perhaps, “concludes that things may be bad and getting worse”).
I strongly object to the (Edit: previous) statement that my post “concludes that human extinction would be a very good thing”. I do not endorse this claim and think it’s a grave misconstrual of my analysis. My findings are highly uncertain, and, as Peter mentions, there are many potential reasons for believing human extinction would be bad even if my conclusions in the post were much more robust (e.g. lock-in effects, to name a particularly salient one).
I’m skeptical of anchoring on people’s initial intuitions about cross-species tradeoffs as a default for moral weights, as there are strong reasons to expect that those intuitions are inappropriately biased. The weights I use are far from perfect and are not robust enough to allow confident conclusions to be drawn, but I do think they’re the best ones available for this kind of analysis by a decent margin.
Agreed! While I do think there’s value in looking just at humans and farmed animals given the current state of available data and welfare analysis, a major hope of mine for this work is that it might inspire more comprehensive and more rigorous models that include wild animals.
The sign of the conclusion would be the same (though significantly weaker) even if you ignore shrimp entirely, provided all other assumptions are held constant. That said, the final numbers are indeed quite sensitive to the moral weights, particularly those of chickens, shrimp, and fish as the most abundant nonhumans.
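To make that sensitivity claim concrete, here's a minimal sketch of the kind of check I mean, assuming a simple aggregate-welfare model (population × average welfare × moral weight, summed across species). All species lists, counts, welfare scores, and weights below are hypothetical placeholders, not the figures from the post:

```python
# Minimal sensitivity sketch. All numbers are placeholders, NOT the post's figures.

def total_welfare(populations, welfare_per_capita, moral_weights):
    """Sum of population * average welfare * moral weight across species."""
    return sum(
        populations[s] * welfare_per_capita[s] * moral_weights[s]
        for s in populations
    )

populations = {"human": 8e9, "chicken": 2.5e10, "fish": 1e11, "shrimp": 4e11}        # placeholder counts
welfare_per_capita = {"human": 0.5, "chicken": -0.4, "fish": -0.2, "shrimp": -0.05}  # placeholder scores on [-1, 1]
moral_weights = {"human": 1.0, "chicken": 0.3, "fish": 0.1, "shrimp": 0.03}          # placeholder weights

baseline = total_welfare(populations, welfare_per_capita, moral_weights)

# Rerun with shrimp zeroed out to check whether the sign of the total flips.
no_shrimp = {**moral_weights, "shrimp": 0.0}
print(baseline, total_welfare(populations, welfare_per_capita, no_shrimp))
```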
I agree re: the value of both a function-based version that would allow folks to put in their own weights/assumptions, and a version that explicitly considers uncertainty. I don’t have plans to build these out myself, but might reconsider if there’s sufficient interest, and in any case would be happy to support someone else in doing so.
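For anyone considering building this, here's a rough sketch of what I have in mind, under assumed structure and parameter names (nothing here reflects the actual model's code): a function that accepts user-supplied weight ranges and propagates the uncertainty by Monte Carlo sampling rather than using fixed point estimates.

```python
import random

# Hypothetical "bring your own weights" interface with explicit uncertainty:
# moral weights are sampled from user-supplied (low, high) ranges.
# All inputs below are illustrative placeholders.

def sample_total_welfare(populations, welfare_per_capita, weight_ranges, n=10_000):
    """Monte Carlo distribution of total welfare given (low, high) weight ranges."""
    totals = []
    for _ in range(n):
        weights = {s: random.uniform(lo, hi) for s, (lo, hi) in weight_ranges.items()}
        totals.append(sum(populations[s] * welfare_per_capita[s] * weights[s]
                          for s in populations))
    return totals

totals = sample_total_welfare(
    populations={"human": 8e9, "chicken": 2.5e10},
    welfare_per_capita={"human": 0.5, "chicken": -0.4},
    weight_ranges={"human": (1.0, 1.0), "chicken": (0.05, 0.5)},
)
share_negative = sum(t < 0 for t in totals) / len(totals)
print(f"Fraction of samples with negative total welfare: {share_negative:.2f}")
```

Reporting the fraction of samples with each sign, rather than a single number, would surface how much the headline conclusion depends on where one's weights fall within plausible ranges.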
Thanks for the kind words! I’m also skeptical of putting too much weight on the conclusions given the huge uncertainties, which I hope comes across in the post.
Re: underlying code—I’m working on a sharable version. Just sent you a DM!
Thank you for flagging this, Laura! I’ve edited the definition to correct the misstatement.
Wow, I’m thrilled about this! I’ve been wondering recently why EA “Campus Centres” aren’t more of a thing, and am delighted to see a big push in that direction. Thank you for an excellent plan and write-up!
What is your process for identifying and prioritizing new research questions? And what percentage of your work is going toward internal top priorities vs. commissioned projects?
A few things that jump to mind:
Data on the development of EA-related fields (e.g. growth of AI safety/alignment as an academic discipline, including things like funding, number of publications, number of faculty/graduate students, etc.)
Data on the history of philanthropy (e.g. how much have private philanthropists spent over the years, and on what?)
I’m still confused by the perceived need to state this in a way that’s stronger than my chosen wording. I used “may” when presenting the top-line conclusions because the analysis is rough/preliminary, incomplete, and predicated on a long list of assumptions. I felt it was appropriate to express this degree of uncertainty when making my claims in the post, and I think that this becomes all the more important when summarizing the conclusions in other contexts without mention of the underlying assumptions and other caveats.