I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo🔸
Thanks, Wladimir. That makes sense. I look forward to your future work on this. Let me know if funding ever becomes a bottleneck, in which case I may want to help with a few k$.
Here is the crosspost on the EA Forum. Rob preferred that I share it myself.
The critical question is whether shrimp or insects can support the kinds of negative states that make suffering severe, rather than merely possible.
I think suffering matters proportionally to its intensity. So I would not neglect mild suffering in principle, although it may not matter much in practice due to contributing little to total expected suffering.
In any case, I would agree the total expected welfare of farmed invertebrates may be tiny compared with that of humans due to invertebrates' experiences having a very low intensity. For expected individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate that the expected total welfare of farmed shrimps ranges from −0.282 to −2.82*10^-7 times that of humans, and that of farmed black soldier fly (BSF) larvae and mealworms from −4.80*10^-4 to −6.23*10^-11 times that of humans. In addition, I calculate the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans.
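The scaling assumption above can be sketched as follows. The neuron counts are illustrative orders of magnitude only (the human figure is the widely cited ~86 billion; the shrimp figure is a hypothetical ~10^5), not the inputs behind my estimates:

```python
# Sketch of the power-law scaling assumption. Neuron counts are
# illustrative; they are not the inputs behind the estimates above.
def welfare_weight(neurons: float, exponent: float) -> float:
    """Expected individual welfare per fully-healthy-animal-year,
    up to a species-independent constant."""
    return neurons ** exponent

HUMAN_NEURONS = 86e9   # widely cited estimate for humans
SHRIMP_NEURONS = 1e5   # hypothetical order of magnitude for a shrimp

for exponent in (0.5, 1.0, 1.5):
    ratio = welfare_weight(SHRIMP_NEURONS, exponent) / welfare_weight(HUMAN_NEURONS, exponent)
    print(f"exponent = {exponent}: shrimp/human weight ratio = {ratio:.2e}")
```

The ratio spans roughly 6 orders of magnitude across the exponent range, which is why the totals above span such a wide range.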
And even granting the usual EA filters (tractability, neglectedness, feasibility, and evidential robustness), the scale gradient from shrimp to insects (via agriculture-related deaths) is so steep that these filters don't, by themselves, explain why the precautionary logic should settle on shrimp. All else equal, once you shift to a target that is thousands of times larger, an intervention could be far less effective [in terms of robustly increasing welfare in expectation] and still compete on expected impact.
I very much agree. Moreover, I do not even know whether electrically stunning farmed shrimps increases or decreases welfare due to effects on soil animals and microorganisms.
Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such a level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination.
I agree it is good that no human has power over all humans. However, I still think one being dominating all others has a probability lower than 0.001 % over the next 10 years. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that is good for both of us under our own views?
Hi Guy. Elon Musk was not the only person responsible for the recent large cuts in foreign aid from the United States (US). In addition, I believe outcomes like human extinction are way less likely. I agree it makes sense to worry about concentration of power, but not about extreme outcomes like human extinction.
Thanks for sharing, James.
Thanks for the relevant post, Wladimir and Cynthia. I strongly upvoted it. Do you have any practical ideas about how to apply the Sentience Bargain framework to compare welfare across species? I would be curious to know your thoughts on Rethink Priorities' (RP's) research agenda on valuing impacts across species.
Thanks for the great post, Lukas. I strongly upvoted it. I also agree with your concluding thoughts and implications.
Thank you all for the very interesting discussion.
I think addressing the greatest sources of suffering is a promising approach to robustly increase welfare. However, I believe the focus should be on the greatest sources of suffering in the ecosystem, not in any given population, such that effects on non-target organisms can be neglected. Electrically stunning farmed shrimps arguably addresses one of the greatest sources of suffering of farmed shrimps, and the ratio between its effects on target and non-target organisms is much larger than for the vast majority of interventions, but I still do not know whether it increases or decreases welfare (even in expectation) due to potentially dominant effects on soil animals and microorganisms.
I expect the greatest sources of suffering in the ecosystem to be found in the organisms accounting for the most suffering in the ecosystem. However, I would say much more research on comparing welfare across species is needed to identify such organisms. I can see them being vertebrates, invertebrates, trees, or microorganisms.
I worry very specific unrealistic conditions will be needed to ensure the effects on non-target organisms can be neglected if it is not known which organisms account for the most suffering in the ecosystem. So I would prioritise research on comparing welfare across species over mapping sources of suffering in ecosystems.
Thanks, Zoë. I see funders are the ones deciding what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob's book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP's research agenda on valuing impacts across species?
Q: Do Senterra Funders staff decide how funders make grant decisions?
A: No, each Senterra member maintains full autonomy over their grantmaking. Some Senterra members seek Senterra's philanthropic advising, in which Senterra staff conduct research and make recommendations specific to the donor's interests. Some Senterra members engage in collaborative grantmaking facilitated by Senterra staff. Ultimately, it's up to each member to decide how and where to give.
Thanks for the great post, Srdjan. I strongly upvoted it.
Fair point, Nick. I would just keep in mind there may be very different types of digital minds, and some types may not speak any human language. We can more easily understand chimps than shrimps. In addition, the types of digital minds driving the expected total welfare might not speak any human language. I think there is a case for keeping an eye out for something like digital soil animals or microorganisms, by which I mean simple AI agents or algorithms, at least for people caring about invertebrate welfare. On the other end of the spectrum, I am also open to just a few planet-size digital beings being the driver of expected total welfare.
Thanks for the post, Noah. I strongly upvoted it.
5. How much total welfare capacity might digital minds have relative to humans/other animals
a. Related questions include: the estimated scale of digital minds, moral weights-esque projects, which part of the model would have moral weight.
I think this is a very important uncertainty. Discussions of digital minds overwhelmingly focus on the number of individuals, and probability of consciousness or sentience. However, one has to multiply these factors by the expected individual welfare per year conditional on consciousness or sentience to get the expected total welfare per year. I believe this should eventually be determined for different types of digital minds because there could be huge differences in their expected individual welfare per year. I did this for biological organisms assuming expected individual welfare per fully-healthy-organism-year proportional to "individual number of neurons"^"exponent", and to "energy consumption per unit time at rest [basal metabolic rate (BMR)] at 25 ºC"^"exponent", and found potentially super large differences in the expected total welfare per year.
I think much more work on welfare comparisons across species is needed to conclude which interventions robustly increase welfare. I do not know about any intervention which robustly increases welfare due to potentially dominant uncertain effects on soil animals and microorganisms. I suspect work on welfare comparisons across different digital minds will be important for the same reason.
In a 2019 report from Rethink Priorities (though it could be very different now for various reasons), Saulius Simcikas found that, for each $1 spent on corporate campaigns, 9 to 120 years of chicken lives could be affected (excluding indirect effects, which could be very important too).
Animal Charity Evaluators (ACE) estimated The Humane League's (THL's) work targeting layers in 2024 helped 11 layers per $. The Welfare Footprint Institute (WFI) assumes layers have a lifespan of "60 to 80 weeks for all systems", around 1.34 chicken-years (= (60 + 80)/2*7/365.25). So I estimate THL's work targeting layers in 2024 improved 14.8 chicken-years per $ (= 11*1.34), which is close to the lower bound from Saulius you mention above.
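As a check on the arithmetic, using the figures in the comment (ACE's 11 layers per $ and WFI's 60 to 80 week lifespan):

```python
# Reproducing the arithmetic: layer lifespan and chicken-years improved per $.
weeks_low, weeks_high = 60, 80          # WFI's lifespan range for layers
lifespan_years = (weeks_low + weeks_high) / 2 * 7 / 365.25
layers_per_dollar = 11                  # ACE's estimate for THL's 2024 work
chicken_years_per_dollar = layers_per_dollar * lifespan_years

print(f"{lifespan_years:.2f} chicken-years per layer")        # ≈ 1.34
print(f"{chicken_years_per_dollar:.1f} chicken-years per $")  # ≈ 14.8
```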
Thanks for sharing, Kevin and Max. Are you planning to do any cost-effectiveness analyses (CEAs) to assess potential grants? I may help with these for free if you are interested.
Global wealth would have to increase a lot for everyone to become a billionaire. There are 10 billion people. So everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9) for perfect distribution. Global wealth is 600 T$. So it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, it would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)). For a growth of 30 %/year, it would take 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
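The steps above can be sketched as:

```python
import math

# How long until global wealth suffices for everyone to be a billionaire,
# using the figures in the comment (10 billion people, 600 T$ of wealth).
people = 10e9                          # 10 billion people
required_wealth = people * 1e9         # 10^19 $ for perfect distribution
current_wealth = 600e12                # global wealth of 600 T$
ratio = required_wealth / current_wealth   # ≈ 16.7 k

def years_to_reach(growth_rate: float) -> float:
    """Years for wealth to grow by 'ratio' at a constant annual growth rate."""
    return math.log(ratio) / math.log(1 + growth_rate)

print(f"{ratio:.3g} times as large")                     # ≈ 1.67e4
print(f"{years_to_reach(0.10):.0f} years at 10 %/year")  # ≈ 102
print(f"{years_to_reach(0.30):.1f} years at 30 %/year")  # ≈ 37.1
```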
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived; would a lab accept?"
When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is or will be worth more than 3 billion $ depending on how it is defined.
With your bets about timelines: I did an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I am not sure which side of the "confident about short timelines" bet you expect me to take.
I was guessing I would have longer timelines. What is your median date of superintelligent AI as defined by Metaculus?
Agreed, Ben. I encouraged Rob to crosspost it on the EA Forum. Thanks to your comment, I just set up a reminder to ping him again in 7 days in case he has not replied by then.
Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.
Thanks for the reply.
Right. There was a weight of 45 % on a ratio of 7.06, and of 55 % on one of 62.8 k (= 3.44*10^6/54.8), 8.90 k (= 62.8*10^3/7.06) times as much. My explanation for the large difference is that very little can be inferred about the intensity of excruciating pain, as defined by the Welfare Footprint Institute (WFI), from the academic studies AIM analysed to derive the pain intensities linked to the lower ratio.
The study is not relevant for assessing excruciating pain? Excruciating pain is "not normally tolerated even if only for a few seconds". Here is the clarification of what this means from Cynthia Schuck, WFI's scientific director.
I doubt the women in the study would prefer 2 h of "severe burning in large areas of the body, dismemberment, or extreme torture" over 18 h of a "1/10 pain". I believe excruciating pain is way more intense than their "9/10 pain" assuming they are indifferent between 2 h of "9/10 pain" and 18 h of "1/10 pain".
From the study, "In the questionnaire, the intensity of pain was evaluated using an NRS 0–10, with 0 = no pain and 10 = worst pain imaginable". However, this does not imply the women's "10/10 pain" was excruciating. I guess their "10/10 pain" was disabling as defined by WFI.
I think point estimates like AIM's SADs derived from aggregating such different results have very little robustness. The takeaway for me is that we have basically no idea about which pain intensities to use when they differ so much. I believe this calls for a more robust estimation method and input research, not for aggregating more widely different results, although I still expect large uncertainty will remain (just not so large).
@vicky_cox, has AIM considered commissioning surveys asking random people, people who regularly experience disabling pain, and people who have experienced excruciating pain about how they trade off WFI's pain and pleasure categories? I believe Rethink Priorities' (RP's) surveys and data analysis team would be a good fit to run such surveys. @Vince Mak 🔸, has ACE considered commissioning such surveys?