I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo
Thanks for the great post, Lukas. I strongly upvoted it. I also agree with your concluding thoughts and implications.
Thank you all for the very interesting discussion.
I think addressing the greatest sources of suffering is a promising approach to robustly increase welfare. However, I believe the focus should be on the greatest sources of suffering in the ecosystem, not in any given population, such that effects on non-target organisms can be neglected. Electrically stunning farmed shrimps arguably addresses one of the greatest sources of suffering of farmed shrimps, and the ratio between its effects on target and non-target organisms is much larger than for the vast majority of interventions, but I still do not know whether it increases or decreases welfare (even in expectation) due to potentially dominant effects on soil animals and microorganisms.
I expect the greatest sources of suffering in the ecosystem to be found in the organisms accounting for the most suffering in the ecosystem. However, I would say much more research on comparing welfare across species is needed to identify such organisms. I can see them being vertebrates, invertebrates, trees, or microorganisms.
I worry very specific unrealistic conditions will be needed to ensure the effects on non-target organisms can be neglected if it is not known which organisms account for the most suffering in the ecosystem. So I would prioritise research on comparing welfare across species over mapping sources of suffering in ecosystems.
Thanks, Zoë. I see funders are the ones deciding what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob's book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP's research agenda on valuing impacts across species?
Q: Do Senterra Funders staff decide how funders make grant decisions?
A: No, each Senterra member maintains full autonomy over their grantmaking. Some Senterra members seek Senterra's philanthropic advising, in which Senterra staff conduct research and make recommendations specific to the donor's interests. Some Senterra members engage in collaborative grantmaking facilitated by Senterra staff. Ultimately, it's up to each member to decide how and where to give.
Thanks for the great post, Srdjan. I strongly upvoted it.
Fair point, Nick. I would just keep in mind there may be very different types of digital minds, and some types may not speak any human language. We can more easily understand chimps than shrimps. In addition, the types of digital minds driving the expected total welfare might not speak any human language. I think there is a case for keeping an eye out for something like digital soil animals or microorganisms, by which I mean simple AI agents or algorithms, at least for people caring about invertebrate welfare. On the other end of the spectrum, I am also open to just a few planet-size digital beings being the driver of expected total welfare.
Thanks for the post, Noah. I strongly upvoted it.
5. How much total welfare capacity might digital minds have relative to humans/other animals
a. Related questions include: the estimated scale of digital minds, moral weights-esque projects, which part of the model would have moral weight.
I think this is a very important uncertainty. Discussions of digital minds overwhelmingly focus on the number of individuals, and the probability of consciousness or sentience. However, one has to multiply these factors by the expected individual welfare per year conditional on consciousness or sentience to get the expected total welfare per year. I believe this should eventually be determined for different types of digital minds because there could be huge differences in their expected individual welfare per year. I did this for biological organisms assuming expected individual welfare per fully-healthy-organism-year proportional to "individual number of neurons"^"exponent", and to "energy consumption per unit time at rest [basal metabolic rate (BMR)] at 25 ºC"^"exponent", and found potentially super large differences in the expected total welfare per year.
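To make the multiplication explicit, here is a minimal Python sketch of the decomposition; all the numbers in it are placeholders for illustration, not estimates.

```python
# Expected total welfare/year = "number of individuals"
# * "probability of sentience"
# * "expected individual welfare/year conditional on sentience".
# Individual welfare conditional on sentience is assumed proportional to
# ("individual number of neurons"/"human neurons")^"exponent", with humans
# normalised to 1. All inputs below are placeholders, not estimates.

def expected_total_welfare(n_individuals, p_sentience, neurons, exponent,
                           human_neurons=86e9):
    individual_welfare = (neurons / human_neurons) ** exponent
    return n_individuals * p_sentience * individual_welfare

# The same hypothetical population under different exponents.
for exponent in (0.5, 1.0, 1.5):
    print(exponent, expected_total_welfare(1e12, 0.2, 1e5, exponent))
```

In this example, varying the exponent from 0.5 to 1.5 changes the result by around 6 orders of magnitude, which is why I say the differences can be super large.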
I think much more work on welfare comparisons across species is needed to conclude which interventions robustly increase welfare. I do not know about any intervention which robustly increases welfare due to potentially dominant uncertain effects on soil animals and microorganisms. I suspect work on welfare comparisons across different digital minds will be important for the same reason.
In a 2019 report from Rethink Priorities (though the numbers could be very different now for various reasons), Saulius Simcikas found that, for every $1 spent on corporate campaigns, 9 to 120 years of chicken lives could be affected (excluding indirect effects, which could be very important too).
Animal Charity Evaluators (ACE) estimated The Humane League's (THL) work targeting layers in 2024 helped 11 layers per $. The Welfare Footprint Institute (WFI) assumes layers have a lifespan of "60 to 80 weeks for all systems", around 1.34 chicken-years (= (60 + 80)/2*7/365.25). So I estimate THL's work targeting layers in 2024 improved 14.8 chicken-years per $ (= 11*1.34), which is close to the lower bound from Saulius you mention above.
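For transparency, a minimal Python sketch of the arithmetic above; 11 layers per $ is ACE's estimate, and 60 to 80 weeks is WFI's assumed lifespan.

```python
# Chicken-years improved per $ by THL's work targeting layers in 2024.
layers_per_dollar = 11                       # ACE's estimate
lifespan_years = (60 + 80) / 2 * 7 / 365.25  # WFI's 60 to 80 weeks, ~1.34
print(f"{layers_per_dollar * lifespan_years:.1f} chicken-years per $")  # 14.8
```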
Thanks for sharing, Kevin and Max. Are you planning to do any cost-effectiveness analyses (CEAs) to assess potential grants? I may help with these for free if you are interested.
Global wealth would have to increase a lot for everyone to become a billionaire. For 10 billion people, everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9) under a perfectly equal distribution. Global wealth is 600 T$, so it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, that would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)); for a growth of 30 %/year, 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
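The same arithmetic in a minimal Python sketch, using the rough figures above (10 billion people, and 600 T$ of global wealth).

```python
import math

population = 10e9        # people
wealth_per_person = 1e9  # $ needed for each person to be a billionaire
global_wealth = 600e12   # rough current global wealth ($)

growth_factor = population * wealth_per_person / global_wealth  # ~16.7 k
for annual_growth in (0.10, 0.30):
    years = math.log(growth_factor) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%}/year: {years:.1f} years")  # 102.0 and 37.1
```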
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived; would a lab accept?"
When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is or will be worth more than 3 billion $, depending on how it is defined.
With your bets about timelines: I did an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I am not sure which side of "confident about short timelines" you expect me to take.
I was guessing I would have longer timelines. What is your median date of superintelligent AI as defined by Metaculus?
Agreed, Ben. I encouraged Rob to crosspost it on the EA Forum. Thanks to your comment, I just set up a reminder to ping him again in 7 days in case he has not replied by then.
Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.
Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that is good for both of us under our own views, considering we could invest our money and you could take loans?
The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for the most production of farmed shrimp, is far from strong.
Thanks for sharing, Ben. I like the concept. Do you have a target total (time and financial) cost? I wonder what the ideal ratio is between the total amount granted and the cost for grants of "$670 to $3300".
Nice post, Alex.
Sometimes when there's a lot of self-doubt it's not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful: just separating myself from my thoughts, so rather than saying "I don't know enough" I say "I'm having the thought that I don't know enough." I don't have to believe or argue with the thought, I can just acknowledge it and return to what I'm doing.
Clearer Thinking has launched a program for learning cognitive defusion.
Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_av/(P_av - P_0) = 1/(1 - P_0/P_av). In this model, super productive employees becoming more productive would not increase their cost-effectiveness; it would just make them more impactful. For your parameters, employing an infinitely productive employee would be 3 (= 1/(1 - 100/150)) times as cost-effective as employing random employees.
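A minimal Python sketch of this model to check the limit; the function names are mine.

```python
def relative_impact(n, p_av, p_0):
    # Impact of an employee n times as productive as random employees, as a
    # multiple of a random employee's impact. Impact is assumed linear in
    # productivity and 0 at productivity p_0.
    return (n * p_av - p_0) / (p_av - p_0)

def relative_cost_effectiveness(n, p_av, p_0):
    # Cost is assumed proportional to productivity, so it scales with n.
    return relative_impact(n, p_av, p_0) / n

p_0, p_av = 100, 150  # your parameters
for n in (1, 2, 10, 1e9):  # 1e9 approximates infinite productivity
    print(n, relative_cost_effectiveness(n, p_av, p_0))
# Tends to p_av/(p_av - p_0) = 1/(1 - p_0/p_av) = 3 as n grows.
```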
Thanks for the relevant comment, Nick.
2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of the thought processes and research done by animal welfare folks.
I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate the absolute value of the total welfare of farmed shrimps ranges from 2.82*10^-7 to 0.282 times that of humans. In addition, I calculate the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. Moreover, I have no idea whether HSI or GiveWell's top charities increase or decrease welfare accounting for effects on soil animals and microorganisms.
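A minimal Python sketch of the last comparison; 0.0123 is the cost-effectiveness I assume for GiveWell's top charities, and 2.06*10^-5 and 20.6 are my lower and upper bounds for HSI, all in the units of my analysis.

```python
givewell_top_charities = 0.0123  # assumed cost-effectiveness (my units)
hsi_bounds = (2.06e-5, 20.6)     # my lower and upper bounds for HSI
for hsi in hsi_bounds:
    ratio = hsi / givewell_top_charities
    print(f"HSI is {ratio:.3g} times as cost-effective")  # 0.00167 and 1.67e+03
```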
Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power plants fail). That seems to imply moving forward with a lot of caution.
Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.
Hi David!
Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.
Would the dinosaurs have argued their extinction would be bad, although it may well have contributed to the emergence of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?
But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.
Why? It could be that future value is not exactly the same whether or not AI takes over by a given date, but that the long-term difference in value is negligible.
Thanks for the relevant post, Wladimir and Cynthia. I strongly upvoted it. Do you have any practical ideas about how to apply the Sentience Bargain framework to compare welfare across species? I would be curious to know your thoughts on Rethink Priorities' (RP's) research agenda on valuing impacts across species.