I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Thanks for the comment, Noah.
I am looking forward to the results of RP's Digital Consciousness Project, but I do not expect any significant updates to my views. It focusses on the probability of consciousness, but I think this says very little about the (expected hedonistic) welfare per unit time. This is because I believe there is much more uncertainty in the welfare per unit time conditional on consciousness than in the probability of consciousness.
I suspect the number of digital minds is not among the most relevant parameters to track. It may not be proportional to total digital welfare because more digital minds will tend to have less individual welfare per unit time. I would focus more on the digital welfare per FLOP, and FLOPs per year.
I am also interested in more comparisons between the promise of increasing biological and digital welfare. Nice to know you are working on a piece!
Fair! What they mean is closer to "AI as a more normal technology than many predict". Somewhat relatedly, I liked the post Common Ground between AI 2027 & AI as Normal Technology.
For my assumed human performance of 10^15 FLOP/s, and human basal metabolic power of 80 W, the computational energy efficiency (operations per unit energy) of humans is 12.5 k GFLOP/J (= 10^15/80). This is 5.00 times (= 12.5*10^3/(2.5*10^3)) that of NVIDIA B100 (released on 15 November 2024), which is the ML hardware on Epoch AI's database with the highest computational energy efficiency. Epoch AI estimates the computational energy efficiency of ML hardware has increased 30 % per year, as is illustrated below. At this rate, ML hardware will reach the computational energy efficiency of humans in 6.13 years (= LN(5.00)/LN(1 + 0.3)). As a result, digital welfare per unit energy consumption will soon be similar to human welfare per unit energy consumption if digital welfare per FLOP is similar to human welfare per FLOP.
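For reference, here is a minimal Python sketch of the arithmetic above; the inputs (human performance, basal metabolic power, B100 efficiency, and the 30 %/year growth rate) are the figures quoted in this comment, not independently verified.

```python
from math import log

# Assumed inputs, taken from the comment above.
human_flop_per_s = 1e15       # assumed human performance (FLOP/s)
human_power_w = 80            # human basal metabolic power (W)
b100_gflop_per_j = 2.5e3      # NVIDIA B100 energy efficiency (GFLOP/J), per Epoch AI
annual_growth = 0.30          # Epoch AI's estimated yearly growth in energy efficiency

human_gflop_per_j = human_flop_per_s / human_power_w / 1e9   # 12.5 k GFLOP/J
ratio = human_gflop_per_j / b100_gflop_per_j                 # 5.00
years_to_parity = log(ratio) / log(1 + annual_growth)        # 6.13 years

print(human_gflop_per_j, ratio, years_to_parity)
```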
Computational energy efficiency tends to increase with performance, as is illustrated by the data below collected by Epoch AI. I assume organisms with a smaller individual mass tend to have lower performance. So I think humans have a higher computational energy efficiency than other animals, and that animals have a higher one than microorganisms.
Thanks for the questions, Huw!
In your opinion, if C. Elegans is conscious and has some moral significance, and suppose we could hypothetically train artificial neural networks to simulate a C. Elegans, would the resulting simulation have moral significance?
I would say the moral significance, which for me is the expected hedonistic welfare per unit time, of the simulation would tend to that of the C. elegans as more of its components were accurately simulated. I do not think perfectly simulating the behaviour is enough for the moral significance of the simulation to match that of the C. elegans. I believe simulating some of the underlying mechanisms that produced the behaviour may also be relevant, as Anil Seth discussed on The 80,000 Hours Podcast.
Consciousness does not necessarily imply valenced (positive or negative) subjective experiences (sentience), which is what I care about (I strongly endorse hedonism). C. elegans being conscious with 100 % probability would update me towards them having a greater probability of being sentient, but not that much. I am mostly uncertain about their expected hedonistic welfare per unit time conditional on sentience, not about their probability of sentience. I would say everything, including a Planck volume in deep space vacuum, could have a probability of sentience of more than, for example, 1 % if it is operationalised in a very inclusive way. However, more inclusive operationalisations of sentience will lead to a smaller expected hedonistic welfare per unit time conditional on sentience. So I would like discussions of moral significance to focus on the expected hedonistic welfare per unit time instead of just the probability of sentience, or just the expected hedonistic welfare per unit time conditional on sentience.
If so, what other consequences flow from this? Do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
I think increasing the welfare of soil animals will remain much more cost-effective than increasing digital welfare. Assuming digital welfare per FLOP is equal to the welfare per FLOP of a fully healthy human, I calculate the price-performance of digital systems has to surpass 2.23*10^27 FLOP/$ for increasing digital welfare to be more cost-effective than increasing the welfare of soil animals, which corresponds to doubling more than 29.0 times starting from the highest price-performance on 9 November 2023. This would take 60.9 years at Epoch AI's doubling time of 2.1 years for the FP32 price-performance of machine learning (ML) hardware from 2006 to 2023.
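As a rough check on the timeline above, here is a minimal Python sketch; the threshold, number of doublings, and doubling time are the figures quoted in this comment, taken as given.

```python
from math import log2

# Assumed inputs, taken from the comment above.
threshold_flop_per_dollar = 2.23e27   # price-performance needed for parity (FLOP/$)
doublings_needed = 29.0               # doublings from the highest price-performance on 9 November 2023
doubling_time_years = 2.1             # Epoch AI's FP32 price-performance doubling time (2006 to 2023)

years_needed = doublings_needed * doubling_time_years             # 60.9 years
implied_start = threshold_flop_per_dollar / 2**doublings_needed   # ~4.2*10^18 FLOP/$ implied starting point

print(years_needed, implied_start)
```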
Thanks, Daniel!
Thanks for the evaluations, Aidan! I think this project is very valuable.
Welcome to the EA Forum, Shloka! Thanks for the great comment. I strongly upvoted it.
I have not looked into the effects of land use change on different groups of nematodes. From Table 1 of van den Hoogen et al. (2019), which is below, the most abundant soil nematodes are bacterivores and herbivores, so I speculate effects on these are the most important. However, I agree a given land use change may increase the welfare of nematodes of a given type, but decrease that of ones of a different type. This strengthens my conclusion that the priority is research informing how to increase the welfare of soil animals, not pursuing whatever land use change interventions naively seem to achieve that the most cost-effectively.
Hi Abraham.
I think the opportunity for impact for wild animal welfare is way bigger, and it's much more "normal"
I agree the absolute value of the total welfare of wild animals is much larger than that of farmed animals. On the other hand, the most popular opportunities to help wild animals focus on ones which only account for a small fraction of the total welfare of wild animals (although I think they change the welfare of soil animals much more). In extreme cases, such opportunities would only improve the welfare of a few wild mammals to avoid the extinction of species. Of course, this is not the target of the Center for Wild Animal Welfare (CWAW), or Wild Animal Initiative (WAI). However, I still wonder about whether CWAW and WAI are focussing too much on what is popular, and underfunding research informing how to increase the welfare of (wild) soil animals. I currently think funding the Arthropoda Foundation is the best option for this. Mal Graham, who together with Bob Fischer "make[s] most of the strategic and granting decisions for Arthropoda", mentioned "We collaborate with Wild Animal Initiative (I'm the strategy director at WAI) to reduce duplication of effort, and have a slightly better public profile for running soil invertebrate studies, so we expect it will generally be Arthropoda rather than WAI who would be more likely to run this kind of program".
Some of the funders who evaluated proposals for the funding circle did cost-effectiveness BOTECs, but not all.
I would be happy to review your BOTECs for free if you think it would be useful.
Thanks, Cody.
will AGI systems become so cheap to run and scalable that they will make it unviable to instead pay a human to do any work?
It is not enough for AIs to be better than humans at jobs defined in an overly narrow sense. Chess engines are much cheaper to run, and play much better than top human chess players, but top players still have jobs.
It's possible AGI will never become cheap and scalable enough for this to happen, but Tabarrok doesn't ever really make an argument that this is so.
I agree Maxwell does not make that argument. On the other hand, humans eventually running out of jobs is not necessarily bad either. Huge automation would increase wealth per capita a lot, and this has been associated with improvements in human welfare per capita throughout history.
Thanks, Matt. I agree. However, "If AIs are a perfect substitute for humans" is a very big if. In particular, it is not enough for AIs to be better than humans at jobs defined in an overly narrow sense. Chess engines are much cheaper to run, and play much better than top human chess players, but top players still have jobs.
I think Maxwell conceded Nathan's point, and I do not know about anyone disputing it in a mathematical sense (for all possible parameters of economic models). However, in practice, what matters is how automation will plausibly affect wages, and human welfare more broadly.
Hi Nathan and Ben.
If comparative advantage is a panacea, why are there fewer horses?
I liked Maxwell's follow-up post What About The Horses?
The following framework explains why horses suffered complete replacement by more advanced technology and why humans are unlikely to face the same fate due to artificial intelligence.
Humans and AIs Aren't Perfect Substitutes but Horses and Engines Were
Technological Growth and Capital Accumulation Will Raise Human Labor Productivity; Horses Can't Use Technology or Capital
Humans Own AIs and Will Spend the Productivity Gains on Goods and Services that Humans Can Produce
Comparative advantage means I'm guaranteed work but not that that work will provide enough for me to eat
I agree. The last section of the post above briefly discusses this.
The argument is plausible and supported by history but it's not a mathematical deduction. The key elements are relative productivity differences, technological improvements that increase labor productivity, and increased income generating demand for goods and services produced by humans.
[...]
Higher wages are not always and everywhere guaranteed, but humans are not likely to face the same fate as horses. We are far from perfect substitutes for AIs which means we can specialize and trade with them, raising our productivity as the AI labor force multiplies. We can take advantage of technological growth and capital accumulation to raise our productivity further. We'll continue inventing new ways to profitably integrate with automated production processes as we have in the past. And we control the abundant wealth that AI automation will create and will funnel it into human pursuits.
Also on comparative advantage, I liked Noah Smith's post Plentiful, high-paying jobs in the age of AI.
Thanks for sharing, Aaron! I like the ambition.
Do you have any thoughts on the effects of HSI on soil animals? For individual welfare per fully-happy-animal-year proportional to "number of neurons"^0.5, I estimate electrically stunning farmed shrimp changes the welfare of soil animals more than it increases the welfare of shrimps if it results in the replacement of more than 0.0374 % of the consumption of the affected farmed shrimp by farmed fish. I can easily see this happening for even a slight increase in the cost of shrimp. I do not know whether the welfare of soil animals would be increased or decreased. So it is very unclear to me whether electrically stunning shrimp increases or decreases welfare (in expectation). In any case, I do not think HSI is amongst the most cost-effective ways of increasing animal welfare. I calculate interventions targeting other farmed animals or humans change animal welfare much more cost-effectively accounting for effects on soil animals resulting from land use changes as long as individual welfare per fully-happy-animal-year is proportional to "number of neurons"^"exponent of the number of neurons", as is illustrated in the graph below. However, given the large uncertainty about the effects on soil animals ("Increase" in the graph below should be interpreted as "Change"), I recommend research informing how to increase the welfare of soil animals over pursuing whatever land use change interventions naively seem to achieve that the most cost-effectively.
Great post!
Instead of undermining pro-animal cost-effectiveness analyses with a human-favoring philosophical theory (namely, Conscious Subsystems itself), Conscious Subsystems appears to support the conclusion that there are scalable nonhuman animal-targeting interventions that are far more cost-effective than GiveWell's recommended charities.
I estimate cage-free corporate campaigns, buying beef, broiler welfare corporate campaigns, GiveWell's top charities, Centre for Exploratory Altruism Research's (CEARCH's) High Impact Philanthropy Fund (HIPF), and Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries roughly as cost-effectively for individual welfare per animal-year proportional to the number of neurons. This is illustrated for an exponent of the number of neurons of 1 in the graph below.
This isn't a situation where it makes sense to maximize expected utility at any given moment. Instead, we should acknowledge our uncertainty, explore related hypotheses and try to figure out whether there's a way to make the questions empirically tractable.
Agreed.
Thank you for the highlights! Impressive output. I enjoyed the read.
In terms of direct work, I think interventions with smaller effects on soil animals as a fraction of those on the target beneficiaries have a lower risk of decreasing animal welfare in expectation. For example, I believe cage-free corporate campaigns have a lower risk of decreasing animal welfare in expectation than decreasing the consumption of chicken meat. For my preferred way of comparing welfare across species (where individual welfare per animal-year is proportional to "number of neurons"^0.5), I estimate decreasing the consumption of chicken meat changes the welfare of soil ants, termites, springtails, mites, and nematodes 83.7 k times as much as it increases the welfare of chickens, whereas I calculate cage-free corporate campaigns change the welfare of such soil animals 1.15 k times as much as they increase the welfare of chickens. On the other hand, in practice, I expect the effects on soil animals to be sufficiently large in both cases for me to be basically agnostic about whether they increase or decrease welfare in expectation.
Thanks, Vicky!
Thanks for the interesting points, Noah.
I said in the post "I analyse the cost-effectiveness of increasing the welfare of soil animals via funding HIPF, which is the most cost-effective way of increasing welfare I am aware of". This was for my best guess at the time that increasing agricultural land increases welfare due to decreasing the number of soil animals, and these having negative lives. However, I am now very uncertain not only about whether soil animals have positive or negative lives, but also about whether increasing agricultural land decreases or increases the number of soil animals. I recommend research informing how to increase the welfare of soil animals over pursuing whatever land use change interventions naively seem to achieve that the most cost-effectively. In addition, I would prioritise more research on comparing the hedonistic welfare of different potential beings (animals, microorganisms, and AIs).
I estimated the cost-effectiveness of HIPF assuming "the [expected hedonistic individual] welfare per animal-year of soil ants/termites/springtails/mites/nematodes is -25 % that of fully happy soil ants/termites/springtails/mites/nematodes". The expected cost-effectiveness only depends on the mean of the distribution of the individual welfare per animal-year as a fraction of that of fully happy animals. So one could have my best guess for the mean of this distribution while being arbitrarily uncertain. For example, a normal distribution with 5th and 95th percentiles of -5.25 (= -0.25 - 5) and 4.75 (= -0.25 + 5) would have a mean of -0.25 (my best guess), standard deviation of 11.1 (= 2*5/(0.95 - 0.05)), and probability of being negative of 50.9 % (= NORMDIST(0, -0.25, 11.1, 1)), which seems reasonable to me. A normal distribution with 5th and 95th percentiles of -50.25 (= -0.25 - 50) and 49.75 (= -0.25 + 50) would have the same mean of -0.25, standard deviation of 111 (= 2*50/(0.95 - 0.05)), and probability of being negative of 50.1 % (= NORMDIST(0, -0.25, 111, 1)). This is 9.00 (= (0.509 - 0.5)/(0.501 - 0.5)) times as close to 50 % as the probability of the 1st distribution being negative, but the expected cost-effectiveness would be the same for both distributions. What changes is that the cost-effectiveness of decreasing the uncertainty about whether soil animals have positive or negative lives is higher for the 2nd distribution.
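For concreteness, here is a minimal Python sketch reproducing the calculations above; it uses the same expressions for the standard deviation and NORMDIST as in this comment, so the outputs match the numbers quoted.

```python
from scipy.stats import norm

mean = -0.25  # my best guess for the welfare per animal-year as a fraction of that of fully happy animals

for half_width in (5, 50):
    p5, p95 = mean - half_width, mean + half_width    # 5th and 95th percentiles
    sd = 2 * half_width / (0.95 - 0.05)               # 11.1 and 111, using the comment's formula
    p_negative = norm.cdf(0, loc=mean, scale=sd)      # NORMDIST(0, mean, sd, 1): 50.9 % and 50.1 %
    print(p5, p95, sd, p_negative)
```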