Vasco Grilo
Thanks for the questions, Huw!
In your opinion, if C. elegans is conscious and has some moral significance, and suppose we could hypothetically train artificial neural networks to simulate a C. elegans, would the resulting simulation have moral significance?
I would say the moral significance, which for me is the expected hedonistic welfare per unit time, of the simulation would tend to that of the C. elegans as more of its components were accurately simulated. I do not think perfectly simulating the behaviour is enough for the moral significance of the simulation to match that of the C. elegans. I believe simulating some of the underlying mechanisms that produced the behaviour may also be relevant, as Anil Seth discussed on The 80,000 Hours Podcast.
Consciousness does not necessarily imply valenced (positive or negative) subjective experiences (sentience), which is what I care about (I strongly endorse hedonism). C. elegans being conscious with 100 % probability would update me towards them having a greater probability of being sentient, but not that much. I am mostly uncertain about their expected hedonistic welfare per unit time conditional on sentience, not about their probability of sentience. I would say everything, including a Planck volume in deep space vacuum, could have a probability of sentience of more than, for example, 1 % if it is operationalised in a very inclusive way. However, more inclusive operationalisations of sentience will lead to a smaller expected hedonistic welfare per unit time conditional on sentience. So I would like discussions of moral significance to focus on the expected hedonistic welfare per unit time instead of just the probability of sentience, or just the expected hedonistic welfare per unit time conditional on sentience.
If so, what other consequences flow from this? Do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
I think increasing the welfare of soil animals will remain much more cost-effective than increasing digital welfare. Assuming digital welfare per FLOP is equal to the welfare per FLOP of a fully healthy human, I calculate the price-performance of digital systems has to surpass 2.23*10^27 FLOP/$ for increasing digital welfare to be more cost-effective than increasing the welfare of soil animals, which corresponds to doubling more than 29.0 times starting from the highest price-performance on 9 November 2023. It would take 60.9 years for this to happen given Epoch AI's doubling time of the FP32 price-performance of machine learning (ML) hardware from 2006 to 2023 of 2.1 years.
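The arithmetic above can be sketched as follows. The threshold price-performance and the doubling time are the figures stated above; the 29.0 doublings come from comparing the threshold to the highest price-performance on 9 November 2023.

```python
# Sketch of the doubling-time arithmetic above.
threshold = 2.23e27          # FLOP/$ needed for digital welfare to be more cost-effective
doublings = 29.0             # stated doublings from the 9 November 2023 peak
doubling_time_years = 2.1    # Epoch AI's FP32 price-performance doubling time (2006 to 2023)
years_needed = doublings * doubling_time_years
print(years_needed)  # 60.9
```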
Thanks, Daniel!
The Conscious Nematode: Exploring Hallmarks of Minimal Phenomenal Consciousness in Caenorhabditis Elegans
Thanks for the evaluations, Aidan! I think this project is very valuable.
Welcome to the EA Forum, Shloka! Thanks for the great comment. I strongly upvoted it.
I have not looked into the effects of land use change on different groups of nematodes. From Table 1 of van den Hoogen et al. (2019), which is below, the most abundant soil nematodes are bacterivores and herbivores, so I speculate effects on these are the most important. However, I agree a given land use change may increase the welfare of nematodes of a given type, but decrease that of ones of a different type. This strengthens my conclusion that the priority is research informing how to increase the welfare of soil animals, not pursuing whatever land use change interventions naively seem to achieve that the most cost-effectively.
Hi Abraham.
I think the opportunity for impact for wild animal welfare is way bigger, and itās much more ānormalā
I agree the absolute value of the total welfare of wild animals is much larger than that of farmed animals. On the other hand, the most popular opportunities to help wild animals focus on ones which only account for a small fraction of the total welfare of wild animals (although I think they change the welfare of soil animals much more). In extreme cases, such opportunities would only improve the welfare of a few wild mammals to avoid the extinction of species. Of course, this is not the target of the Center for Wild Animal Welfare (CWAW), or Wild Animal Initiative (WAI). However, I still wonder about whether CWAW and WAI are focussing too much on what is popular, and underfunding research informing how to increase the welfare of (wild) soil animals. I currently think funding the Arthropoda Foundation is the best option for this. Mal Graham, who together with Bob Fischer "make[s] most of the strategic and granting decisions for Arthropoda", mentioned "We collaborate with Wild Animal Initiative (I'm the strategy director at WAI) to reduce duplication of effort, and have a slightly better public profile for running soil invertebrate studies, so we expect it will generally be Arthropoda rather than WAI who would be more likely to run this kind of program".
Some of the funders who evaluated proposals for the funding circle did cost-effectiveness BOTECs, but not all.
I would be happy to review your BOTECs for free if you think it would be useful.
Thanks, Cody.
will AGI systems become so cheap to run and scalable that they will make it unviable to instead pay a human to do any work?
It is not enough for AIs to be better than humans at jobs defined in an overly narrow sense. Chess engines are much cheaper to run, and play much better than top human chess players, but those players still have jobs.
It's possible AGI will never become cheap and scalable enough for this to happen, but Tabarrok doesn't ever really make an argument that this is so.
I agree Maxwell does not make that argument. On the other hand, humans eventually running out of jobs is not necessarily bad either. Huge automation would increase wealth per capita a lot, and this has been associated with improvements in human welfare per capita throughout history.
Thanks, Matt. I agree. However, "If AIs are a perfect substitute for humans" is a very big if. In particular, it is not enough for AIs to be better than humans at jobs defined in an overly narrow sense. Chess engines are much cheaper to run, and play much better than top human chess players, but those players still have jobs.
I think Maxwell conceded Nathan's point, and I do not know about anyone disputing it in a mathematical sense (for all possible parameters of economic models). However, in practice, what matters is how automation will plausibly affect wages, and human welfare more broadly.
Hi Nathan and Ben.
If comparative advantage is a panacea, why are there fewer horses?
I liked Maxwell's follow-up post What About The Horses?.
The following framework explains why horses suffered complete replacement by more advanced technology and why humans are unlikely to face the same fate due to artificial intelligence.
Humans and AIs Arenāt Perfect Substitutes but Horses and Engines Were
Technological Growth and Capital Accumulation Will Raise Human Labor Productivity; Horses Canāt Use Technology or Capital
Humans Own AIs and Will Spend the Productivity Gains on Goods and Services that Humans Can Produce
Comparative advantage means Iām guaranteed work but not that that work will provide enough for me to eat
I agree. The last section of the post above briefly discusses this.
The argument is plausible and supported by history but it's not a mathematical deduction. The key elements are relative productivity differences, technological improvements that increase labor productivity, and increased income generating demand for goods and services produced by humans.
[...]
Higher wages are not always and everywhere guaranteed, but humans are not likely to face the same fate as horses. We are far from perfect substitutes for AIs which means we can specialize and trade with them, raising our productivity as the AI labor force multiplies. We can take advantage of technological growth and capital accumulation to raise our productivity further. We'll continue inventing new ways to profitably integrate with automated production processes as we have in the past. And we control the abundant wealth that AI automation will create and will funnel it into human pursuits.
Also on comparative advantage, I liked Noah Smith's post Plentiful, high-paying jobs in the age of AI.
Thanks for sharing, Aaron! I like the ambition.
Do you have any thoughts on the effects of HSI on soil animals? For individual welfare per fully-happy-animal-year proportional to "number of neurons"^0.5, I estimate electrically stunning farmed shrimp changes the welfare of soil animals more than it increases the welfare of shrimps if it results in the replacement of more than 0.0374 % of the consumption of the affected farmed shrimp by farmed fish. I can easily see this happening for even a slight increase in the cost of shrimp. I do not know whether the welfare of soil animals would be increased or decreased. So it is very unclear to me whether electrically stunning shrimp increases or decreases welfare (in expectation). In any case, I do not think HSI is amongst the most cost-effective ways of increasing animal welfare. I calculate interventions targeting other farmed animals or humans change animal welfare much more cost-effectively accounting for effects on soil animals resulting from land use changes as long as individual welfare per fully-happy-animal-year is proportional to "number of neurons"^"exponent of the number of neurons", as is illustrated in the graph below. However, given the large uncertainty about the effects on soil animals ("Increase" in the graph below should be interpreted as "Change"), I recommend research informing how to increase the welfare of soil animals over pursuing whatever land use change interventions naively seem to achieve that the most cost-effectively.
AGI Will Not Make Labor Worthless
Great post!
Instead of undermining pro-animal cost-effectiveness analyses with a human-favoring philosophical theory (namely, Conscious Subsystems itself), Conscious Subsystems appears to support the conclusion that there are scalable nonhuman animal-targeting interventions that are far more cost-effective than GiveWell's recommended charities.
I estimate cage-free corporate campaigns, buying beef, broiler welfare corporate campaigns, GiveWell's top charities, Centre for Exploratory Altruism Research's (CEARCH's) High Impact Philanthropy Fund (HIPF), and Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries roughly as cost-effectively for individual welfare per animal-year proportional to the number of neurons. This is illustrated for an exponent of the number of neurons of 1 in the graph below.
This isn't a situation where it makes sense to maximize expected utility at any given moment. Instead, we should acknowledge our uncertainty, explore related hypotheses and try to figure out whether there's a way to make the questions empirically tractable.
Agreed.
Thank you for the highlights! Impressive output. I enjoyed the read.
In terms of direct work, I think interventions with smaller effects on soil animals as a fraction of those on the target beneficiaries have a lower risk of decreasing animal welfare in expectation. For example, I believe cage-free corporate campaigns have a lower risk of decreasing animal welfare in expectation than decreasing the consumption of chicken meat. For my preferred way of comparing welfare across species (where individual welfare per animal-year is proportional to "number of neurons"^0.5), I estimate decreasing the consumption of chicken meat changes the welfare of soil ants, termites, springtails, mites, and nematodes 83.7 k times as much as it increases the welfare of chickens, whereas I calculate cage-free corporate campaigns change the welfare of such soil animals 1.15 k times as much as they increase the welfare of chickens. On the other hand, in practice, I expect the effects on soil animals to be sufficiently large in both cases for me to be basically agnostic about whether they increase or decrease welfare in expectation.
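For readers unfamiliar with the neuron-count weighting I keep referring to, here is a minimal sketch. The exponent of 0.5 is the one I use above; the neuron counts below are rough illustrative figures (302 is the known neuron count of adult hermaphrodite C. elegans; the chicken figure is an approximate order of magnitude), not the inputs to my actual estimates.

```python
# Sketch of weighting individual welfare per fully-happy-animal-year
# by "number of neurons"^exponent, as in my preferred comparison method.
def welfare_weight(n_neurons: float, exponent: float = 0.5) -> float:
    return n_neurons ** exponent

# Illustrative neuron counts (not the ones used in my estimates).
neurons = {"chicken": 2.2e8, "nematode": 302}

w_chicken = welfare_weight(neurons["chicken"])
w_nematode = welfare_weight(neurons["nematode"])
# With an exponent of 0.5, one chicken-year counts as much as roughly
# this many nematode-years:
print(round(w_chicken / w_nematode))  # 854
```

A smaller exponent compresses the differences across species, which is why the exponent matters so much for whether soil-animal effects dominate.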
Thanks, Vicky!
If we're not something like robustly certain that factory farming increases animal welfare then we're not robustly certain anything increases animal welfare.
I think you meant "stopping factory-farming". I would say research on the welfare of soil animals has a much lower risk of decreasing welfare in expectation.
Trading can happen second to second. Real work on real issues requires years of planning and many years of carrying out.
Here is how I think about this.
Making "Tiny shifts" in a charity portfolio isn't super practical.
I do not know what you mean by this. However, what I meant is that it makes sense to recommend Y over X if Y is more cost-effective at the margin than X, and the recommendation is not expected to change the marginal cost-effectiveness of X and Y much as a result of changes in their funding caused by the recommendation (which I believe applies to my post).
Thanks, Tristan. I have now replaced "evidence for bugs not being weird" with "evidence for interest in bugs not being weird", which is what I meant (in agreement with my subsequent comment about Google Trends).
For my assumed human performance of 10^15 FLOP/s, and human basal metabolic power of 80 W, the computational energy efficiency (operations per unit energy) of humans is 12.5 k GFLOP/J (= 10^15/80). This is 5.00 times (= 12.5*10^3/(2.5*10^3)) that of NVIDIA B100 (released on 15 November 2024), which is the ML hardware on Epoch AI's database with the highest computational energy efficiency. Epoch AI estimates the computational energy efficiency of ML hardware has increased 30 % per year, as is illustrated below. At this rate, ML hardware will reach the computational energy efficiency of humans in 6.13 years (= LN(5.00)/LN(1 + 0.3)). As a result, digital welfare per unit energy consumption will soon be similar to human welfare per unit energy consumption if digital welfare per FLOP is similar to human welfare per FLOP.
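The comparison above can be reproduced with a few lines. The human performance (10^15 FLOP/s) and basal metabolic power (80 W) are the assumptions stated above, and 2.5 k GFLOP/J is the B100 figure cited from Epoch AI's database.

```python
import math

# Human computational energy efficiency, in GFLOP/J.
human_eff = 1e15 / 80 / 1e9     # = 12500, i.e. 12.5 k GFLOP/J
b100_eff = 2.5e3                # NVIDIA B100, GFLOP/J
ratio = human_eff / b100_eff    # = 5.00

# Years for ML hardware growing 30 %/year to close a 5.00x gap.
years = math.log(ratio) / math.log(1 + 0.3)
print(round(years, 2))  # 6.13
```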
Computational energy efficiency tends to increase with performance, as is illustrated by the data below collected by Epoch AI. I assume organisms with a smaller individual mass tend to have lower performance. So I think humans have a higher computational energy efficiency than animals, and that animals have a higher one than microorganisms.