In your opinion, if C. Elegans is conscious and has some moral significance, and suppose we could hypothetically train artificial neural networks to simulate a C. Elegans, would the resulting simulation have moral significance?
If so, what other consequences flow from this—do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
Thanks for the questions, Huw!
I would say the moral significance of the simulation, which for me is its expected hedonistic welfare per unit time, would tend towards that of the C. elegans as more of the worm's components were accurately simulated. I do not think perfectly simulating the behaviour is enough for the moral significance of the simulation to match that of the C. elegans. I believe simulating some of the underlying mechanisms that produce the behaviour may also be relevant, as Anil Seth discussed on The 80,000 Hours Podcast.
Consciousness does not necessarily imply valenced (positive or negative) subjective experiences (sentience), which is what I care about (I strongly endorse hedonism). C. elegans being conscious with 100 % probability would update me towards them having a greater probability of being sentient, but not by much. I am mostly uncertain about their expected hedonistic welfare per unit time conditional on sentience, not about their probability of sentience. I would say everything, including a Planck volume in deep space vacuum, could have a probability of sentience of more than, for example, 1 % if sentience is operationalised in a very inclusive way. However, more inclusive operationalisations of sentience will lead to a smaller expected hedonistic welfare per unit time conditional on sentience. So I would like discussions of moral significance to focus on the expected hedonistic welfare per unit time instead of just the probability of sentience, or just the expected hedonistic welfare per unit time conditional on sentience.
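To make the decomposition behind this explicit (my framing, assuming the hedonistic welfare per unit time is zero conditional on not being sentient), the two quantities combine as:

$$\text{E}[\text{welfare per unit time}] = P(\text{sentience}) \times \text{E}[\text{welfare per unit time} \mid \text{sentience}]$$

A more inclusive operationalisation of sentience raises the first factor but lowers the second, so neither factor alone pins down the product, which is what matters under hedonism.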
I think increasing the welfare of soil animals will remain much more cost-effective than increasing digital welfare. Assuming digital welfare per FLOP is equal to the welfare per FLOP of a fully healthy human, I calculate the price-performance of digital systems has to surpass 2.23*10^27 FLOP/$ for increasing digital welfare to be more cost-effective than increasing the welfare of soil animals, which corresponds to more than 29.0 doublings from the highest price-performance as of 9 November 2023. This would take 60.9 years given Epoch AI's doubling time of 2.1 years for the FP32 price-performance of machine learning (ML) hardware from 2006 to 2023.
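For reference, here is a minimal sketch of the doubling arithmetic above. The baseline value is my own assumption, backed out from the stated threshold and number of doublings (roughly 2.23*10^27 / 2^29.0 FLOP/$), not a figure quoted in the comment.

```python
import math

# Hedged sketch of the doubling arithmetic; the baseline is an assumption
# backed out from the stated threshold and 29.0 doublings, not a quoted figure.
threshold_flop_per_usd = 2.23e27            # price-performance at which digital welfare wins
baseline_flop_per_usd = 2.23e27 / 2**29.0   # assumed highest price-performance on 9 November 2023 (~4.2e18)
doubling_time_years = 2.1                   # Epoch AI's FP32 ML-hardware doubling time, 2006 to 2023

doublings = math.log2(threshold_flop_per_usd / baseline_flop_per_usd)  # ~29.0
years = doublings * doubling_time_years                                # ~60.9

print(f"{doublings:.1f} doublings, {years:.1f} years")
```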