Thanks for the comment, Lucius!
I guess the welfare per FLOP of current AI systems is lower than human welfare per FLOP because humans are sentient, whereas AI systems may not be, but I do not know how to estimate digital welfare in any principled way. It would be great to have some research on estimating digital welfare in QALY/FLOP, which matters much more from the point of view of increasing welfare than the probability of consciousness or sentience, which is often the focus of discussion.
For my preferred exponent of 0.5 on the number of neurons, price-performance would have to double more than 29.0 times (becoming 530 million times as high), starting from its highest value on 9 November 2023, for increasing digital welfare to become more cost-effective than increasing the welfare of soil animals. I think the world after so many doublings would be very different from the current one, which makes me pessimistic about our ability to influence it. It would be like trying to influence today's digital welfare via interventions 60.9 years ago, which is my best guess for the time from 9 November 2023 until increasing digital welfare becomes as cost-effective as increasing the welfare of soil animals, assuming price-performance doubles every 2.1 years.
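For concreteness, the arithmetic behind those figures can be checked with a short sketch. The doubling count and doubling time are the values stated above; everything else is just exponentiation and multiplication:

```python
# Check the doubling arithmetic from the comment above.
doublings = 29.0           # required doublings of price-performance
doubling_time_years = 2.1  # assumed doubling time of price-performance

growth_factor = 2 ** doublings                    # about 530 million times as high
time_to_parity = doublings * doubling_time_years  # 60.9 years

print(f"growth factor: {growth_factor:.3g}")
print(f"years until cost-effectiveness parity: {time_to_parity:.1f}")
```

This only reproduces the stated numbers; it does not bear on whether the 0.5 exponent or the 2.1-year doubling time are the right assumptions.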
I would look out for Rethink Priorities’ digital consciousness model (and the other work they are doing here), which should be coming out soon-ish. I don’t think they would call it definitive in any sense, but it could be helpful here.
I think a major way this could be wrong is if you think we could get lots of digital minds within a few decades, and that early research and public engagement can have an outsized impact on shaping the conversation. That could make digital minds way more important, I think.
I’m also generally pretty interested in people doing more digital minds cross-cause prioritisation (I’m working on a piece now)!
Thanks for the comment, Noah.
I am looking forward to the results of RP’s Digital Consciousness Project, but I do not expect any significant updates to my views. It focusses on the probability of consciousness, but I think this says very little about the (expected hedonistic) welfare per unit time. This is because I believe there is much more uncertainty in the welfare per unit time conditional on consciousness than in the probability of consciousness.
I suspect the number of digital minds is not among the most relevant parameters to track. It may not be proportional to total digital welfare, because more digital minds will tend to have less individual welfare per unit time. I would focus more on digital welfare per FLOP, and on FLOP per year.
I am also interested in more comparisons between the promise of increasing biological and digital welfare. Nice to know you are working on a piece!