Unsurprisingly, I agree with a lot of this! It’s nice to see these principles laid out clearly and concisely.
You write:
AI welfare is potentially an extremely large-scale issue. In the same way that the invertebrate population is much larger than the vertebrate population at present, the digital population has the potential to be much larger than the biological population in the future.
Do you know of any work that estimates these sizes? There are various places that people have estimated the ‘size of the future’ including potential digital moral patients in the long run, but do you know of anything that estimates how many AI moral patients there could be by (say) 2030?
No, but this would be useful! Some quick thoughts:
A lot depends on our standard for moral inclusion. If we think that we should include all potential moral patients in the moral circle, then we might include a large number of near-term AI systems. If, in contrast, we think that we should include only beings with at least, say, a 0.1% chance of being moral patients, then we might include a smaller number.
With respect to the AI systems we include, one question is how many there will be. This is partly a question about moral individuation. Insofar as digital minds are connected, we might see the world as containing a large number of small moral patients, a small number of large moral patients, or both. Luke Roelofs and I will be releasing work about this soon.
Another question is how much welfare they might have. No matter how we individuate them, they could have a lot, either because a large number of them have a small amount, a small number of them have a large amount, or both. I discuss possible implications here: https://www.tandfonline.com/doi/abs/10.1080/21550085.2023.2200724
It also seems plausible that some digital minds could process welfare more efficiently than biological minds because they lack our evolutionary baggage. But assessing this claim requires developing a framework for making intersubstrate welfare comparisons, which, as I note in the post, will be difficult. Bob Fischer and I will be releasing work about this soon.
A few weeks ago I did a quick calculation for the amount of digital suffering I expect in the short term, which probably gets at your question about these sizes. TL;DR of my thinking on the topic (the core arithmetic is also sketched in code below):
There is currently a global compute stock of ~1.4e21 FLOP/s (each second, we can do about that many floating point operations).
It seems reasonable to expect this to grow ~40x in the next 10 years based on naively extrapolating current trends in spending and compute efficiency per dollar. That brings us to 1.6e23 FLOP/s in 2033.
Human brains do about 1e15 FLOP/s (each second, a human brain does about 1e15 floating point operations’ worth of computation).
We might naively assume that future AIs will have similar consciousness-compute efficiency to humans. We’ll also assume that 63% of the 2033 compute stock is being used to run such AIs (which makes the numbers easier).
Then the compute running such AIs is 0.63 × 1.6e23 ≈ 1e23 FLOP/s, so the number of human-consciousness-second-equivalent AIs that can be run each second in 2033 is 1e23 / 1e15 = 1e8, or 100 million.
For reference, there are probably around 31 billion land animals alive on factory farms at any given second. I make a few adjustments based on brain size and guesses about the experience of suffering AIs, and get that digital suffering in 2033 seems to be similar in scale to factory farming.
Overall my analysis is extremely uncertain, and I wouldn’t be surprised if it were off by three orders of magnitude in either direction. Also note that I am only looking at the short term.
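Here is the core arithmetic as a minimal Python sketch, taking the figures above at face value. The brain-size and suffering-intensity adjustments mentioned above are left out, since I only gesture at them here; all numbers are rough estimates rather than measurements.

```python
# Rough sketch of the BOTEC above; every figure is a rough estimate, not a measurement.

compute_stock_2033 = 1.6e23      # estimated global compute stock in 2033, FLOP/s
human_brain_flops = 1e15         # rough FLOP/s equivalent of one human brain
fraction_running_ais = 0.63      # assumed share of 2033 compute running such AIs

# Compute devoted to AIs with human-like consciousness-compute efficiency (~1e23 FLOP/s)
ai_flops_2033 = fraction_running_ais * compute_stock_2033

# Human-consciousness-second-equivalents runnable each second in 2033 (~1e8)
human_equivalents_per_second = ai_flops_2033 / human_brain_flops

# Reference point: ~3.1e10 land animals alive on factory farms at any given second
factory_farmed_land_animals = 3.1e10

print(f"{human_equivalents_per_second:.1e} human-equivalent AI-seconds per second in 2033")
print(f"{factory_farmed_land_animals:.1e} factory-farmed land animals at any given second")
```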
You can read the slightly more thorough, but still extremely rough and likely wrong, BOTEC here.
Somewhat relatedly, do you happen to have a guess for the welfare range of GPT-4 compared to that of a human? Feel free to give a 90% confidence interval spanning as many orders of magnitude as you like. My intuitive guess would be something like a loguniform distribution ranging from 10^-6 to 1, whose mean of about 0.07 is similar to Rethink Priorities’ median welfare range for bees.
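In case it is useful, here is a quick check that the mean of that loguniform distribution is indeed about 0.07 (the analytic mean of a loguniform on [a, b] is (b − a)/(ln b − ln a)):

```python
import numpy as np
from scipy.stats import loguniform

a, b = 1e-6, 1.0

# Mean of a loguniform distribution on [a, b]: (b - a) / (ln(b) - ln(a))
analytic_mean = (b - a) / (np.log(b) - np.log(a))

# Cross-check by sampling
samples = loguniform.rvs(a, b, size=1_000_000, random_state=0)

print(f"analytic mean: {analytic_mean:.3f}")                          # ~0.072
print(f"sampled mean:  {samples.mean():.3f}")                         # ~0.072
print(f"5th-95th percentiles: {np.quantile(samples, [0.05, 0.95])}")  # ~[2e-6, 0.5]
```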