EDIT: It looks like you heavily edited your comment so my reply here doesn’t make much sense anymore.
Well, different people at MIRI have different opinions, so I don’t want to treat them as a monolith. Nate explicitly agrees with me that extrapolating from human values could be really bad for non-human animals; Rob thinks vegetarianism is morally obligatory; Katja thinks animals matter but vegetarianism is probably not useful; Eliezer doesn’t think animals matter.
It seems to me that they’re likely right that the future welfare of animals is not particularly important (it would make more sense for computational real estate to be used by the people who can buy it, who would then try to optimise their use of it, rather than for housing animals, which are unlikely to be optimal for any purpose in particular).
I agree, as I explain here. But I’m not that confident, and given that in expectation non-human animals currently account for maybe 99.9%[^1] of the utility of the world, it’s pretty important that we get this right. I’m not remotely comfortable saying “Well, according to this wild speculation that seems prima facie reasonable, wild animals won’t exist in the future, so we can safely ignore these beings that currently account for 99.9% of the utility.”
might there be an update in the works suggesting that they’re also less wrong about animals than they had seemed?
I don’t know what you mean by “less wrong about animals.” Less wrong about what, exactly? Do you mean about how morally valuable animals are? About the probability that wild animal suffering will dominate the far future?
It’s plausible that a lot of AI researchers have explicitly reasoned about why they expect safety research to be good for the far future even when you don’t massively discount the value of animals. The only person I’ve seen discuss this publicly is Carl Shulman, and I’ve talked to Nate Soares about it privately so I know he’s thought about it. But all of MIRI’s public materials are entirely focused on why AI safety is important for humans, and make no mention of non-humans (i.e. almost all the beings that matter). Nate has adequately convinced me that he has thought about these issues but I haven’t seen evidence that anyone else at MIRI has thought about them. I’m sure some of them have but I’m in the dark about it. Since hardly anyone talks publicly about this, I used “cares about animals/is veg*an” as a proxy for “will try to make sure that an AI produces a future that’s good for all beings, not just humans.” This is an imperfect metric but it’s the best I could do in some cases. I did speak to Nate about this directly though and I felt good about his response.
Of course I did still come out strongly in favor of MIRI, and I’m supporting REG because I expect REG to produce a lot of donations to MIRI in the future.
As Carl points out, it’s not the case that non-human animals account for 99.9% of utility if you’re using brain mass as a heuristic for the importance of each animal.
I don’t know what you mean by “less wrong about animals.” Less wrong about what, exactly?
About how important valuing animals is to the future? Though Katja and Robin are at a different point on the spectrum from you on this question, epistemic modesty suggests it’s better to avoid penalizing them for their views.
It sounds like you and Michael just have different values. It’s pretty clear that you’d only find Michael’s argument compelling if you share his view on animals. If you don’t, you’d place a different weight on the risk of MIRI doing a lot of bad things to animals.
I disagree that “[f]rom the reader’s point of view, this kind of argument shouldn’t get much weight.” It should get weight for readers that agree with the value, and shouldn’t get weight for readers that disagree with the value.
No, that’s exactly the issue: I want as much as the next person to see animals have better lives. I just don’t see why the ratio of animals to humans would be high in the future, especially if you weight moral consideration by brain mass or information states.
I just don’t see why the ratio of animals to humans would be high in the future
I agree with you that it probably won’t be high. But I would have to be >99% confident that animals won’t comprise much of the utility of the far future for me to be willing to just ignore this factor, and I’m nowhere near that confident. Maybe you’re just a lot more confident than I am.
As Carl points out, it’s not the case that non-human animals account for 99.9% of utility if you’re using brain mass as a heuristic for the importance of each animal.
That’s a good point. I’d like to see what the numbers look like when you include wild animals too.
Most of the neural mass will be wild animals, but I think more like 90% than 99.9% (the ratio has changed by orders of magnitude in recent thousands of years, and only needs to go a bit further on a log scale for human brain mass to dominate). Unless you very confidently think that a set of neurons being incorporated into a larger structure destroys almost all of their expected value, the ‘small animals are dominant’ logic can likewise be used to say ‘small neural systems are dominant, within and between animals.’ If sapient populations grow rapidly (e.g. AI) then wild animals (including simulated ones) would be absolutely dwarfed on this measure. However, non-sapient artificial life might or might not use more computation than sapient artificial beings.
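As a concrete illustration of the log-scale point (the shares below are hypothetical round numbers, not estimates from this thread):

```python
# Hypothetical shares of total neural mass, purely to illustrate the log-scale point.
import math

for wild_share in (0.999, 0.99, 0.9, 0.5):
    human_share = 1 - wild_share
    ratio = wild_share / human_share
    print(f"wild share {wild_share:5.3f}: wild/human ratio ~{ratio:7.1f} "
          f"({math.log10(ratio):.1f} orders of magnitude)")
```

At a 90% share the gap is only about one order of magnitude, so a further roughly tenfold shift toward human (or other sapient) neural mass flips which side dominates, whereas a 99.9% share would take a thousandfold shift.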
Also, there can be utility monsters both above and below. The number of states a brain can be in goes up exponentially as you add bits. The finite numbers it can represent (for pleasure, pain, preferences) go up super-exponentially. If you think a simple reinforcement learning Pac-Man program isn’t enough for much moral value, that it needs more sensory or processing complexity, then you’re allowing that the values of preferences and rewards can scale depending on other features of the system. And once you allow that, it is plausible that parallel reinforcement/decision processes in a large mind will get a higher multiplier (i.e. not only will there be more neural-equivalent processes doing reinforcement updating, but each individual one will get a larger multiplier due to the system it is embedded in).
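One toy way to make the exponential vs. super-exponential contrast concrete (an illustrative sketch, not from the comment): with n bits there are 2^n distinct states, and a plain binary register holds magnitudes only up to 2^n - 1, but a system free to use a richer encoding of magnitude (here, an assumed encoding that reads an n-bit budget as a power tower of height n) can denote vastly larger finite numbers with the same bits.

```python
# Toy comparison: largest magnitude encodable in n bits under a plain binary
# encoding vs. an assumed "power tower of height n" encoding of the same bit budget.

def plain_binary_max(n_bits: int) -> int:
    """Largest unsigned integer an n-bit binary register can hold."""
    return 2**n_bits - 1

def power_tower(height: int) -> int:
    """2^2^...^2 with `height` twos, which grows super-exponentially in height."""
    value = 1
    for _ in range(height):
        value = 2**value
    return value

for n in range(1, 6):
    tower = power_tower(n)
    print(f"n={n}: 2^n = {2**n} states, binary max {plain_binary_max(n)}, "
          f"power tower has {len(str(tower))} digit(s)")
```

By n = 5 the binary register tops out at 31 while the power-tower reading already has nearly 20,000 digits, which is the sense in which representable magnitudes can outrun the (merely exponential) count of states.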
The conclusion that no existing animal will be maximally efficient at producing welfare according to a fairly impartial hedonistic utilitarianism is on much firmer ground than the conclusion that the maximally efficient production system on that ethical theory would involve exceedingly tiny minds rather than vast ones or enhanced medium-size ones, or complex systems overlapping these scales.
Small insects (the most common kind) have on the order of 10,000 neurons. One estimate puts the insect population at 10^18, implying 10^22 insect neurons in total. Humans collectively have about 10^21 neurons. However, smaller organisms tend to have smaller cells, so if you go by neural mass, humans might actually be dominant. Of course there are other groups of wild and domestic animals, but this gives you some idea.
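A rough tally of these figures (the human population and per-human neuron count below are assumed round numbers, not taken from the comment):

```python
# Order-of-magnitude tally of neuron counts, using the estimates quoted above
# plus assumed round numbers for the human side.
insect_count = 1e18          # "one estimate is 10^18 insects"
neurons_per_insect = 1e4     # small insects: on the order of 10,000 neurons
human_count = 8e9            # assumed: roughly 8 billion humans
neurons_per_human = 8.6e10   # assumed: roughly 86 billion neurons per human brain

insect_neurons = insect_count * neurons_per_insect   # ~1e22
human_neurons = human_count * neurons_per_human      # ~7e20, i.e. roughly 1e21

print(f"insect neurons ~ {insect_neurons:.0e}")
print(f"human neurons  ~ {human_neurons:.0e}")
print(f"insect / human neuron ratio ~ {insect_neurons / human_neurons:.0f}x")
```

On these assumed figures insects come out roughly 15x ahead by raw neuron count, which is why the cell-size point matters: weighting by neural mass rather than neuron count could plausibly put humans on top.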
I’m just wary of making confident predictions of the far future. A lot can change in a million years...