It’s pretty harsh to defund people’s organisations because they make carefully reasoned arguments that disagree with your conclusions! I’m a vegetarian and thought the arguments were strong, so it’s hard to write that off as motivated reasoning. If you want to make a balanced judgement of what their blog posts say about their values, mightn’t you want to do a more balanced survey of what the key players have written on a wider range of topics, rather than the one that reached your newsfeed because its claims seemed outrageous? It’d feel similarly unfair if people tried to discredit whatever outreach efforts I was performing because I’d made (quite good—or so I thought) arguments that organ donation was ineffective.
I assume you’re referring to my discussion of MIRI.
I’m NOT saying that some MIRI employees don’t care about animals, therefore they’re bad at reasoning. That’s NOT what I’m saying, and frankly that would be silly. Eliezer doesn’t care about animals but I believe he’s much smarter and probably more rational than I am.
What I AM saying is this:
1. MIRI/FAI researchers may have a large influence on what values end up shaping the far future.
2. Some sorts of FAI research are more likely to work out well for non-human animals than others. (I discuss this in the OP.)
3. Therefore, I should want FAI researchers to have good values, and in particular to assign appropriate worth to non-human animals, because I think this is by far the biggest potential failure mode. I want to trust that they will choose to do the sorts of research that will work out well for non-human animals.
4. So I will attempt to assess how much value MIRI researchers assign to non-human animals, because this question is relevant to how much good I think they will produce for the far future.
This has nothing to do with my meta-level assessment of MIRI employees’ reasoning abilities and everything to do with their object-level beliefs on an issue that could be critically important for the shape of the far future.
I find this consideration less important than I used to because I’m more confident that preventing human extinction is net positive, but I still thought it was worth discussing.
You’re sceptical of their organisation because you disagree with them on the object-level topic of animals, to which they assign less importance than you do, right?
From the reader’s point of view, this kind of argument shouldn’t get much weight.
Why would the future welfare of animals be important in a future world with AIs? It’d make more sense for computing resources to be used to create things that people want (like fun virtual worlds?), with people optimising their use of those resources rather than filling them with animals, which are unlikely to be useful for any specific human purpose, except perhaps as pets. Moreover, the activities of animals themselves are not going to have any long-run impacts. For reasons related to these two points, it seems to me that those who argue that being vegetarian now is not useful in the long run are closer to the mark than those like Rob (who are nonetheless well represented at MIRI) who argue that it is morally obligatory.
And at the bottom of all of this, the reader will note that you have converged toward MIRI’s views on other topics, like the importance of AI research and existential risk reduction, and there’s little reason you couldn’t update your views to be closer to the average of reasonable positions on this topic.
The argument ‘I won’t fund this because they criticised an endeavour that I value’ also creates a bad incentive, but at any rate, it seems appropriate to downweight it.
I still feel like you’re misunderstanding my position but I don’t know how to explain it any differently than I already have, so I’ll just address some things I haven’t talked about yet.
A lot of what you’re talking about here is how I should change my beliefs when other smart people have different beliefs from me, which is a really complex question that I don’t know how to answer in a way that makes sense. I get the impression that you think I should put more weight on the fact that some MIRI researchers don’t think animals are important for the far future; and I don’t think I should do that.
I already agree that wild animals probably won’t exist in the far future, assuming humans survive. I also generally agree with Nate’s beliefs on non-human animals and I expect that he does a good job of considering their interests when he makes decisions. And my current best guess is that MIRI is the strongest object-level charity in the world. I don’t think I disagree with MIRI as much as you think I do.
Edited to add: I have seen evidence that Nate is asking questions like, “What makes a being conscious?” “How do we ensure that an AI makes all these beings well off and not just humans?” AI safety researchers need to be asking these questions.
EDIT: It looks like you heavily edited your comment so my reply here doesn’t make much sense anymore.
Well, different people at MIRI have different opinions, so I don’t want to treat them as a monolith. Nate explicitly agrees with me that extrapolating from human values could be really bad for non-human animals; Rob thinks vegetarianism is morally obligatory; Katja thinks animals matter but vegetarianism is probably not useful; Eliezer doesn’t think animals matter.
It seems to me that they’re likely right that the future welfare of animals is not particularly important (it’d make more sense for computational real estate to be used by the people who could buy it, who would then optimise their use of it, rather than for it to be filled with animals, which are unlikely to be optimal for any particular purpose).
I agree, as I explain here. But I’m not that confident, and given that in expectation non-human animals currently account for maybe 99.9%[^1] of the utility of the world, it’s pretty important that we get this right. I’m not remotely comfortable saying “Well, according to this wild speculation that seems prima facie reasonable, wild animals won’t exist in the future, so we can safely ignore these beings that currently account for 99.9% of the utility.”
might there be an update in the works that they might be less wrong about animals also than they had seemed?
I don’t know what you mean by “less wrong about animals.” Less wrong about what, exactly? Do you mean about how morally valuable animals are? About the probability that wild animal suffering will dominate the far future?
It’s plausible that a lot of AI researchers have explicitly reasoned about why they expect safety research to be good for the far future even when you don’t massively discount the value of animals. The only person I’ve seen discuss this publicly is Carl Shulman, and I’ve talked to Nate Soares about it privately so I know he’s thought about it. But all of MIRI’s public materials are entirely focused on why AI safety is important for humans, and make no mention of non-humans (i.e. almost all the beings that matter). Nate has adequately convinced me that he has thought about these issues but I haven’t seen evidence that anyone else at MIRI has thought about them. I’m sure some of them have but I’m in the dark about it. Since hardly anyone talks publicly about this, I used “cares about animals/is veg*an” as a proxy for “will try to make sure that an AI produces a future that’s good for all beings, not just humans.” This is an imperfect metric but it’s the best I could do in some cases. I did speak to Nate about this directly though and I felt good about his response.
Of course I did still come out strongly in favor of MIRI, and I’m supporting REG because I expect REG to produce a lot of donations to MIRI in the future.
As Carl points out, it’s not the case that non-human animals account for 99.9% of utility if you’re using brain mass as a heuristic for the importance of each animal.
I don’t know what you mean by “less wrong about animals.” Less wrong about what, exactly?
About how important valuing animals is to the future? Though Katja and Robin are at a different point on the spectrum from you on this question, epistemic modesty suggests it’s better to avoid penalizing them for their views.
It sounds like you and Michael just have different values. It’s pretty clear that you’d only find Michael’s argument persuasive if you share his views on animals. If you don’t, you’d place a different weight on the importance of the risk of MIRI doing a lot of bad things to animals.
I disagree that “[f]rom the reader’s point of view, this kind of argument shouldn’t get much weight.” It should get weight for readers who agree with the value, and shouldn’t get weight for readers who disagree with it.
No, that’s exactly the issue—I want as much as the next person to see animals have better lives. I just don’t see why the ratio of humans to animals would be high in the future, especially if you weight moral consideration by brain mass or information states.
I just don’t see why the ratio of humans to animals would be high in the future
I’m just wary of making confident predictions about the far future; a lot can change in a million years. I agree with you that it probably won’t be high. But I would have to be >99% confident that animals won’t comprise much of the utility of the far future for me to be willing to just ignore this factor, and I’m nowhere near that confident. Maybe you’re just a lot more confident than I am.
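To spell out why that confidence bar is so high, here is a toy expected-value sketch. The 90% “share if animals do dominate” figure is purely an illustrative assumption, not a number from this discussion:

```python
# Toy expected-value sketch of the ">99% confident" point above.
# The 90% share used if animals do dominate is an illustrative assumption.
def expected_animal_share(p_dominate, share_if_dominate=0.9):
    """Expected fraction of far-future utility attributable to animals."""
    return p_dominate * share_if_dominate

for p in (0.10, 0.01):
    print(f"P(animals comprise much of future utility) = {p:.0%}"
          f" -> expected share ~ {expected_animal_share(p):.0%}")
# At 10% the expected share (~9%) is clearly not ignorable;
# only around 1% does ignoring the factor start to look safe.
```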
As Carl points out, it’s not the case that non-human animals account for 99.9% of utility if you’re using brain mass as a heuristic for the importance of each animal.
That’s a good point. I’d like to see what the numbers look like when you include wild animals too.
Most of the neural mass is in wild animals, but I think the share is more like 90% than 99.9% (the ratio has changed by orders of magnitude in recent thousands of years, and only needs to go a bit further on a log scale for human brain mass to dominate). Unless you very confidently think that a set of neurons being incorporated into a larger structure destroys almost all of their expected value, the ‘small animals are dominant’ logic can likewise be used to say ‘small neural systems are dominant, within and between animals.’ If sapient populations grow rapidly (e.g. with AI), then wild animals (including simulated ones) would be absolutely dwarfed on this measure. However, non-sapient artificial life might or might not use more computation than sapient artificial beings.
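To make the log-scale point concrete, a quick back-of-the-envelope sketch, using the two shares mentioned above as illustrative inputs rather than measurements:

```python
# How far the wild:human neural-mass ratio would need to shift (in factors of ten)
# before human neural mass dominates, for the two shares discussed above.
import math

def orders_of_magnitude_to_parity(wild_share):
    """log10 of the wild:human neural-mass ratio implied by a given wild share."""
    ratio = wild_share / (1 - wild_share)
    return math.log10(ratio)

for share in (0.90, 0.999):
    ratio = share / (1 - share)
    print(f"wild share {share:.1%}: ratio ~ {ratio:.0f}:1, "
          f"~{orders_of_magnitude_to_parity(share):.1f} orders of magnitude from parity")
# A 90% share is only ~1 order of magnitude from parity; a 99.9% share is ~3.
```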
Also, there can be utility monsters both above and below. The number of states a brain can be in goes up exponentially as you add bits, and the finite numbers it can represent (for pleasure, pain, preferences) go up super-exponentially. If you think a simple reinforcement-learning Pac-Man program isn’t enough for much moral value, and that one needs more sensory or processing complexity, then you are allowing that the values of preferences and reward can scale depending on other features of the system. And once you allow that, it is plausible that parallel reinforcement/decision processes in a large mind will get a higher multiplier (i.e. not only will there be more neural-equivalent processes doing reinforcement updating, but each individual one will get a larger multiplier due to the system it is embedded in).
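One way to cash out that scaling claim (my own gloss; the particular encodings below are assumptions, not anything specified above): an n-bit system has exponentially many states, but the magnitudes those bits can denote depend on the encoding and can grow far faster, e.g. with a floating-point-style exponent field.

```latex
% States vs. representable magnitudes for an n-bit system (illustrative encodings).
\[
  \text{number of states} = 2^{n}, \qquad
  \text{largest value as a plain binary integer} = 2^{n} - 1, \qquad
  \text{largest magnitude with an } e\text{-bit exponent field} \approx 2^{\,2^{\,e-1}}.
\]
```

On that reading, the magnitudes a fixed budget of bits can denote grow doubly exponentially in the bits devoted to the exponent, which is one sense in which representable values can outrun the merely exponential growth in states.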
The conclusion that no existing animal will be maximally efficient at producing welfare according to a fairly impartial hedonistic utilitarianism is on much firmer ground than the conclusion that the maximally efficient production system on that ethical theory would involve exceedingly tiny minds rather than vast ones or enhanced medium-size ones, or complex systems overlapping these scales.
Small insects (the most common) have on the order of 10,000 neurons. One estimate puts the global insect population at 10^18, implying 10^22 insect neurons; humans collectively have about 10^21 neurons. However, smaller organisms tend to have smaller cells, so if you go by mass, humans might actually be dominant. Of course there are other groups of wild and domestic animals, but this gives you some idea.
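For what it’s worth, the arithmetic behind those figures, using the order-of-magnitude estimates quoted above (the per-brain human figure of ~8.6 × 10^10 neurons is my added assumption):

```python
# Sanity check of the neuron-count comparison, using the rough estimates quoted above.
insects = 1e18               # one published estimate of the global insect population
neurons_per_insect = 1e4     # small insects: on the order of 10,000 neurons
humans = 7e9                 # rough world population
neurons_per_human = 8.6e10   # ~86 billion neurons per human brain (added assumption)

insect_neurons = insects * neurons_per_insect   # ~1e22
human_neurons = humans * neurons_per_human      # ~6e20, i.e. roughly 1e21

print(f"insect neurons ~ {insect_neurons:.0e}")
print(f"human neurons  ~ {human_neurons:.0e}")
print(f"insect:human ratio ~ {insect_neurons / human_neurons:.0f}:1")
```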