I thought this piece was good. I agree that MCE work is likely quite high impact—perhaps around the same level as X-risk work—and that it has been generally ignored by EAs. I also agree that it would be good for there to be more MCE work going forward. Here’s my 2 cents:
You seem to be saying that AIA is a technical problem and MCE is a social problem. While I think there is something to this, I think there are very important technical and social sides to both of these. Much of the work related to AIA so far has been about raising awareness about the problem (e.g., the book Superintelligence), and this is more a social solution than a technical one. Also, avoiding a technological race for AGI seems important for AIA, and this too is more a social problem than a technical one.
For MCE, the two best things I can imagine (that I think are plausible) are both technical in nature. First, I expect clean meat will lead to the moral circle expanding more to animals. I really don’t see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to. Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, whether it was experiencing joy or suffering? The only way would be if we develop theories of consciousness that we have high confidence in. But right now we’re very limited in studying consciousness, because our tools for interfacing with the brain are crude. Advanced neurotechnologies could change that—they could allow us to test hypotheses about consciousness. Again, developing these technologies would be a technical problem.
Of course, these are just the first ideas that come into my mind, and there very well may be social solutions that could do more than the technical solutions I mentioned, but I don’t think we should rule out the potential role of technical solutions, either.
Thanks for the comment! A few of my thoughts on this:
Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.
If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.
Much of the work related to AIA so far has been about raising awareness about the problem (e.g., the book Superintelligence), and this is more a social solution than a technical one.
Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that’s been done to get media coverage and widespread attention without the technical attention to detail of Bostrom’s book.
I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.
There’s also social work in coordinating the AIA community.
First, I expect clean meat will lead to the moral circle expanding more to animals. I really don’t see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.
Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can with social change. Clean meat research probably just gets us clean meat sooner, rather than making technological progress happen that wouldn’t have happened otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.
Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.
Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious
I disagree that “many people don’t think animals are conscious.” I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, “Farmed animals have roughly the same ability to feel pain and discomfort as humans,” and presumably even more think they have at least some ability.
Advanced neurotechnologies could change that—they could allow us to test hypotheses about consciousness.
I’m fairly skeptical of this personally, partly because I don’t think there’s a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)
I’m fairly skeptical of this personally, partly because I don’t think there’s a fact of the matter when it comes to whether a being is conscious.
I would guess that increasing understanding of cognitive science would generally increase people’s moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you’ll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.
Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, whether it was experiencing joy or suffering? The only way would be if we develop theories of consciousness that we have high confidence in. But right now we’re very limited in studying consciousness, because our tools for interfacing with the brain are crude. Advanced neurotechnologies could change that—they could allow us to test hypotheses about consciousness. Again, developing these technologies would be a technical problem.
I think that’s right. Specifically, I would advocate consciousness research as a foundation for principled moral circle expansion. I.e., if we do consciousness research correctly, the equations themselves will tell us how conscious insects are, whether algorithms can suffer, how much moral weight we should give animals, and so on.
On the other hand, if there is no fact of the matter as to what is conscious, we’re headed toward a very weird, very contentious future of conflicting/incompatible moral circles, with no ‘ground truth’ or shared principles to arbitrate disputes.
Edit: I’d also like to thank Jacy for posting this; I find it a notable contribution to the space, and clearly a product of a lot of hard work and deep thought.