Thanks for the comment! A few of my thoughts on this:
Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.
If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.
Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one.
Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that’s been done to get media coverage and widespread attention without the technical attention to detail of Bostrom’s book.
I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.
There’s also social work in coordinating the AIA community.
First, I expect clean meat will lead to the moral circle expanding more to animals. I really don’t see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.
Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can do with social change. Clean meat tech research probably just helps us get clean meat sooner instead of making the tech progress happen when it wouldn’t otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.
Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.
Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious.
I disagree that “many people don’t think animals are conscious.” I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, “Farmed animals have roughly the same ability to feel pain and discomfort as humans,” and presumably even more think they have at least some ability.
Advanced neurotechnologies could change that—they could allow us to potentially test hypotheses about consciousness.
I’m fairly skeptical of this personally, partly because I don’t think there’s a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)
I would guess that increasing understanding of cognitive science would generally increase people’s moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you’ll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.
Agreed.