I’m not particularly well informed about current EA discourse on AI alignment, but I imagine that two possible strategies are

1. accelerating alignment research and staying friendly with the big AI companies
2. getting governments to slow AI development in a worldwide-coordinated way, even if this angers people at AI companies.
Yudkowsky’s article helps push on the latter approach. Making the public and governments more worried about AI risk seems to me the most plausible way of slowing it down. If more people in the national-security community worry about AI risks, these issues could get a lot more attention, along with the possibility of policies, like limiting total computing power for AI training, that only governments could pull off.
I expect a lot of AI developers would be angry about getting the public and governments more alarmed, but if the effort to raise alarm works well enough, then the AI developers will have to comply. On the other hand, there’s also a possible “boy who cried wolf” scenario: AI progress continues, nothing that bad happens for a few years, and then people assume the doomsayers were overreacting—making it harder to ring alarm bells the next time.
Thanks. :)
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you’re proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between “conscious” and “unconscious” is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there’s less of a distinction between conscious and unconscious than we naively think, although I don’t know how this affects her moral circle.)
It’s not clear to me whether an illusion that “this rubber hand is part of my body” is more relevant to consciousness than a judgment that “this face is Jennifer Aniston”. I guess we’d have to propose detailed criteria for which judgments are relevant to consciousness and develop a better understanding of what these judgments look like in the brain.
I agree that such illusions seem important. :) But it’s plausible to me that it also matters at least somewhat if something matters to the system, even when there’s no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn’t contain any nontrivial representation that “I care about avoiding pain”. I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn’t matter at all. I guess without that higher-level representation, it’s harder to imagine ourselves in the nematode’s place, because whenever we think about the badness of pain, we’re doing so using that higher level.