EA-style discussion about AI seems to dismiss out of hand the possibility that AI might be sentient. I can’t find an example, but the possibility seems generally scoffed at in the same tone in which people dismiss Skynet and killer robot scenarios. Bostrom’s simulation hypothesis, however, is broadly accepted as, at the very least, an interestingly plausible argument.
These two stances seem entirely incompatible: if silicon can create a whole world containing sentient minds, why can’t it create just the minds, with no need for the framing device? It is possible that sentience does not emerge unless you very precisely mimic natural (or “natural”) evolutionary pressures, but this seems unlikely. It’s likewise possible that something about the process by which we expect to create AI doesn’t allow for sentience, but in that case I think the burden of proof is on the people making the argument to identify that feature and argue for it.
The strongest argument I can think of off the top of my head is that, if we expect future AI created by something resembling modern machine learning methods to have a chance at sentience, we should likewise expect, say, worm-equivalent AIs to have one too. Is C. elegans sentient? Is OpenWorm? If you answered yes to the first and no to the second, what is OpenWorm missing that C. elegans has?
I think there are many examples of EAs thinking about the possibility that AI might be sentient by default. Some I can think of off the top of my head:
Brian Tomasik has written about why we might get sentient computer programs by default, e.g. https://reducing-suffering.org/why-your-laptop-may-be-marginally-sentient/
I’ve occasionally referred to this in posts of mine, e.g. http://shlegeris.com/2016/11/26/research
Eliezer Yudkowsky is worried about the related concern that AI systems might simulate humans in morally relevant ways: https://arbital.com/p/mindcrime/?l=18h
Paul Christiano has written about whether unaligned AI is morally valuable: https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e
I don’t think people are disputing that it would be theoretically possible for AIs to be conscious; I think they’re claiming that the AI systems we actually end up with won’t be.
Thanks for the links. I googled briefly before writing this to check my memory and couldn’t find anything. I think what formed my impression was that even in very detailed conversations and writing about AI, there was, as far as I could tell, no mention or implicit acknowledgement of the possibility by default. On reflection, though, I’m not sure I would expect it to be mentioned even if people did think it was likely.
Many years ago, Eliezer Yudkowsky shared a short story I wrote (related to AI sentience) with his Facebook followers. The story isn’t great—I bring it up here only as an example of people being interested in these questions.