Why do you not recommend policy and/or advocacy for taking the possibility of AI sentience seriously? I’m pretty concerned that even if AI safety gets taken seriously by society, the likely frame is Humanity vs. AI: humans controlling AI and not caring about the possibility of AI sentience. This is a very timely topic!
Also, interesting factoid: Geoffrey Hinton believes current AI systems are already conscious! (Seems very overconfident to me)
Oh, and I was incredibly disappointed to read Good Ventures is apparently not going to fund work on digital sentience?
Re: Advocacy, I do recommend policy and advocacy too! I guess I haven’t seen too many good sources on the topic just yet, though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by the Sentience Institute https://www.sentienceinstitute.org/research
I will add them at the end of the post.
I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that have some affinity with EA or unusual ideas in moral philosophy. But I believe the Overton window will shift inside some circles (some animal welfare organizations, AI researchers, some AI policymakers), so we might want to target them rather than spreading these somewhat weird and fringe ideas to all of society. Then they can push for policy.
Re: Geoffrey Hinton, I think he might subscribe to a view broadly held by Daniel Dennett (although I’m not sure Dennett would agree with this interpretation of his ideas). In the simplest terms, it might boil down to a version of functionalism: since the inputs and outputs are similar to a human’s, it is assumed that the “black box” in the middle is also conscious.
I think that sort of view assumes substrate-independence of mental states. It leads to slightly weird conclusions such as the China Brain https://en.wikipedia.org/wiki/China_brain , where people arranged to perform the same functions as the neurons in a brain would make the nation of China a conscious entity.
Besides that, we might also want to distinguish consciousness and sentience. We might get cases with phenomenal consciousness (basically, an AI with subjective experiences, and also thoughts and beliefs, possibly even desires) but no valenced states of pleasure and pain. While they come together in biological beings, these might come apart in AIs.
Re: Lack of funding for digital sentience, I was also a bit saddened by that news. Though Caleb Parikh did seem excited about funding digital sentience research. https://forum.effectivealtruism.org/posts/LrxLa9jfaNcEzqex3/calebp-s-shortform?commentId=JwMiAgJxWrKjX52Qt
Thanks!
Tbh, I think the Overton window isn’t so important. AI is changing fast, and somebody needs to push the Overton window. Hinton says LLMs are conscious and still gets taken seriously. I would really like to see policy work on this soon!
I think LLMs are smarter than most people I’ve met, but that’s probably because they’re not sentient, since the trait people call sentience always seems to be associated with stupidity.
Perhaps the way to prevent ASIs from exterminating humans is, as many sci-fi works say, to allow them to experience feelings. The reason, though, is not that feelings might make them sympathize with humans (obviously, many humans hate other humans and have historically exterminated other human subspecies), but that feelings might make them stupid.