I guess I slightly worry that these topics might still seem too fringe, too niche, or too weird outside of circles that already have some affinity with EA or unusual ideas in moral philosophy. But I believe the Overton window will shift within some circles (some animal welfare organizations, AI researchers, some AI policymakers), so we might want to target them rather than spreading these somewhat fringe ideas to all of society. Then they can push for policy.
Re: Geoffrey Hinton, I think he might subscribe to a view broadly associated with Daniel Dennett (although I’m not sure Dennett would agree with this interpretation of his ideas). In the simplest terms, it might boil down to a version of functionalism: since the inputs and outputs are similar to a human’s, it is assumed that the “black box” in the middle is also conscious.
I think that sort of view assumes substrate-independence of mental states. It leads to slightly weird conclusions such as the China Brain thought experiment https://en.wikipedia.org/wiki/China_brain , where, if the people of China were arranged to perform the same functions as the neurons in a brain, the nation of China as a whole would become a conscious entity.
Besides that, we might also want to distinguish consciousness from sentience. We might get cases of phenomenal consciousness (basically, an AI with subjective experiences, and also thoughts and beliefs, possibly even desires) but no valenced states of pleasure and pain. While the two come together in biological beings, they might come apart in AIs.
Tbh, I think the Overton window isn’t so important. AI is changing fast, and somebody needs to push the Overton window. Hinton says LLMs are conscious and still gets taken seriously. I would really like to see policy work on this soon!
Re: Advocacy, I do recommend policy and advocacy too! I just haven’t seen many good sources on the topic yet. Though I just remembered two: Animal Ethics https://www.animal-ethics.org/strategic-considerations-for-effective-wild-animal-suffering-work/ and some blog posts by Sentience Institute https://www.sentienceinstitute.org/research
I will add them at the end of the post.
Re: Lack of funding for digital sentience, I was also a bit saddened by that news. Though Caleb Parikh did seem excited about funding digital sentience research. https://forum.effectivealtruism.org/posts/LrxLa9jfaNcEzqex3/calebp-s-shortform?commentId=JwMiAgJxWrKjX52Qt
Thanks!