It strikes me as very unlikely that a rudimentary Pong-playing AI running on biological wetware is more sentient than a modern LLM running on digital hardware.
I’d argue that the overwhelming majority of the voting populace would find it easier to visualize future, more advanced biological wetware as potentially sentient. And regulations for one domain, if framed correctly, will influence the other. It seems to me that political willpower will be much easier to build if we start where public intuition is strongest and expand from there.
It strikes me as vastly more likely that the biological wetware used to run a Pong-playing AI is sentient than that the modern digital hardware used to run LLM inference is. That is, running the Pong-playing AI on brain organoids causes or involves the instantiation of phenomenally bound, valenced moments of experience, which need not contain any representational content or self-models related to Pong or the “Pong-playing AI”, or indeed, in this case, even play an essential functional role in the system seen as a “Pong-AI computer”. One’s likelihood estimates here (and what one thinks might be conscious!) are very sensitive to which theories or models of consciousness one considers plausible. Ideally we’d do some sort of robust minimaxing over our uncertainty and disagreement, applying the precautionary principle and recalling that humanity has a pretty dismal track record, until we hopefully understand the situation better.