With digital sentiences, we don’t have homology. They aren’t based in brains, and they evolved by a different kind of selective process.
This assumes that the digital sentiences we are discussing are LLM-based. That is certainly a likely near-term possibility, maybe even occurring already. People are already experimenting with how conscious LLMs are and how they could be made more conscious.
In the future, however, many more things are possible. Digital people based on emulations of the human brain are being worked on, and within the next few years we'll have to decide as a society what regulation to put in place around them. Such beings would have a great deal of homology with human brains, depending on the accuracy of the emulation.
Yes, brain emulation would be different from LLMs, and I'd have a lot more confidence that, if we were doing it well, the experience inside would be like ours. I still worry that we might not realize we're doing it slightly wrong, creating private suffering that isn't expressed, and that we'd be incentivized to ignore that possibility, but much less so than with novel architectures. To be morally comfortable with this we'd also have to ensure that people didn't experiment willy-nilly with new architectures until we understand what they would feel, if we ever do.