Yes, brain emulation would be different from LLMs, and I'd have a lot more confidence that, if we were doing it well, the experience inside would be like ours. I still worry that we might not realize we're doing it slightly wrong, creating private suffering that isn't expressed, and that we'd be incentivized to ignore that possibility, but much less so than with novel architectures. To be morally comfortable with this, we'd also have to ensure that people didn't experiment willy-nilly with new architectures until we understood what those architectures would feel (if we ever can).