FWIW, I completely agree with what you're saying here. If you seriously engage with consciousness research, and especially with what we Westerners tend to label a "sense of self", it quickly becomes infeasible to hold the position that the current direction of AI development (e.g. towards AI agents) will not lead to AIs having self-models.
For all intents and purposes this holds across most physicalist or non-dual theories of consciousness, which are the only viable ones unless you're willing to bite some really sour apples.
There's a classic "what are we getting wrong?" question in EA, and I think it's extremely likely that in 10 years we will look back and say, "wow, what were we doing here?".
I think it's a lot better to think in terms of systemic alignment: identify the properties we want from the collective intelligences we participate in, such as our information networks and our institutional decision-making procedures, and think about how to optimise those for resilience and truth-seeking. If certain AIs deserve moral patienthood, then that truth will naturally arise from such structures.
(Hot take) Individual AI alignment might honestly be counter-productive from this perspective.