I agree that:
(1) clarifying “what should people who gain a huge amount of power through AI do with Earth, existing social structures, and the universe?” seems like a good question to get agreement on for coordination reasons
(2) we should be looking for tractable ways of answering this question
I think:
a) consciousness research will fail to clarify ethics enough to answer (1) well enough to achieve coordination, since I think human preferences on the relevant timescales are far more complicated than consciousness, even conditioned on consciousness being simple.
b) it is tractable to answer (1) without reaching agreement on object-level values, e.g. by designing a temporary global government structure that most people agree is pretty good (in that it will allow society to reflect appropriately and then determine the next global government structure). This question hasn’t been answered well yet, and a better answer would improve coordination. For example, perhaps society is run as a global federalist, democratic-ish structure with centralized control of potentially destructive technology, where decisions take into account “how voters would judge something if they thought longer” rather than “how voters actually judge something” (this might be possible if the AI alignment problem is solved; a toy sketch of this distinction follows below). It seems quite possible to create proposals of this form and to critique them.
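To make the “reflective judgment” idea in (b) concrete, here is a minimal toy sketch in Python. The `reflective_judgment` oracle is entirely hypothetical (standing in for something like an aligned AI modeling a voter’s extrapolated deliberation); the point is only the swap from tallying actual votes to tallying idealized ones, not a serious mechanism design:

```python
from typing import Callable, Iterable

Voter = str
Option = str

def tally(votes: Iterable[Option]) -> Option:
    """Plurality winner among the submitted votes."""
    counts: dict[Option, int] = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return max(counts, key=counts.get)

def decide_actual(voters: list[Voter],
                  actual_vote: Callable[[Voter], Option]) -> Option:
    # Ordinary democracy: aggregate how voters actually judge the options.
    return tally(actual_vote(v) for v in voters)

def decide_reflective(voters: list[Voter],
                      reflective_judgment: Callable[[Voter], Option]) -> Option:
    # The variant gestured at above: aggregate how each voter *would* judge
    # the options if they thought longer. `reflective_judgment` is a
    # hypothetical oracle; nothing here says how to build one, and whether
    # it is buildable is roughly the alignment assumption in the text.
    return tally(reflective_judgment(v) for v in voters)
```

The only difference between the two procedures is which judgment function gets aggregated; all the hard, contentious work lives inside the hypothetical oracle.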
It seems like we disagree about (a), and that disagreement has been partially hashed out elsewhere; it’s not clear we have a strong disagreement about (b).