I have some objections to the idea that groups will be “immortal” in the future, in the sense of never changing, dying, or rotting, and persisting over time in a roughly unchanged form, exerting consistent levels of power over a very long time period. To be clear, I do think AGI can make some forms of value lock-in more likely, but I want to distinguish a few different claims:
(1) is a future value lock-in likely to occur at some point, especially not long after human labor has become ~obsolete?
(2) is lock-in more likely if we perform, say, a century more of technical AI alignment research before proceeding?
(3) is it good to make lock-in more likely by, say, delaying AI by 100 years to do more technical alignment research before proceeding? (i.e., would doing this type of thing be good or bad?)
My quick and loose current answers to these questions are as follows:
(1) This seems plausible but unlikely to me in a strong form. Some forms of lock-in seem likely; I’m more skeptical of the more radical scenarios people have talked about.
(2) I suspect lock-in would become more likely in this case, but the marginal effect of more research would likely be pretty small.
(3) I am pretty uncertain about this question, but I lean towards being against deliberately aiming for this type of lock-in. I am inclined to this view for a number of reasons, but one reason is that this policy seems to make it more likely that we restrict innovation and experience system rot on a large scale, causing the future to be much bleaker than it otherwise could be. See also Robin Hanson’s post on world government rot.
The main counterargument is that, in this future, the groups in power could be immortal and digital minds will be possible.
See also: AGI and Lock-in