I haven’t thought very deeply about this, but my first intuition is that the most compelling reason to expect to have an impact that predictably lasts longer than several hundred years without being washed out is the possibility of some sort of “lock-in”—technology that allows values and preferences to be more stably transmitted into the very long-term future than current technology allows. For example, the ability to program space probes with instructions for creating the type of “digital life” we would morally value, with error-correcting measures to prevent drift, would in my mind count as a technology that enables effective lock-in.
A lot of people may act as if we can’t impact anything post-transformative AI because they believe technology that enables lock-in will be built very close in time after transformative AI (since TAI would likely cause R&D towards these types of tech to be greatly accelerated).
[Kind-of thinking aloud; bit of a tangent from your AMA]
Yeah, that basically matches my views.
I guess what I have in mind is that some people seem to:
- round up “most compelling reason” to “only reason”
- not consider the idea of trying to influence lock-in events that occur after a TAI transition, in ways other than influencing how the TAI transition itself occurs
  - Such ways could include things like influencing political systems in long-lasting ways
- round “substantial chance that technology that enables lock-in will be built very close in time after TAI” up to “it’s basically guaranteed that...”
I think what concerns me about this is that I get the impression many people are doing this without noticing it. It seems like maybe some thought leaders recognised that there were questions to ask here, thought about the questions, and formed conclusions, but then other people just got a slightly simplified version of the conclusions without noticing there’s even a question to ask.
A counterpoint is that I think the ideas of “broad longtermism”, and some ideas that people like MacAskill have raised, kind-of highlight the questions I’m suggesting should be highlighted. But even those ideas seem to often be about what to do given the premise that a TAI transition won’t occur for a long time, or how to indirectly influence how a TAI transition occurs. So I think they’re still not exactly about the sort of thing I’m talking about.
To be clear, I do think we should put more longtermist resources towards influencing potential lock-in events prior to or right around the time of a TAI transition than towards non-TAI-focused ways of influencing events after a TAI transition. But it seems pretty plausible to me that some longtermist resources should go towards other things, and it also seems good for people to be aware that a debate could be had on this.
(I should probably think more about this, check whether similar points are already covered well in some existing writings, and if not write something more coherent than these comments.)