My comment on Ajeya Cotra’s AMA, from Feb 2021 (so probably I’d write it differently today):
“[I’m not sure if you’ve thought about the following sort of question much. Also, I haven’t properly read your report; let me know if this is covered in there.]
I’m interested in a question along the lines of “Do you think some work done before TAI is developed matters in a predictable way (i.e., better than 0 value in expectation) for its effects on the post-TAI world, in ways that don’t just flow through how the work affects the pre-TAI world or how the TAI transition itself plays out? If so, to what extent? And what sort of work?”
An example to illustrate: “Let’s say TAI is developed in 2050, and the ‘TAI transition’ is basically ‘done’ by 2060. Could some work to improve institutional decision-making be useful in terms of how it affects what happens from 2060 onwards, and not just via reducing x-risk (or reducing suffering etc.) before 2060 and improving how the TAI transition goes?”
But I’m not sure it’s obvious what I mean by the above, so here’s my attempt to explain:
The question of when TAI will be developed[1] is clearly very important to a whole bunch of prioritisation questions. One reason is that TAI (and probably the systems leading up to it) will very substantially change many aspects of how society works. Specifically, Open Phil has defined TAI as “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution” (and Muehlhauser has provided some more detail on what is meant by that).
But I think some EAs implicitly assume something stronger, along the lines of:
The expected moral value of actions we take now is entirely based on those actions’ effects on what happens before TAI is developed and those actions’ effects on the development, deployment, etc. of TAI. That is, the expected value of the actions we take now is not partly based on how the actions affect aspects of the post-TAI world in ways unrelated to how TAI is developed, deployed, etc. This is either because we just can’t at all predict those effects or because those effects wouldn’t be important; the world will just be very shaken up and perhaps unrecognisable, and any effects of pre-TAI actions will be washed out unless they affect how the TAI transition occurs.
E.g., things we do now to improve institutional decision-making or reduce risks of war can matter inasmuch as they reduce risks before TAI and reduce risks from TAI (and maybe also reduce actual harms, increase benefits, etc.). But they’ll have no even-slightly-predictable or substantial effect on decision-making or risks of war in the post-TAI world.
But I don’t think that necessarily follows from how TAI is defined. E.g., various countries, religions, ideologies, political systems, technologies, etc., existed both before the Industrial Revolution and for decades/centuries afterwards. And it seems like some pre-Industrial-Revolution actions (e.g., those of people who pushed for democracy or the abolition of slavery) had effects on the post-Industrial-Revolution world that were probably predictably positive in advance and that weren’t just about affecting how the Industrial Revolution itself occurred.
(Though it may still have been extremely useful for people taking those actions to know that, when, where, and how the IR would occur, e.g. because then they could push for democracy and abolition in the countries that were about to become much more influential and powerful.)
So I’m tentatively inclined to think that some EAs are assuming that short timelines push against certain types of work more than they really do, and that certain (often “broad”) interventions could in expectation be useful for influencing the post-TAI world in a relatively “continuous” way. In other words, I’m inclined to think there might be less of an extremely abrupt “break” than some people seem to think, even if TAI occurs. (Though it’d still be quite extreme by many standards, just as the Industrial Revolution was.)
[1] Here I’m assuming TAI will be developed, which is questionable, though it seems to me pretty much guaranteed unless some existential catastrophe occurs beforehand.”