Another weakness of the model is that it doesn't seem particularly appropriate for modelling some types of longtermist interventions. In particular, it's not ideal for capturing the dynamics of interventions that aim to push the world into "attractor states" (states of the world such that, once the world enters that state, it tends to stay there for an extremely long time). Since these are possibly the best candidates for interventions that manage to avoid the "washing out" trap, it would be useful to explore other models to understand these interventions in more depth.
It's worth noting that Tarsney's The epistemic challenge to longtermism, which you mention, deals explicitly with attractor states, and so I think better captures this stronger case for longtermism.
I agree! I think you're pointing towards a useful way of carving up this landscape. My framework is good for modelling "ordinary" actions that don't involve attractor states, where actions are more likely to wash out and longtermism becomes harder to defend (but may still win out over neartermist interventions under the right conditions). Then, Tarsney's framework is a useful way of thinking about attractor states, where the case for longtermism becomes stronger but is still not a given.
I'm unsure how many proposed longtermist interventions don't rely on the concept of attractor states. For example, in Greaves and MacAskill's The Case for Strong Longtermism, they class mitigating (fairly extreme) climate change as an intervention that steers away from a "non-extinction" attractor state:
A sufficiently warmer climate could result in a slower long-run growth rate (Pindyck 2013, Stern 2006), making future civilisation poorer indefinitely; or it could mean that the planet cannot in the future sustain as large a human population (Aral 2014); or it could cause unrecoverable ecosystem losses, such as species extinction and destruction of coral reefs (IPCC 2014, pp.1052-54)
Perhaps Nick Beckstead's work deviates from the concept of attractor states? I haven't looked at his work very closely, so am not too sure. Do you feel that "ordinary" (non-attractor state) longtermist interventions are commonly put forward in the longtermist community?
The only intervention in Greaves and MacAskill's paper that doesn't rely on an attractor state is "speeding up progress":
Suppose, for instance, we bring it about that the progress level that would otherwise have been realised in 2030 is instead realised in 2029 (say, by hastening the advent of some beneficial new technology), and that progress then continues from that point on just as it would have if the point in question had been reached one year later. Then, for as long as the progress curve retains a positive slope, people living at every future time will be a little bit better off than they would have been without the intervention. In principle, these small benefits at each of an enormous number of future times could add up to a very large aggregate benefit.
I'd be interested to hear your thoughts on what you think the forecasting error function would be in this case. My (very immediate and perhaps incorrect) thought is that speeding up progress doesn't fall prey to a noisier signal over time. Instead, I'm thinking it would be constant noisiness, although I'm struggling to articulate why. I guess it's something along the lines of "progress is predictable, and we're just bringing it forward in time, which makes it no less predictable".
Overall, thanks for writing this post; I found it interesting!
I'd be interested to hear your thoughts on what you think the forecasting error function would be in this case. My (very immediate and perhaps incorrect) thought is that speeding up progress doesn't fall prey to a noisier signal over time. Instead, I'm thinking it would be constant noisiness, although I'm struggling to articulate why. I guess it's something along the lines of "progress is predictable, and we're just bringing it forward in time, which makes it no less predictable".
My intuition is that there'd be increasing noisiness over time (in line with dwebb's model). I can think of several reasons why this might be the case. (But I wrote this comment quickly, so I have low confidence that what I'm saying makes sense or is explained clearly.)
1) The noisiness could increase because of a speeding-up-progress-related version of the "exogenous nullifying events" described in Tarsney's paper. (Tarsney's version focused on nullifying existential catastrophes or their prevention.) To copy and then adapt Tarsney's descriptions to this situation, we could say:
Negative Progress-Related ENEs are events in the far future (i.e., after t = 0) that, if a "speeding up" action has occurred, would put the world back onto a slower progress trajectory (as if the initial "speeding up" action hadn't occurred). An example could be a war or a progress-slowing pathogen or meme. (If the "speeding up" action hasn't occurred, the Negative Progress-Related ENE has no effect.)
Positive Progress-Related ENEs are events in the far future that, if a "speeding up" action hasn't occurred, would put the world onto a faster progress trajectory (as if the initial "speeding up" action had occurred). An example could be someone doing the same "speeding up" action that had been considered, or something that achieves similar results, analogous to how the counterfactual impact of you inventing something is probably smaller than the total impact of the invention's existence. (If the "speeding up" action has occurred, the Positive Progress-Related ENE has no effect.)
As Tarsney writes:
What negative and positive ENEs have in common is that they "nullify" the intended effect of the longtermist intervention. After the first ENE occurs, it no longer matters (at least in expectation) whether the world was in state S at t = 0, since the current state of the world no longer depends on its state at t = 0.
2) The noisiness could also increase because, the further we go into the future, the less we know about what's happening, and so the more likely it is that speeding up progress (or making any other change) would actually have bad effects.
Analogously, I feel fairly confident that my making myself a sandwich won't cause major negative ripple effects within 2 minutes, but not within 1,000 years.
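To put reason 1 in rough quantitative terms (this is my own sketch, layered on an assumption Tarsney uses for his original ENEs, if I recall his paper correctly): suppose Progress-Related ENEs of either kind arrive at a roughly constant combined rate $r$. Then the probability that no ENE has occurred by time $t$ is $e^{-rt}$, so the expected surviving benefit of the speed-up at time $t$ is the nominal per-period benefit $b$ scaled by that factor, and the total expected benefit is bounded:

$$\int_0^\infty b \, e^{-rt} \, dt = \frac{b}{r}$$

So even a small ENE rate caps the long-run value of speeding up progress, which looks more like increasing noisiness over time than constant noisiness.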
Yeah this all makes sense, thanks.

Do you feel that "ordinary" (non-attractor state) longtermist interventions are commonly put forward in the longtermist community? [...] The only intervention in Greaves and MacAskill's paper that doesn't rely on an attractor state is "speeding up progress"
As mentioned in another comment, my impression is that, in line with this, the most commonly proposed longtermist priority other than changing the likelihood of various attractor states is speeding up progress. Last year, I drafted a post that touched on this and related issues, and I really do plan to finally publish it soon (it's sat around unedited for a long time), but here's the most relevant section in the meantime, in case it's of interest to people:
---
Beckstead writes that our actions might, instead of or in addition to "slightly or significantly alter[ing] the world's development trajectory", speed up development:
In many cases, ripple effects from good ordinary actions speed up development. For example, saving some child's life might cause his country's economy to develop very slightly more quickly, or make certain technological or cultural innovations arrive more quickly.
Technically, I think that increases in the pace of development are trajectory changes. At the least, they would change the steepness of one part of the curve. We can illustrate this with the following graph, where actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
This seems to be the sort of picture Benjamin Todd has in mind when he writes:
One way to help the future we don't think is a contender is speeding it up. Some people who want to help the future focus on bringing about technological progress, like developing new vaccines, and it's true that these create long-term benefits. However, we think what most matters from a long-term perspective is where we end up, rather than how fast we get there. Discovering a new vaccine probably means we get it earlier, rather than making it happen at all.
However, I think speeding up development could also affect "where we end up", for two reasons.
Firstly, if it makes us spread to the stars earlier and faster, this may increase the amount of resources we can ultimately use. We can illustrate this with the following graph, where again actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
Secondly, more generally, speeding up development could affect which trajectory we're likely to take. For example, faster economic growth might decrease existential risk by reducing international tensions, or increase it by allowing us less time to prepare for and adjust to each new risky technology. Arguably, this might be best thought of as a way in which speeding up development could, as a side effect, affect other types of trajectory change.
---
(The draft post was meant to be just "A typology of strategies for influencing the future", rather than an argument for one strategy over another, so I just tried to clarify possibilities and lay out possible arguments. If I were instead explaining my own views, I'd give more space to arguments along the lines of Benjamin Todd's.)
Thanks for this, Michael. I'd be very interested to read this post when you publish it, especially as my career has taken a (potentially temporary) turn in the general direction of speeding up progress, rather than towards safety. I still feel that Ben Todd and co are probably right, but I want to read more.
Also, a relevant part from Greaves and MacAskill's paper:
Just how much of an improvement [speeding up progress] amounts to depends, however, on the shape of the progress curve. In a discrete-time model, the benefit of advancing progress by one time period (assuming that at the end of history, one thereby gets one additional time period spent in the "end state") is equal to the duration of that period multiplied by the difference between the amounts of value that are contained in the first and last periods. Therefore, if value per unit time is set to plateau off at a relatively modest level, then the gains from advancing progress are correspondingly modest. Similarly, if value per unit time eventually rises to a level enormously higher than that of today, then the gains from advancing progress are correspondingly enormous.
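To spell out the arithmetic in that passage (the notation here is mine, not Greaves and MacAskill's): with periods of duration $\Delta$, value $v_t$ accruing in period $t$, and history ending after period $T$, advancing progress by one period turns the value stream $(v_1, v_2, \ldots, v_T)$ into $(v_2, v_3, \ldots, v_T, v_T)$, i.e. everything arrives one period earlier and one extra period is spent in the end state. The net gain is

$$\Delta \left[ \left( \sum_{t=2}^{T} v_t + v_T \right) - \sum_{t=1}^{T} v_t \right] = \Delta \, (v_T - v_1)$$

that is, the duration of a period times the gap between end-state value and present value. A modest plateau therefore implies modest gains, and an enormously valuable end state implies enormous gains, just as the quote says.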
I think this is a very good point, and it's helping shape my ideas on this topic. Thank you!
I guess it's true that most/all candidates for longtermist interventions that I've seen are based on attractor states. At the same time, it's useful to think about whether we might be missing any potential longtermist interventions by focusing on these attractor state cases. One example that might plausibly fit into this category is an intervention that broadly improves institutional decision-making. Perhaps here, interventions plausibly have a long-run positive impact on future value, but we are worried that this will be "washed out" by other factors. It's not clear that there's an obvious attractor state involved. (Note that I'm not very confident in this; I could easily be persuaded otherwise. Maybe people advocate for improving institutional decision-making on the basis that it reduces the risk of many different bad attractor states.)
Thinking about this type of intervention, the results of my model can be read either optimistically or pessimistically from the longtermist's perspective, depending on your beliefs about the nature of the parameters (a toy numerical sketch contrasting the two readings follows below):
Optimistic: there are potentially cases where a longtermist intervention that's not based on an attractor state can have very large long-run benefits. If forecasting error increases sub-linearly or just relatively slowly, then an intervention can be good from a longtermist perspective even if there's no attractor state involved.
Pessimistic: for lots of plausible parameter values (e.g. high alpha, linearly increasing forecasting error), long-run benefits wash out. If this is true across a wide range of potential interventions, then attractor states are perhaps the only way out of this trap.
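Here's a minimal numerical sketch of that contrast. To be clear, this isn't the model from the post; it uses a stand-in shrinkage rule I'm assuming purely for illustration. If the effect at horizon t has unit prior variance and our forecast of it carries Gaussian noise with standard deviation sigma(t), Bayesian shrinkage multiplies the nominal per-period benefit by 1 / (1 + sigma(t)^2):

```python
import numpy as np

def expected_cumulative_benefit(sigma, b=1.0, horizon=10_000):
    """Sum per-period benefits b, each shrunk by 1 / (1 + sigma(t)**2).

    That shrinkage factor is the posterior weight on a unit-variance
    prior when the forecast at horizon t has noise variance sigma(t)**2.
    """
    t = np.arange(1, horizon + 1)
    return float(np.sum(b / (1.0 + sigma(t) ** 2)))

# Sub-linear error growth: per-period terms fall off like 1/t, so the
# cumulative benefit keeps growing (logarithmically) with the horizon.
sublinear = expected_cumulative_benefit(lambda t: 0.1 * np.sqrt(t))

# Linear error growth: terms fall off like 1/t**2, so the sum converges
# and almost all expected value is near-term (the "washing out" case).
linear = expected_cumulative_benefit(lambda t: 0.1 * t)

print(f"sub-linear error growth: {sublinear:6.1f}")  # roughly 460
print(f"linear error growth:     {linear:6.1f}")  # roughly 15
```

With linearly growing error the far future contributes almost nothing, matching the pessimistic reading; with square-root growth, long-run benefits can still dominate given a long enough horizon, matching the optimistic one.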
A relevant Beckstead post is A Proposed Adjustment to the Astronomical Waste Argument.

I think we could characterise this not as "deviating from the concept of attractor states", but as highlighting that the best actions to take for improving the long-term future won't necessarily focus on existential catastrophes or even any form of attractor state. [Update: I now think this comment was misleading/inaccurate, for the reason dwebb points out in a reply.]
I.e., not as arguing against ultimately focusing on attractor states, but as arguing against focusing on them without even considering alternatives.
But note that this post predates the "attractor states" concept, so the above is something I'm reading into the post rather than something the post directly says.
I think this is a really good post.
Maybe you (Jack) already read this and had it in mind.
I hadn't seen this post before, but to me it sounds like Beckstead's arguments are very much in line with the idea of attractor states, rather than deviating from it. A path-dependent trajectory change is roughly the same as moving from one attractor state to another, if I've understood correctly.
The argument he is making is that extinction/existential risks are not the only form of attractor state, which I agree with.
Whoops, yeah, having just re-skimmed the post, I now think that your comment is a more accurate portrayal of Beckstead's post than mine was. Here's a key quote from that post:
Bostrom does have arguments that speeding up development and providing proximate benefits are not as important, in themselves, as reducing existential risk. And these arguments, I believe, have some plausibility. Since we don't have an argument that reducing existential risk is better than trying to create other positive [path dependent] trajectory changes and an existential catastrophe is one type of [path dependent] trajectory change, it seems more reasonable for defenders of the astronomical waste argument to focus on [path dependent] trajectory changes in general.
I haven't read that post but will definitely have a look, thanks.

I'd also be interested in dwebb's thoughts on this.
I'll share my own reactions in a few comment replies.
I think it's true that both Greaves and MacAskill and other longtermists mostly focus on attractor states (and even more specifically on existential catastrophes), and that the second leading contender is something like "speeding up progress".
I think this is an important enough point that it would've been better if dwebb's final paragraph had instead been right near the start. It seems to me like that caveat is key to understanding how the rest of the post connects to other arguments for, against, and within longtermism. (But as I noted elsewhere, I still think this post is useful!)