I agree! I think you’re pointing towards a useful way of carving up this landscape. My framework is good for modelling “ordinary” actions that don’t involve attractor states, where actions are more likely to wash out and longtermism becomes harder to defend (but may still win out over neartermist interventions under the right conditions). Then, Tarsney’s framework is a useful way of thinking about attractor states, where the case for longtermism becomes stronger but is still not a given.
I’m unsure how many proposed longtermist interventions don’t rely on the concept of attractor states. For example, in Greaves and MacAskill’s The Case for Strong Longtermism, they class mitigating (fairly extreme) climate change as an intervention that steers away from a “non-extinction” attractor state:
A sufficiently warmer climate could result in a slower long-run growth rate (Pindyck 2013, Stern 2006), making future civilisation poorer indefinitely; or it could mean that the planet cannot in the future sustain as large a human population (Aral 2014); or it could cause unrecoverable ecosystem losses, such as species extinction and destruction of coral reefs (IPCC 2014, pp.1052-54)
Perhaps Nick Beckstead’s work deviates from the concept of attractor states? I haven’t looked at his work very closely so am not too sure. Do you feel that “ordinary” (non-attractor state) longtermist interventions are commonly put forward in the longtermist community?
The only intervention in Greaves and MacAskill’s paper that doesn’t rely on an attractor state is “speeding up progress”:
Suppose, for instance, we bring it about that the progress level that would otherwise have been realised in 2030 is instead realised in 2029 (say, by hastening the advent of some beneficial new technology), and that progress then continues from that point on just as it would have if the point in question had been reached one year later. Then, for as long as the progress curve retains a positive slope, people living at every future time will be a little bit better off than they would have been without the intervention. In principle, these small benefits at each of an enormous number of future times could add up to a very large aggregate benefit.
I’d be interested to hear what you think the forecasting error function would be in this case. My (very immediate and perhaps incorrect) thought is that speeding up progress doesn’t fall prey to a noisier signal over time. Instead, I’m thinking the noisiness would be constant, although I’m struggling to articulate why. I guess it’s something along the lines of “progress is predictable, and we’re just bringing it forward in time, which makes it no less predictable”.
Overall thanks for writing this post, I found it interesting!
I’d be interested to hear what you think the forecasting error function would be in this case. My (very immediate and perhaps incorrect) thought is that speeding up progress doesn’t fall prey to a noisier signal over time. Instead, I’m thinking the noisiness would be constant, although I’m struggling to articulate why. I guess it’s something along the lines of “progress is predictable, and we’re just bringing it forward in time, which makes it no less predictable”.
My intuition is that there’d be increasing noisiness over time (in line with dwebb’s model). I can think of a couple of reasons why this might be the case. (But I wrote this comment quickly, so I have low confidence that what I’m saying makes sense or is explained clearly.)
1) The noisiness could increase because of a speeding-up-progress-related version of the “exogenous nullifying events” described in Tarsney’s paper. (Tarsney’s version focused on nullifying existential catastrophes or their prevention.) To copy and then adapt Tarsney’s descriptions to this situation, we could say:
Negative Progress-Related ENEs are events in the far future (i.e., after t = 0) that, if a “speeding up” action has occurred, would put the world back onto a slower progress trajectory (as if the initial “speeding up” action hadn’t occurred). An example could be a war or a progress-slowing pathogen or meme. (If the “speeding up” action hasn’t occurred, the Negative Progress-Related ENE has no effect.)
Positive Progress-Related ENEs are events in the far future that, if a “speeding up” action hasn’t occurred, would put the world onto a faster progress trajectory (as if the initial “speeding up” action had occurred). An example could be someone else taking the same “speeding up” action that had been considered, or something that achieves similar results, analogous to how the counterfactual impact of you inventing something is probably smaller than the total impact of the invention’s existence. (If the “speeding up” action has occurred, the Positive Progress-Related ENE has no effect.)
As Tarsney writes:
What negative and positive ENEs have in common is that they “nullify” the intended effect of the longtermist intervention. After the first ENE occurs, it no longer matters (at least in expectation) whether the world was in state S at t = 0, since the current state of the world no longer depends on its state at t = 0.
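To make this concrete, here’s a minimal sketch, assuming (as in the simple version of Tarsney’s model) that ENEs arrive at constant rates; the symbols $r_n$, $r_p$, and $b(t)$ are notation introduced just for this sketch, not Tarsney’s or dwebb’s:

\[
\Pr(\text{no ENE by time } t) = e^{-(r_n + r_p)\,t},
\qquad
\mathbb{E}[\text{benefit at } t] = b(t)\,e^{-(r_n + r_p)\,t},
\]

where $r_n$ and $r_p$ are the arrival rates of Negative and Positive Progress-Related ENEs, and $b(t)$ is the benefit at time $t$ conditional on the “speeding up” action’s effect persisting. On this picture the expected effect decays exponentially, and our growing uncertainty about whether some ENE has already occurred is one way forecasting noise increases with horizon.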
2) The noisiness could also increase because, the further we go into the future, the less we know about what’s happening, and so the more likely it is that speeding up progress (or making any other change) would actually have bad effects.
Analogously, I feel fairly confident that me making myself a sandwich won’t cause major negative ripple effects within 2 minutes, but not within 1000 years.
Yeah this all makes sense, thanks.
Do you feel that “ordinary” (non-attractor state) longtermist interventions are commonly put forward in the longtermist community? [...] The only intervention in Greaves and MacAskill’s paper that doesn’t rely on an attractor state is “speeding up progress”
As mentioned in another comment, my impression is that, in line with this, the most commonly proposed longtermist priority other than changing the likelihood of various attractor states is speeding up progress. Last year, I drafted a post that touched on this and related issues, and I really, really plan to finally publish it soon (it’s sat around unedited for a long time), but here’s the most relevant section in the meantime, in case it’s of interest to people:
---
Beckstead writes that our actions might, instead of or in addition to “slightly or significantly alter[ing] the world’s development trajectory”, speed up development:
In many cases, ripple effects from good ordinary actions speed up development. For example, saving some child’s life might cause his country’s economy to develop very slightly more quickly, or make certain technological or cultural innovations arrive more quickly.
Technically, I think that increases in the pace of development are trajectory changes. At the least, they would change the steepness of one part of the curve. We can illustrate this with the following graph, where actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
This seems to be the sort of picture Benjamin Todd has in mind when he writes:
One way to help the future we don’t think is a contender is speeding it up. Some people who want to help the future focus on bringing about technological progress, like developing new vaccines, and it’s true that these create long-term benefits. However, we think what most matters from a long-term perspective is where we end up, rather than how fast we get there. Discovering a new vaccine probably means we get it earlier, rather than making it happen at all.
However, I think speeding up development could also affect “where we end up”, for two reasons.
Firstly, if it makes us spread to the stars earlier and faster, this may increase the amount of resources we can ultimately use. We can illustrate this with the following graph, where again actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
Secondly, more generally, speeding up development could affect which trajectory we’re likely to take. For example, faster economic growth might decrease existential risk by reducing international tensions, or increase it by allowing us less time to prepare for and adjust to each new risky technology. Arguably, this might be best thought of as a way in which speeding up development could, as a side effect, affect other types of trajectory change.
---
(The draft post was meant to be just “A typology of strategies for influencing the future”, rather than an argument for one strategy over another, so I just tried to clarify possibilities and lay out possible arguments. If I was instead explaining my own views, I’d give more space to arguments along the lines of Benjamin Todd’s.)
Thanks for this, Michael. I’d be very interested to read this post when you publish it, especially as my career has taken a (potentially temporary) turn in the general direction of speeding up progress, rather than towards safety. I still feel that Ben Todd and co are probably right, but I want to read more.
Also, relevant part from Greaves and MacAskill’s paper:
Just how much of an improvement [speeding up progress] amounts to depends, however, on the shape of the progress curve. In a discrete-time model, the benefit of advancing progress by one time period (assuming that at the end of history, one thereby gets one additional time period spent in the “end state”) is equal to the duration of that period multiplied by the difference between the amounts of value that are contained in the first and last periods. Therefore, if value per unit time is set to plateau off at a relatively modest level, then the gains from advancing progress are correspondingly modest. Similarly, if value per unit time eventually rises to a level enormously higher than that of today, then the gains from advancing progress are correspondingly enormous.
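One way to formalise the quoted passage, as a minimal sketch (the notation $\delta$, $u_{\text{first}}$, and $u_{\text{end}}$ is introduced here, not the paper’s): if each period has duration $\delta$, value accrues at rate $u_{\text{first}}$ per unit time in the first period and at rate $u_{\text{end}}$ in the end state, and advancing progress by one period trades a period at today’s rate for one extra period in the end state, then the gain is

\[
\Delta V = \delta\,(u_{\text{end}} - u_{\text{first}}).
\]

If value per unit time plateaus at a modest level, $u_{\text{end}} \approx u_{\text{first}}$ and $\Delta V$ is small; if it eventually rises enormously above today’s level, $\Delta V$ is correspondingly enormous.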
I think this is a very good point, and it’s helping shape my ideas on this topic, thank you!
I guess it’s true that most or all of the candidate longtermist interventions I’ve seen are based on attractor states. At the same time, it’s useful to think about whether we might be missing any potential longtermist interventions by focusing on these attractor-state cases. One example that might plausibly fit into this category is an intervention that broadly improves institutional decision-making. Here, interventions plausibly have a long-run positive impact on future value, but we worry that this will be “washed out” by other factors, and it’s not clear that there’s an obvious attractor state involved. (Note that I’m not very confident in this; I could easily be persuaded otherwise. Maybe people advocate for improving institutional decision-making on the basis that it reduces the risk of many different bad attractor states.)
Thinking about this type of intervention, the results of my model can be read either optimistically or pessimistically from the longtermist’s perspective, depending on your beliefs about the nature of the parameters (a toy sketch of the contrast follows below):
Optimistic: there are potentially cases where a longtermist intervention that’s not based on an attractor state can have very large long-run benefits. If forecasting error increases sub-linearly, or just relatively slowly, then an intervention can be good from a longtermist perspective even if there’s no attractor state involved.
Pessimistic: for lots of plausible parameter values (e.g. high alpha, linearly increasing forecasting error), long-run benefits wash out. If this is true across a wide range of potential interventions, then attractor states are perhaps the only way out of this trap.
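To illustrate how sharply these two readings can diverge, here’s a purely illustrative toy in Python. It is emphatically not the model from the post: it assumes the expected per-period benefit is attenuated by a factor exp(−σ(t)), where σ(t) is the forecasting error at horizon t, and both that attenuation rule and the coefficient are assumptions made up for this sketch:

```python
import math

# Toy illustration (NOT the post's actual model): a "speeding up" action
# yields a base benefit of 1 per period, attenuated by exp(-sigma(t)),
# where sigma(t) is the forecasting error at horizon t. The exponential
# attenuation rule is an assumption chosen purely to make the contrast visible.

def cumulative_benefit(sigma, horizon=1_000_000):
    """Sum the attenuated per-period benefits out to the given horizon."""
    return sum(math.exp(-sigma(t)) for t in range(1, horizon + 1))

c = 0.01  # arbitrary error-growth coefficient, purely for illustration

linear = cumulative_benefit(lambda t: c * t)                # error grows linearly
sublinear = cumulative_benefit(lambda t: c * math.sqrt(t))  # error grows sub-linearly

print(f"linear error growth:     {linear:,.0f}")    # ~100: long-run benefits wash out
print(f"sub-linear error growth: {sublinear:,.0f}") # ~20,000: far more long-run value survives
```

The numbers themselves mean nothing; the qualitative point is that, holding everything else fixed, sub-linear error growth preserves orders of magnitude more cumulative expected benefit than linear growth.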
A relevant Beckstead post is A Proposed Adjustment to the Astronomical Waste Argument
I think we could characterise this not as “deviating from the concept of attractor states”, but as highlighting that the best actions to take for improving the long-term future won’t necessarily focus on existential catastrophes or even any form of attractor state. [Update: I now think this comment was misleading/inaccurate, for the reason dwebb points out in a reply.]
I.e., not as arguing against ultimately focusing on attractor states, but as arguing against focusing on them without even considering alternatives.
But note that this post predates the “attractor states” concept, so the above is something I’m reading into the post rather than something the post directly says.
I think this is a really good post.
Maybe you (Jack) already read this and had it in mind.
I hadn’t seen this post before, but to me it sounds like Beckstead’s arguments are very much in line with the idea of attractor states, rather than deviating from it. A path-dependent trajectory change is roughly the same as moving from one attractor state to another, if I’ve understood correctly.
The argument he is making is that extinction / existential risks are not the only form of attractor state, which I agree with.
Whoops, yeah, having just re-skimmed the post, I now think that your comment is a more accurate portrayal of Beckstead’s post than mine was. Here’s a key quote from that post:
Bostrom does have arguments that speeding up development and providing proximate benefits are not as important, in themselves, as reducing existential risk. And these arguments, I believe, have some plausibility. Since we don’t have an argument that reducing existential risk is better than trying to create other positive [path dependent] trajectory changes and an existential catastrophe is one type of [path dependent] trajectory change, it seems more reasonable for defenders of the astronomical waste argument to focus on [path dependent] trajectory changes in general.
I haven’t read that post but will definitely have a look, thanks.
I’d also be interested in dwebb’s thoughts on this.
I’ll share my own reactions in a few comment replies.
I think it’s true that Greaves and MacAskill, like other longtermists, mostly focus on attractor states (and even more specifically on existential catastrophes), and that the second leading contender is something like “speeding up progress”.
I think this is an important enough point that it would’ve been better if dwebb’s final paragraph had instead been right near the start. It seems to me like that caveat is key to understanding how the rest of the post connects to other arguments for, against, and within longtermism. (But as I noted elsewhere, I still think this post is useful!)