Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future. Rather, it is that we cannot predict the effect that medical research will have on the far future. For example, the magnitude of the effect may be so large that it would outrank traditional existential risks in priority, either because it leads to a good future or perhaps to a bad one. Or perhaps further investments in medical research will not lead to any significant gains in the things we care about. Either way, we don't have a means of predicting how our current actions will influence the far future.
With regard to value: being alive, having the ability to do what we want, and minimizing suffering might very well be things that people in the far future value, but they are also things that we value now. On the authors' account, therefore, these values can guide our moral decision-making by virtue of being things we value now and into the near future, and noting that they will also be valued in the far future is an irrelevant extra piece of information, i.e. it does no additional work in guiding our moral decision-making.
FWIW I think it’s pretty unclear that something like reducing existential risk should be prioritised just based on near-term effects (e.g. see here). So I think factoring in that future people may value being alive and that they won’t want to be disempowered can shift the balance to reducing existential risk.
If future people don't want to be alive they can in theory go extinct (this is the option value argument for reducing existential risk). The idea that future generations will want to be disempowered is pretty barmy, but again they can disempower themselves if they want to, so it seems good to at least give them the option.
Thanks for linking to that research by Laura Duffy, that’s really interesting. It would have been relevant for the authors of the current article as well.
According to their analysis, spending on conservative existential risk interventions is cost-competitive (within an order of magnitude) with spending on AMF. Further, compared to plausible, less conservative existential risk interventions, AMF is "probably" an order of magnitude less cost-effective. Under Rethink Priorities' estimates for welfare ranges, existential risk interventions are either cost-competitive with, or an order of magnitude less cost-effective than, cage-free campaigns and the hypothetical shrimp welfare intervention.
I think that actually gives some reasonable weight to the idea that existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years, and even then a case can be made that interventions focussing on existential risk should be prioritised. At the very least it adds a level of uncertainty about the relevance of the far future to moral decision-making.
existential risk can be justified without reference to the far future
This is pretty vague. If existential risk is roughly on par with other cause areas then we would be justified in giving any amount of resources to it. If existential risk is orders of magnitude more important then we should greatly prioritize it over other areas (at least on the current margin). So factoring in the far future does seem to be very consequential here.
According to the authors of the linked article, longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. Their claim is that the burden of proof here lies with the longtermist. If the far future is important for moral decision-making then this needs to be justified. A surface-level justification, that people in the far future would want to be alive, is equally well supported by reference to the near future.
You linked a quantitative attempt at answering the question of whether focus on existential risk requires priority if we consider <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?
In order to provide a justification for preventative x-risk policies with reference to their impact on the far future, we would need to compare that impact with the impact other focus areas would have on the far future. That is in part where the 'We Are Not in a Position to Predict the Best Actions for the Far Future' claim fits in, because how are we supposed to analyse the influence of any intervention (such as medical research, but including x-risk interventions) on people living millions of years into the future? It's possible that, if we did have that kind of predictive power, many other focus areas would turn out to be orders of magnitude more important than a focus on existential risks.
The analysis I linked to isn't conclusive on longtermism being the clear winner if we only consider the short term; under certain assumptions it won't be the best. Therefore, if only considering the short term, many may choose not to give to longtermist interventions. Indeed this is what we see in the EA movement, where global health still reigns supreme as the highest-priority cause area.
What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here). In short, significantly more value is at stake with reducing existential risk because now you care about enabling far future beings to live and thrive. If longtermism is the clear winner then we shouldn’t see a movement that clearly prioritises global health, we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.
As for your final point, I think I understand what you / the authors were saying now. I don't think we have no idea what the far future effects of interventions like medical research are. We can make a general argument it will be good in expectation because it will help us deal with future disease which will help us reduce future suffering. Could that be wrong—sure—but we're just talking about expected value. With longtermist interventions, the argument is the far future effects are significantly positive and large in expectation. The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
It isn't a clear winner, but neither were any of the other options, and it was cost-competitive.
What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here).
In this thread Toby Ord has said that he and most longtermists don't support 'strong determinism', although he hasn't elucidated what the mainstream view of longtermism is.
We can make a general argument it will be good in expectation because it will help us deal with future disease which will help us reduce future suffering.
With longtermist interventions, the argument is the far future effects are significantly positive and large in expectation.
If all the argument amounts to is that it will be good in expectation, well we can say that about a lot of cause areas. What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
Future wellbeing does matter, but focusing on existential risk doesn't necessarily lead to greater future wellbeing. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.
What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Yeah the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future that allows longtermism to beat other areas. The argument for "normal" longtermism, i.e. not "strong", has pretty much the same structure.
Future wellbeing does matter, but focusing on existential risk doesn't necessarily lead to greater future wellbeing. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.
Yes that’s true. Again we’re dealing with expectations and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of reducing existential risk. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.
Yeah the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future that allows longtermism to beat other areas.
Where is this comparison? I feel like I'm repeating myself here, but in order to argue that focus on existential risk is one of the best things we can do for the far future, it needs to be compared with the effect of other focus areas on the far future. But if we are not in a position to predict how cause areas will affect the far future (including a focus on x-risk), then how can we make the comparison to say that focussing on existential risk is better than any of the other causes?
Put another way, if focus on existential risk is better for the far future than medical research, we need to show that focus on medical research is worse for the far future than focus on existential risk. Since we aren't in a position to predict the impact medical research will have on the far future, we aren't in a position to make such a comparison. Otherwise the argument just collapses to: existential risk reduction is probably good for the far future, so let's focus on it.
To be clear, if every focus area, including existential risk, received equal priority, I don't see how we can justify a greater investment into existential risk by arguing that it will have the highest expected value for the far future.
However, not all focus areas receive equal priority. The analysis you linked by Duffy shows that focusing on existential risk, at least in the short term, is cost-competitive with some of the more effective focus areas like the AMF, and perhaps in virtue of this it is more neglected than other areas. Therefore, it appears to be a worthy cause area. My concern is with how reference to the far future is being used as a justification for this cause.
It is commonly assumed that a lot of interventions will fall prey to the "washing-out hypothesis", where the impact of the intervention becomes less significant as time goes on, meaning that the effects of actions in the near future matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade or "wash out." So in practice most people would assume the long-term impact of something like medical research is, in expectation, zero.
Longtermists aim to avoid "washing out". One way is to find interventions that steer between "attractor states". For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don't re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to "wash out" over time.
This is all explained better in the paper The Case for Strong Longtermism, which I would recommend you read.
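To make the structure of that argument concrete, here is a minimal sketch with entirely made-up numbers (the decay rate, the probability shift, and the per-year value are placeholders for illustration, not estimates from the paper or from anyone's model):

```python
# Toy illustration of why persistence matters; none of these numbers are estimates.

def washed_out_value(initial_benefit=100.0, decay=0.95, years=10**6):
    # An intervention whose yearly benefit decays: its total impact is capped
    # at initial_benefit / (1 - decay), no matter how long the horizon is.
    return initial_benefit / (1 - decay) * (1 - decay**years)

def attractor_shift_value(delta_p=1e-4, value_per_year=1.0, years=10**6):
    # Shift probability delta_p from the extinction attractor to the
    # non-extinction attractor: the expected benefit accrues in every year
    # of the horizon, so it grows with the length of the horizon.
    return delta_p * value_per_year * years

for years in (10**3, 10**6, 10**9):
    print(years, round(washed_out_value(years=years)), round(attractor_shift_value(years=years)))
# The decaying intervention stays at roughly 2000 however far out we look, whereas
# the probability shift keeps growing with the horizon. That is the sense in which
# steering between attractor states avoids "washing out".
```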
The washing-out hypothesis is a different concern from what we are talking about here. The idea I have been discussing is not that an intervention might become less significant as time goes on. An intervention could be extremely significant for the far future, or not significant at all. However, predicting the impact of that intervention on the far future is beyond our abilities.
From the article:
The far-future effects of one’s actions are usually harder to predict than their near-future effects. Might it be that the expected instantaneous value differences between available actions decay with time from the point of action, and decay sufficiently fast that in fact the near-future effects tend to be the most important contributor to expected value?
Or perhaps the difficulty lies in the high number of causal possibilities the further we reach into the future.
As a result, although precise estimates of the relevant numbers are difficult, the far-future benefits of some such interventions seem to compare very favourably, by total utilitarian lights, to the highest available near-future benefits
In the article they compare the impact of an intervention (malaria bed nets) on the near future with the impact of an intervention (reducing x-risk from asteroids, global pandemics, and AI) on the far future. As I said earlier, this is not an adequate comparison.
If we compare the positive impact of an intervention on quadrillions of people to a positive impact of an intervention on only billions of people, should we be surprised that the intervention that considers the impact on more people has a greater effect? Put another way, should we be surprised the bed net intervention has a smaller impact when we reduce the time horizon of its impact to the near future?
To this you might say: well, interventions focused on malaria might have this 'washing out' effect. But so might interventions for reducing existential risk. For example, the intervention discussed in the paper to reduce extinction-level pandemics is to spend money on strengthening the healthcare system, something that could easily be subject to the 'washing out' effect.
Nevertheless, the bed net intervention is only one intervention, and there are other interventions that could have more plausible effects on the far future which would be more adequate comparisons (if such comparisons were feasible in the first place), for example, medical research.
One way is to find interventions that steer between “attractor states”.
If extinction and non-extinction are "attractor states" (from what I gather, states that are expected to last an extremely long time), what exactly isn't an attractor state?
Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future.
Let me translate that sentence: focusing on existential risk is more beneficial for the far future than other cause areas because it increases the probability of humans being alive for an extremely long time. If it's more beneficial, we need the relevant comparison, and, as per above, the relevant comparison is lacking.
If extinction and non-extinction are "attractor states" (from what I gather, states that are expected to last an extremely long time), what exactly isn't an attractor state?
Any state that isn’t very persistent. For example, an Israel-Gaza ceasefire. We could achieve it, but from history we know it’s unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.
The extinction vs non-extinction example is the classic attractor state example, but not the only one. Another one people talk about is stable totalitarianism. Imagine China or the US can win the race to superintelligence. Which country wins the race essentially controls the world for a very long time given how powerful superintelligence would be. So we have two different attractor states—one where China wins and has long-term control and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one—the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins we would experience the better state for a very long time, which seems very high value.
There are ways to counter this. You can argue the states aren't actually that persistent, e.g. you don't think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn't clearly better than the other. Or you can argue that there's not much we can do to achieve one state over the other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it's important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long term, as opposed to long-term outcomes (I explain this better here).
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state which makes it vulnerable to washing out. Medical research doesn’t either.
It appears you are saying:
a) We should take actions such that we will enter into a state in the near future that will endure and last a very long time.
b) These near future states that will endure for a long time will be the best states for the beings in the far future.
If so, by design actions now need to lead, relatively quickly, into such an 'attractor state', otherwise these actions are subject to the same 'washing out' criticism that they were designed to avoid. That means that the state that is hypothesised to last for an extremely long time is a state that is close to the present state. But then we are left with the somewhat surprising claim that a state we can establish in the near future, and which lasts an extremely long time, is one of the best things we can do for the far future. On the face of it I find the idea very implausible.
You can argue the states aren't actually that persistent, e.g. you don't think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn't clearly better than the other. Or you can argue that there's not much we can do to achieve one state over the other.
From the perspective of the linked article, unfortunately these points suffer from the same limitation: our inability to make predictions about the effect of our actions on the far future. We lack the ability to predict which states would persist into the far future and which wouldn't, and we lack the ability to predict which persistent states would be better than others for the far future.
For example, how exactly does the US winning the race to superintelligence lead to one of the best possible futures for quadrillions of people in the far future? How long is this state expected to last? What will happen once we are no longer in this state?
There is a related issue of ambiguity about what is included in "the state of things that lasts an extremely long time". To adapt the terminology from The Case for Strong Longtermism paper: the space of all possible microstates that the world could be in at a single moment of time is S. Call the state that we are in at any given moment, a subset of S, X. Call the subset of X that persists for an extremely long period of time A.
The world is continually moving through a succession of states X, and the claim of the 'attractor' argument is that we should move towards versions of A that will positively influence the far future. However, there are obviously other subsets of X that continue to change, i.e. there are microstates of the world that continue to change; otherwise the world would be frozen in time. The issue of ambiguity is: what set of microstates makes up A?
One version of A you claim is the US winning the race to superintelligence. When we are in the state of the US having won the race to superintelligence, what exactly is in A? What exactly is claimed to be persisting for a very long time? The US having dominance over the world? Liberty? Democracy? Wellbeing? And whatever it is, how is that influencing the quadrillions of lives in the far future, given that there is still a large subset of X which is changing.
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state which makes it vulnerable to washing out. Medical research doesn’t either.
My comments in the previous post were not that bed nets and medical research are in fact better focus areas for influencing the far future, but rather that, in order to say they are not better focus areas for influencing the far future, you need to show this. In order to show this you need to be able to predict how focus on these areas impacts the far future. We aren't in a position to predict how focus on these areas affects the far future. Therefore we aren't in a position to say that medical research/bed nets/'insert other' are better or worse focus areas for influencing the far future.
I don’t think I have explained this well enough. I’d be happy to have a call sometime if you want as that might be more efficient than this back and forth. But I’ll reply for now.
b) These near future states that will endure for a long time will be the best states for the beings in the far future.
No. This is not what I’m saying.
The key thing is that there are two attractor states that differ in value, and you can affect whether you end up in one or the other. The better one does not have to be the best possible state of the world; it just has to be better than the other attractor state.
So if you achieve the better one you persist at that higher expected value for a very long time compared to the counterfactual of persisting at the lower value for a very long time. So even if the difference in value (at any given time) is kind of small, the fact that this difference persists for a very long time is what gives you the very large counterfactual impact.
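As a rough numerical illustration of that last point (the per-century values and the persistence time below are arbitrary placeholders, not estimates of anything):

```python
# Arbitrary numbers chosen only to illustrate the shape of the argument.
value_better_state = 100    # value per century in the better attractor state (arbitrary units)
value_worse_state = 99      # value per century in the worse attractor state
centuries_persisting = 10**7

# Counterfactual impact of ending up in the better state rather than the worse one:
impact = (value_better_state - value_worse_state) * centuries_persisting
print(impact)  # 10000000: a 1% per-period difference becomes very large
               # once it persists for a very long time
```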
That means that the state that is hypothesised to last for an extremely long time is a state that is close to the present state.
Not necessarily. To use the superintelligence example, the world will look radically different under either the US or China having superintelligence than it does now.
For example, how exactly does the US winning the race to superintelligence lead to one of the best possible futures for quadrillions of people in the far future? How long is this state expected to last?
As I said earlier it doesn’t necessarily lead to one of the best futures, but to cover the persistence point—this is a potentially fair push back. Some people doubt the persistence of longtermist interventions/attractor states, which would then dampen the value of longtermist interventions. We can still debate the persistence of different states of the world though and many think that a government controlling superintelligence would become very powerful and so be able to persist for a long time (exactly how long I don’t know but “long time” is all we really need for it to become an important question).
What exactly is claimed to be persisting for a very long time? The US having dominance over the world? Liberty? Democracy? Wellbeing? And whatever it is, how is that influencing the quadrillions of lives in the far future, given that there is still a large subset of X which is changing.
Yeah I guess in this case I’m talking about the US having dominance over the world as opposed to China having dominance over the world. Remember I’m just saying one attractor state is better than the other in expectation, not that one of them is so great. I think it’s fair to say I’d rather the US control the world than China control the world given the different values the two countries hold. Leopold Aschenbrenner talks more about this here. Of course I can’t predict the future precisely, but we can talk about expectations.
I’d be happy to have a call sometime if you want as that might be more efficient than this back and forth.
For now I'm finding the gaps in between useful for reflecting, thanks though. Perhaps in the future!
Not necessarily. To use the superintelligence example, the world will look radically different under either the US or China having superintelligence than it does now.
The world will be radically different, yet you feel confident in predicting that some element of this radically different world will remain constant for a very long time, and that, this being so, moving towards this state is one of the best options for the far future.
I’m just saying one attractor state is better than the other in expectation, not that one of them is so great.
I think you may be departing from strong longtermism. The first proposition for ASL is “Every option that is near-best overall is near-best for the far future.” We are talking about making decisions whose outcome is one of the best things we can do for the far future. It’s not merely something that is better than something deemed terrible.
Yeah I guess in this case I’m talking about the US having dominance over the world as opposed to China having dominance over the world.
Perhaps I didn't explain the point about ambiguity well enough. Of all possible states, S, there is some possible state X that is 'near-best', 'best-possible', 'close to best', what have you, for the far future. Call the 'near-best' state for the far future n-bX. There are microstates of n-bX that make it this 'near-best' state. Presumably you need to have some idea of what these microstates are in order to make predictions regarding what we can do today that will lead towards them.
Therefore, there must be something about the state of the US having dominance over the world, as opposed to China, that will presumably lead to the instantiation of some of these microstates of n-bX. Presumably, these beneficial microstates of n-bX don't involve a country called "the US" and a country called "China", and arguably lack the property of "dominance".
So there must be some other thing, state, or property, call it n-bP, whose long-term instantiation in the near-present world is linked to n-bX. So the questions are: what is n-bP? How is n-bP hypothesised to be linked to "US dominance.."? How is it hypothesised to be instantiated for a very long time? And how is it hypothesised to be linked to n-bX? It's ambiguous on all these questions.
I think it’s fair to say I’d rather the US control the world than China control the world given the different values the two countries hold.
We are not talking about what you would rather, we're talking about what the far future would rather. I get the sense that what you are really defending are ways to incrementally improve the world that are currently under-appreciated. I don't have an issue with that. What I am unconvinced by is how reference to the lives of beings quadrillions of years into the future can meaningfully guide our decisions.
I think you may be departing from strong longtermism. The first proposition for ASL is “Every option that is near-best overall is near-best for the far future.” We are talking about making decisions whose outcome is one of the best things we can do for the far future. It’s not merely something that is better than something deemed terrible.
I think you have misunderstood this. An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome. For example, if we are at risk of entering a hellscape that will last for eternity and you can press a button to simply stop that from happening, that seems to me like it would be the single best thing anyone can do (overall or for the far future). The end result however would just be a continuation of the status quo. This is the concept of counterfactual impact—we compare the world after our intervention to the world that would have happened in the absence of the intervention and the difference in value is essentially how good the intervention was. Indeed a lot of longtermists simply want to avert s-risks (risks of astronomical suffering).
I don’t understand some of what you’re saying including on ambiguity. I don’t find it problematic to say that the US winning the race to superintelligence is better in expectation than China winning. China has authoritarian values, so if they control the world using superintelligence they are more likely to control it according to authoritarian values, which means less freedom, but freedom is important for wellbeing etc. etc. I think we can say, if we assume persistence, that future people would more likely be thankful the US won the race to superintelligence than China did. I am extrapolating that future people will also like freedom. Could I be wrong, sure, but we are doing things based on expectation.
I would say that your doubts about persistence are the best counter to longtermism. The claim that superintelligence may allow a state to control the world for a very long time is perhaps a more controversial one, but not one I am willing to discount. If you want to engage with object-level arguments on this point check out this document: Artificial General Intelligence and Lock-In.
We are talking about making decisions whose outcome is one of the best things we can do for the far future.
An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome.
This is probably a semantic disagreement, but averting a terrible outcome could be viewed as one of the best things we can do for the far future. The part I was disagreeing with was when you said "I'm just saying one attractor state is better than the other in expectation, not that one of them is so great." This gives the impression that longtermism is satisfied with prioritising one option in comparison to another, regardless of the context of other options which, if considered, would produce outcomes that are "near-best overall". And as such it's a somewhat strange claim that one of the best things you could do for the far future is in actuality "not so great".
I don’t understand some of what you’re saying including on ambiguity.
My point could ultimately be summarised by asking: how do you know that freedom (or any other value) will even make sense in the far future, let alone be valued? You don't. You're just assuming it makes sense and will be valued, because it makes sense and is valued now. While that may be sufficient for an argument in reference to the near future, I think it's a very weak argument for defending its relevance to the far future.
At its heart, the "inability to predict" arguments really hold strongly onto the sense that the far future is likely to be radically different, and therefore you are making a claim to having knowledge of what is 'good' in this radically different future.
Could I be wrong, sure, but we are doing things based on expectation.
I feel like "expectation" is doing far too much work in these arguments. It's not convincing to just claim something is likely or expected; that just raises the question of why it is likely or expected.
Nevertheless I think the focus on non-existential risk examples like the US having dominance over China is a red herring for defending longtermism. I think the strongest claims are those for taking action on preventing existential risk. But there the actions are still subject to the same criticisms regarding the inability to predict how they will actually positively influence the far future.
For example, take reducing existential risk by developing some sort of asteroid defense system. While in the short term developing an asteroid defense system might seem to adequately contribute to the goal of reducing existential risk, it's unclear how asteroid defense systems or other mitigation policies might interact with other technologies or societal developments in the far future. For example, advanced asteroid deflection technologies could have dual-use potential (like space weaponization) that could create new risks or unforeseen consequences. Thus, while reducing risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.
There is also an accounting issue that distorts estimates of the impact of particular actions on the far future. Calculating the expected value of minimising the existential risk associated with an asteroid impact, for example, doesn't take into account changes in expected value over time. For a simple example, as soon as humans start living comfortably, in addition to but beyond Earth (for example on Mars), the existential risk from an asteroid impact declines dramatically, and further declines are made as we extend out further through the solar system and beyond. Yet the expected value is calculated on the time horizon whereby the value of this action, reducing risk from asteroid impact, will endure for the rest of time, when in reality, the value of this action, as originally calculated, will only endure for probably less than 50 years.
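A rough sketch of the accounting point, with placeholder numbers (the size of the annual risk reduction, the value of a year, and the 50-year cut-off are all assumptions for illustration, not estimates from any source):

```python
# Placeholder numbers purely to illustrate how sensitive the calculation is
# to how long the risk reduction is assumed to stay relevant.
annual_risk_reduction = 1e-8   # absolute extinction risk removed per year by the intervention
value_of_a_year = 1.0          # value of one further year of civilisation (arbitrary units)

def ev_if_relevant_forever(horizon_years=10**9):
    # The accounting being criticised: the risk reduction is credited for
    # every year until the end of a very long horizon.
    return annual_risk_reduction * value_of_a_year * horizon_years

def ev_if_relevant_until_multiplanetary(years=50):
    # The alternative accounting: once humanity is spread beyond Earth,
    # asteroid risk (and hence the credit for reducing it) largely disappears.
    return annual_risk_reduction * value_of_a_year * years

print(ev_if_relevant_forever())               # roughly 10
print(ev_if_relevant_until_multiplanetary())  # roughly 5e-07, many orders of magnitude smaller
```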
This gives the impression that longtermism is satisfied with prioritising one option in comparison to another, regardless of the context of other options which, if considered, would produce outcomes that are "near-best overall". And as such it's a somewhat strange claim that one of the best things you could do for the far future is in actuality "not so great".
Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world where there were the maximum number of beings with maximum wellbeing, of course I would do that, but we probably can't.
Ultimately the great value from a longtermist intervention does come from comparing it to the state of the world that would have happened otherwise. If we can lock in value 5 instead of locking in value 3, that is better than if we can lock in value 9 instead of locking in value 8.
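Or, with those toy values spelled out (the persistence time is an arbitrary placeholder):

```python
# The values 5, 3, 9 and 8 are the toy numbers from the comparison above;
# the persistence time is an arbitrary placeholder.
persistence = 10**6

gain_from_5_over_3 = (5 - 3) * persistence   # lock in value 5 instead of value 3
gain_from_9_over_8 = (9 - 8) * persistence   # lock in value 9 instead of value 8

print(gain_from_5_over_3 > gain_from_9_over_8)  # True: what matters is the
# counterfactual difference you lock in, not the absolute level you reach
```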
At its heart, the "inability to predict" arguments really hold strongly onto the sense that the far future is likely to be radically different, and therefore you are making a claim to having knowledge of what is 'good' in this radically different future.
I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don't have a problem with a claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don't. If so, expanding our moral circle seems important in expectation. If you're asking "why"—it's because people who care about the wellbeing of sentient creatures are more likely to treat them well and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad and this is independent of what future people think. I don't have a problem with reasoning like this, but if you do then I just think our intuitions diverge too much here.
Thus, while reducing risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.
Maybe fair, but if that's the case I think we need to find those interventions that are not very ambiguous. Moral circle expansion seems one of those that is very hard to argue against. (I know I'm changing my interventions—that doesn't mean I no longer think the previous ones are good, I'm just trying to see how far your scepticism goes.)
For a simple example, as soon as humans start living comfortably, in addition to but beyond Earth (for example on Mars), the existential risk from an asteroid impact declines dramatically, and further declines are made as we extend out further through the solar system and beyond. Yet the expected value is calculated on the time horizon whereby the value of this action, reducing risk from asteroid impact, will endure for the rest of time, when in reality, the value of this action, as originally calculated, will only endure for probably less than 50 years.
Considering this particular example—if we spread out to the stars then x-risk from asteroids drops considerably, as no one asteroid can kill us all—that is true. But the value of the asteroid reduction intervention comes from actually getting us to that point in the first place. If we hadn't reduced risk from asteroids and had gone extinct, then we'd have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention whose impacts endure for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.
Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future. Rather it is that we cannot predict the effect that medical research will have on the far future. For example the magnitude of the effect may be so incredibly large that it might out prioritize traditional existential risks, either because it leads to a good future, or perhaps to a bad future. Or perhaps further investments in medical research will not lead to any significant gains in the things we care about. Either way we don’t have a means of predicting how our current actions will influence the far future.
With regards to value, being alive, having the ability to do what we want, and minimizing suffering, might very well be things that people in the far future value, but they are also things that we currently value now. On the authors account therefore, these values can guide our moral decision making by virtue of being things we value now and into the near future and referencing that they will also be valued by the far future is an irrelevant extra piece of information, i.e it does no additional work in guiding our moral decision making.
FWIW I think it’s pretty unclear that something like reducing existential risk should be prioritised just based on near-term effects (e.g. see here). So I think factoring in that future people may value being alive and that they won’t want to be disempowered can shift the balance to reducing existential risk.
If future people don’t want to be alive they can in theory go extinct (this is the option value argument for reducing existential risk). The idea that future generations will want to be disempowered is pretty barmy, but again they can disempower themselves if they want to so it seems good to at least give them the option.
Thanks for linking to that research by Laura Duffy, that’s really interesting. It would have been relevant for the authors of the current article as well.
According to their analysis, spending on conservative existential risk interventions are cost competitive (within an order of magnitude) to spending on AMF. Further, compared to plausible less conservative existential risk interventions, AMF is “probably” an order of magnitude less cost-effective. Under Rethink Priorities’ estimates for welfare ranges, for cage-free campaigns and the hypothetical shrimp welfare intervention, existential risk interventions are either cost competitive, or an order of magnitude less cost-effective.
I think that actually gives some reasonable weight to the idea that existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years and even then a case can be made that interventions focussing on existential risk should be prioritised. At the very least it adds level of uncertainty about the relevance of the far future in moral-decision making.
This is pretty vague. If existential risk is roughly on par with other cause areas then we would be justified in giving any amount of resources to it. If existential risk is orders of magnitude more important then we should greatly prioritize it over other areas (at least on the current margin). So factoring in the far future does seem to be very consequential here.
According to the authors of the linked article, longtermists have not convincingly shown that taking the far future in account impacts decision-making in practice. Their claim is that the burden of proof here lies for the longtermist. If the far future is important for moral decision-making then this claim needs to be justified. A surface level justification that people in the far future would want to be alive, is equally justified by reference to the near future.
You linked a quantitative attempt at answering the question of whether focus on existential risk requires priority if we consider <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?
In order to provide a justification for preventative x-risk policies with reference to their impact on the far future we would need to compare it with the impact of other focus areas and how they would influence the far future. That is in part where the ‘We Are Not in a Position to Predict the Best Actions for the Far Future’ claim fits in because how are we supposed to do an analysis of the influence of any intervention (such as medical research, but including x-risk interventions) on people living millions of years into the future. It’s possible that if we did have that kind of predictive power, many other focus areas may turn out to be orders of magnitude more important than focus on existential risks.
The analysis I linked to isn’t conclusive on longtermism being the clear winner if only considering the short-term. Under certain assumptions it won’t be the best. Therefore if only considering the short-term, many may choose not to give to longtermist interventions. Indeed this is what we see in the EA movement where global health still reigns supreme as the highest priority cause area.
What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here). In short, significantly more value is at stake with reducing existential risk because now you care about enabling far future beings to live and thrive. If longtermism is the clear winner then we shouldn’t see a movement that clearly prioritises global health, we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.
As for your final point, I think I understand what you / the authors were saying now. I don’t think we have no idea what the far future effects of interventions like medical research are. We can make a general argument it will be good in expectation because it will help us deal with future disease which will help us reduce future suffering. Could that be wrong—sure—but we’re just talking about expectational value. With longtermist interventions, the argument is the far future effects are significantly positive and large in expectation. The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
It isn’t a clear winner but neither were any of the other options and it was cost competitive.
In this thread Toby Ord has said that he and most longtermists don’t support ‘strong determinism’. Although he hasn’t elucidated what the mainstream view of longtermism is.
If all the argument amounts to is that it will be good in expectation, well we can say that about a lot of cause areas. What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Future well being does matter but focusing on existential risk doesn’t lead to greater future well-being necessarily. It leads to humans being alive. If the future is filled with human suffering, then focus on existential risk could be one of the worst focus areas.
Yeah the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future that allows longtermism to beat other areas. The argument for “normal” longtermism i.e. not “strong” is pretty much the same structure.
Yes that’s true. Again we’re dealing with expectations and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of reducing existential risk. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.
Where is this comparison? I feel like i’m repeating myself here but in order to argue that focus on existential risk is one of the best things we can do for the far future, it needs to be compared with the effect of other focus areas on the far future. But if we are not in a position to predict how cause areas will effect the far future (including focus on x-risk), then how can we make the comparison to say that focussing on existential risk is better than any of the other causes.
Put another way, if focus on existential risk is better for the far future than medical research, we need to show that focus on medical research is worse for the far future than focus on existential risk. Since we aren’t in a position to predict the impact medical research will have on the far future, we aren’t in a position to make such a comparison. Otherwise the argument just collapses to, existential risk is probably good for the far future, so let’s focus on it.
To be clear, if every focus area received equal priority including existential risk, I don’t see how we can justify a greater investment into existential risk by arguing that it will have the highest expected value for the far future.
However all focus areas don’t receive equal priority. The analysis you linked by Duffy shows that focusing on existential risk, at least in the short term, is cost-competitive with some of the more effective focus like the AMF and perhaps in virtue of this it is more neglected than other areas. Therefore, it appears to be a worthy cause area. My concern is with how reference to the far future is being used as a justification for this cause.
It is commonly assumed a lot of interventions will likely fall prey to the “washing-out hypothesis” where the impact of the intervention becomes less significant as time goes on, meaning that the effects of actions in the near future matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade or “wash out.” So in practice most people would assume the long-term impact of something like medical research is, in expectation, zero.
Longtermists aim to avoid “washing out”. One way is to find interventions that steer between “attractor states”. For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time.
This is all explained better in the paper The Case for Strong Longtermism which I would recommend you read.
The washing out hypothesis is a different concern to what we are talking about here. The idea I have been discussing here is not that an intervention might become less significant as time goes on. An intervention could be extremely significant for the far future, or not significant at all. However, our ability to predict the impact of that intervention on the far future is outside our purview.
From the article:
Or perhaps the difficulty lies in the high number of causal possibilities the further we reach into the future.
In the article they compare the impact of an intervention (malaria bed nets) on the near future with the impact of an intervention (reducing x-risk from asteroids, global pandemics, AI risk) on the far future. As I said earlier, not an adequate comparison.
If we compare the positive impact of an intervention on quadrillions of people to a positive impact of an intervention on only billions of people, should we be surprised that the intervention that considers the impact on more people has a greater effect? Put another way, should we be surprised the bed net intervention has a smaller impact when we reduce the time horizon of its impact to the near future?
To this you might say, well interventions focused on malaria might have this ‘washing out’ effect. But so might interventions for reducing existential risk. For example, the intervention discussed in the paper to reduce extinction-level pandemics is to spend money on strengthening the healthcare system. Something that could easily be subject to the ‘washing out’ effect.
Nevertheless, the bed net intervention is only one intervention, and there are other interventions that could have more plausible effects on the far future which would be more adequate comparisons (if such comparisons were feasible in the first place), for example, medical research.
If extinction and non-extinction are “attractor states”, from what I gather, a state that is expected to last an extremely long time, what exactly isn’t an attractor state?
Let me translate that sentence: Focusing on existential risk is more beneficial for the far future than other cause areas because it increases the probability of humans being alive for an extremely long time. If it’s more beneficial, we need the relevant comparison, as per above, the relevant comparison is lacking.
Any state that isn’t very persistent. For example, an Israel-Gaza ceasefire. We could achieve it, but from history we know it’s unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.
The extinction vs non-extinction example is the classic attractor state example, but not the only one. Another one people talk about is stable totalitarianism. Imagine China or the US can win the race to superintelligence. Which country wins the race essentially controls the world for a very long time given how powerful superintelligence would be. So we have two different attractor states—one where China wins and has long-term control and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one—the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins we would experience the better state for a very long time, which seems very high value.
There are ways to counter this. You can argue the states aren’t actually that persistent e.g. you don’t think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn’t clearly better than the other. Or you can argue that there’s not much we can do to achieve one state over other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it’s important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long-term, as opposed to long-term outcomes (I explain this better here).
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state which makes it vulnerable to washing out. Medical research doesn’t either.
It appears you are saying:
a) We should take actions such that we will enter into a state in the near future that will endure and last a very long time.
b) These near future states that will endure for a long time will be the best states for the beings in the far future.
If so, by design actions now need to lead, relatively quickly, into such an ‘attractor state’ otherwise these actions are subject to the same ‘washing out’ criticism that they were designed to avoid. That means that the state that is hypothesised to last for an extremely long time, is a state that is close to the present state. But then we are left with the somewhat surprising claim that a state that we can establish in the near future which lasts an extremely long time is one of the best things we can do for the far future. On the face of it I find the idea very implausible.
From the perspective of the linked article unfortunately these points suffer from the same limitation of our inability to make predictions about the effect of our actions on the far future. Therefore, we lack the ability to predict which states would persist into the far future and which wouldn’t. And we lack the ability to predict which persistent states would be better than others for the far future.
For example, how exactly does the US winning the race to superintelligence lead to one of the best possible futures for quadrillions of people in the far future? How long is this state expected to last? What will happen once we are no longer in this state?
There is a related issue of ambiguity about what is included in “the state of things that last an extremely long time”. To adapt the terminology from The Case for Strong Longtermism paper, the space of all possible microstates that the world could be in at a single moment of time is S. Call the state that we are in at any given moment in time, subset of S, X. Call the subset of X that persists for an extremely long period of time A.
The world is continually moving through different states of X and the claim from the ‘attractor’ argument is that we should move towards versions of A that will positively influence the far future. However, obviously there are other subsets of X that continue to change, i.e. there are microstates of the world that continue to change, otherwise the world is frozen in time. The issue of ambiguity is, what set of microstates makes up A?
One version of A you claim is the US winning the race to superintelligence. When we are in the state of the US having won the race to superintelligence, what exactly is in A? What exactly is claimed to be persisting for a very long time? The US having dominance over the world? Liberty? Democracy? Wellbeing? And whatever it is, how is that influencing the quadrillions of lives in the far future, given that there is still a large subset of X which is changing.
My comments in the previous post were not that bed nets and medical research are in fact better focus areas for influencing the far future, but rather in order to say they are not better focus areas for influencing the far future you need to show this. In order to show this you need to be able to predict how focus on these areas impacts the far future. We aren’t in a position to predict how focus on these areas effects the far future. Therefore we aren’t in a position to say that medical research/bed nets/‘insert other’ are better or worse focus areas for influencing the far future.
I don’t think I have explained this well enough. I’d be happy to have a call sometime if you want as that might be more efficient than this back and forth. But I’ll reply for now.
No. This is not what I’m saying.
The key thing is that there are two attractor states that differ in value and you can affect if you end up in one or the other. The better one does not have to be the best possible state of the world, it just has to be better than the other attractor state.
So if you achieve the better one you persist at that higher expected value for a very long time compared to the counterfactual of persisting at the lower value for a very long time. So even if the difference in value (at any given time) is kind of small, the fact that this difference persists for a very long time is what gives you the very large counterfactual impact.
Not necessarily. To use the superintelligence example: under either the US or China having superintelligence, the world will look radically different from how it does now.
As I said earlier, it doesn’t necessarily lead to one of the best futures, but to cover the persistence point: this is a potentially fair pushback. Some people doubt the persistence of longtermist interventions and attractor states, which would then dampen the value of longtermist interventions. We can still debate the persistence of different states of the world though, and many think that a government controlling superintelligence would become very powerful and so be able to persist for a long time (exactly how long I don’t know, but “a long time” is all we really need for it to become an important question).
Yeah I guess in this case I’m talking about the US having dominance over the world as opposed to China having dominance over the world. Remember I’m just saying one attractor state is better than the other in expectation, not that one of them is so great. I think it’s fair to say I’d rather the US control the world than China control the world given the different values the two countries hold. Leopold Aschenbrenner talks more about this here. Of course I can’t predict the future precisely, but we can talk about expectations.
For now I’m finding the gaps in between useful for reflecting, thanks though. Perhaps in the future!
The world will be radically different, yet you feel confident in predicting that some element of this radically different world will remain constant for a very long time, and that, this being so, moving towards this state is one of the best options for the far future.
I think you may be departing from strong longtermism. The first proposition for ASL is “Every option that is near-best overall is near-best for the far future.” We are talking about making decisions whose outcome is one of the best things we can do for the far future. It’s not merely something that is better than something deemed terrible.
Perhaps I didn’t explain the point about ambiguity well enough. Of all possible states S, there is some possible state that is ‘near-best’ (or ‘best possible’, ‘close to best’, what have you) for the far future; call this ‘near-best’ state n-bX. There are microstates of n-bX that make it this ‘near-best’ state. Presumably you need to have some idea of what these microstates are in order to make predictions about what we can do today that will lead towards them.
Therefore, there must be something about the state of the US having dominance over the world, as opposed to China, that presumably leads to the instantiation of some of these microstates of n-bX. Presumably these beneficial microstates of n-bX don’t involve a country called “the US” and a country called “China”, and arguably lack the property of “dominance”.
So there must be some other thing, state, or property, call it n-bP, whose long-term instantiation in the near-present world is linked to n-bX. The questions then are: what is n-bP? How is n-bP hypothesised to be linked to “US dominance over the world”? How is it hypothesised to be instantiated for a very long time? And how is it hypothesised to be linked to n-bX? The argument is ambiguous on all of these.
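Put schematically (again my own gloss, only to make the required chain of claims explicit):

$$\text{action today} \;\Rightarrow\; \text{US dominance} \;\Rightarrow\; n\text{-}bP \text{ instantiated and persisting} \;\Rightarrow\; n\text{-}bX \text{ in the far future},$$

and the complaint is that neither n-bP nor any of the arrows in this chain has been specified.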
We are not talking about what you would rather; we’re talking about what the far future would rather. I get the sense that what you are really defending are ways to incrementally improve the world that are currently under-appreciated. I don’t have an issue with that. What I am unconvinced by is how reference to the lives of beings quadrillions of years into the future can meaningfully guide our decisions.
I think you have misunderstood this. An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome. For example, if we are at risk of entering a hellscape that will last for eternity and you can press a button to simply stop that from happening, that seems to me like it would be the single best thing anyone can do (overall or for the far future). The end result, however, would just be a continuation of the status quo. This is the concept of counterfactual impact: we compare the world after our intervention to the world that would have happened in its absence, and the difference in value is essentially how good the intervention was. Indeed, a lot of longtermists simply want to avert s-risks (risks of astronomical suffering).
I don’t understand some of what you’re saying, including on ambiguity. I don’t find it problematic to say that the US winning the race to superintelligence is better in expectation than China winning. China has authoritarian values, so if it controls the world using superintelligence it is more likely to control it according to authoritarian values, which means less freedom, and freedom is important for wellbeing, and so on. I think we can say, if we assume persistence, that future people would more likely be thankful the US won the race to superintelligence than that China did. I am extrapolating that future people will also value freedom. Could I be wrong? Sure, but we are acting based on expectation.
I would say that your doubts about persistence are the best counter to longtermism. The claim that superintelligence may allow a state to control the world for a very long time is perhaps a more controversial one, but not one I am willing to discount. If you want to engage with object-level arguments on this point, check out this document: Artificial General Intelligence and Lock-In.
This is probably a semantic disagreement, but averting a terrible outcome could be viewed as one of the best things we can do for the far future. The part I was disagreeing with was when you said “I’m just saying one attractor state is better than the other in expectation, not that one of them is so great.” This gives the impression that longtermism is satisfied with prioritising one option over another, regardless of other options which, if considered, would produce outcomes that are “near-best overall”. As such, it’s a somewhat strange claim that one of the best things you could do for the far future is in actuality “not so great”.
My point could ultimately be summarised by asking: how do you know that freedom (or any other value) will even make sense in the far future, let alone be valued? You don’t. You’re just assuming it makes sense and will be valued because it makes sense and is valued now. While that may be sufficient for an argument about the near future, I think it’s a very weak argument for its relevance to the far future.
At their heart, the “inability to predict” arguments hold strongly to the sense that the far future is likely to be radically different, and that you are therefore claiming knowledge of what is ‘good’ in this radically different future.
I feel like “expectation” is doing far too much work in these arguments. It’s not convincing to simply claim that something is likely or expected; that just raises the question of why it is likely or expected.
Nevertheless, I think the focus on non-existential-risk examples like the US having dominance over China is a red herring for defending longtermism. I think the strongest claims are those for taking action to prevent existential risk. But even there, the actions are still subject to the same criticisms regarding our inability to predict how they will actually positively influence the far future.
For example, take reducing existential risk by developing some sort of asteroid defense system. While in the short term an asteroid defense system might seem to contribute straightforwardly to the goal of reducing existential risk, it’s unclear how such systems or other mitigation policies might interact with other technologies or societal developments in the far future. Advanced asteroid deflection technologies could, for instance, have dual-use potential (like space weaponization) that creates new risks or unforeseen consequences. Thus, while reducing the risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.
There is also an accounting issue that distorts estimates of the impact of particular actions on the far future. Calculating the expected value of reducing the existential risk from an asteroid impact, for example, doesn’t take into account changes in expected value over time. As a simple example, as soon as humans start living comfortably beyond Earth as well as on it (for example on Mars), the existential risk from an asteroid impact declines dramatically, and it declines further as we extend out through the solar system and beyond. Yet the expected value is calculated on a time horizon on which the value of this action, reducing risk from asteroid impact, endures for the rest of time, when in reality the value of the action, as originally calculated, will probably only endure for less than 50 years.
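A rough illustration of the accounting point (all numbers invented for the example): compare the benefit credited to an asteroid-risk reduction when it is assumed to matter for the rest of time versus only until the risk largely disappears for other reasons, such as becoming multi-planetary.

```python
# Toy comparison of two accounting conventions for the same risk-reduction intervention.
# Numbers are invented purely to illustrate the point.

annual_benefit = 1.0           # value per year of the risk reduction while it still matters
horizon_forever = 10**6        # stand-in for "the rest of time" (years)
horizon_until_obsolete = 50    # years until, say, multi-planetary settlement makes the risk negligible

ev_forever = annual_benefit * horizon_forever            # benefit credited if the reduction matters forever
ev_truncated = annual_benefit * horizon_until_obsolete   # benefit if it only matters for ~50 years

print(ev_forever)    # 1000000.0
print(ev_truncated)  # 50.0
```

The gap between the two figures is the distortion I’m pointing at: the first accounting convention credits the intervention with value long after the risk it addresses has become negligible for other reasons.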
Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world with the maximum number of beings at maximum wellbeing, of course I would do that, but we probably can’t.
Ultimately the great value of a longtermist intervention does come from comparing it to the state of the world that would have happened otherwise. If we can lock in value 5 instead of value 3, that is better than locking in value 9 instead of value 8.
I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don’t have a problem with the claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don’t. If so, expanding our moral circle seems important in expectation. If you’re asking “why”: it’s because people who care about the wellbeing of sentient creatures are more likely to treat them well, and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad, and that this is independent of what future people think. I don’t have a problem with reasoning like this, but if you do then I just think our intuitions diverge too much here.
Maybe fair, but if that’s the case I think we need to find the interventions that are not very ambiguous. Moral circle expansion seems one of those that is very hard to argue against. (I know I’m changing my interventions; it doesn’t mean I don’t think the previous ones I mentioned are still good, I’m just trying to see how far your scepticism goes.)
Considering this particular example: it is true that if we spread out to the stars, x-risk from asteroids drops considerably, as no one asteroid can kill us all. But the value of the asteroid-risk-reduction intervention comes from actually getting us to that point in the first place. If we hadn’t reduced risk from asteroids and had gone extinct, we’d have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention whose impact endures for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.