I of course agree that we should take into account the size of the future. I somewhat disagree with this:
Do you have examples in mind? I can think of a couple related to anthropics, but their decision-relevance is unclear.
No matter what the universe is like, or whether we’re in a simulation, or whatever, averting x-risk seems roughly equivalent to increasing future option value, which seems roughly equivalent to being able to make the most of the universe, whatever it’s like.
I tried to describe some possible examples in the post. Maybe strong longtermists should have less trust in scientific consensus, since they should act as if the scientific consensus is wrong on some fundamental issues (e.g. on the second law of thermodynamics, or the prohibition on faster-than-light travel). Although I think you could make a good argument that this goes too far.
I think the example about humanity’s ability to coordinate might be more decision-relevant. If you need to act as if humanity will be able to overcome global challenges and spread through the galaxy, given the chance, then I think that is going to have relevance for the prioritisation of different existential risks. You will overestimate humanity’s ability to coordinate relative to what you would believe without that conditioning, and that might lead you to, say, be less worried about climate change.
I agree that it makes this post much less convincing that I can’t describe a clear-cut example, though. Possibly that’s a reason not to be as worried about this issue. But to me, the fact that “allows for a strong future” should almost always dominate “probably true” as a principle for choosing between beliefs to adopt intuitively feels like it must be decision-relevant.
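The dominance intuition here can be made concrete with a toy expected-value calculation. All the numbers below are made up purely for illustration, and naive expected-value maximization is assumed:

```python
# Toy sketch (made-up numbers): under naive expected-value maximization,
# a very improbable theory that "allows for a strong future" can dominate
# a far more probable theory, because the value term swamps the credence.

theories = {
    # name: (credence, value of the future if the theory is true)
    "consensus physics": (0.999, 1e10),
    "physics permitting a vastly larger future": (1e-6, 1e30),
}

# Expected value of acting as if each theory were true.
expected_values = {name: p * v for name, (p, v) in theories.items()}

best = max(expected_values, key=expected_values.get)
print(best)  # the low-credence, huge-future theory wins: 1e24 vs ~1e10
```

The point of the sketch is only that no realistic credence penalty (here, a factor of about a million) can outweigh a sufficiently large difference in the size of the future.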
Not quite sure what “actual examples” we can possibly conjure up, but I suspect this is somewhat related to the issue of technology-related X-risks.
Related, also with some relevant discussion in the comments: https://forum.effectivealtruism.org/posts/sEnkD8sHP6pZztFc2/fanatical-eas-should-support-very-weird-projects
Thanks! Very related. Is there somewhere in the comments that describes precisely the same issue? If so I’ll link it in the text.
I don’t have any specific comment in mind to single out.
Toby—interesting essay. But I’m struggling to find any rational or emotive force in your argument that ‘strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true’
The problem is that this leads to a couple of weird edge cases.
First, if we live in a ‘quantum multiverse’, in which there are quadrillions of timelines branching off every microsecond into new universes, then the future is very very large indeed, but any decisions we make to influence it seem irrelevant, insofar as we’d make every possible decision in some branching timeline.
Second, the largest possible futures seem associated more with infinite religious afterlives than with scientifically plausible theories. Should ‘strong longtermists’ simply adopt Christian metaphysics, on the assumption that an infinite afterlife in heaven would be really cool, compared to any atheist metaphysics?
I’d welcome any thoughts about these examples.
Thanks for the comment! I have quite a few thoughts on that:
First, the intention of this post was to criticize strong longtermism by showing that it has some seemingly ridiculous implications. So in that sense, I completely agree that the sentence you picked out has some weird edge cases. That’s exactly the claim I wanted to make! I also want to claim that you can’t reject these weird edge cases without also rejecting the core logic of strong longtermism that tells us to give enormous priority to longterm considerations.
The second thing to say, though, is that I wanted to exclude infinite value cases from the discussion, and I think both of your examples probably come under that. The reason for this is not that infinite value cases aren’t also problematic for strong longtermism (they really are!) but that strong longtermists have already adapted their point of view in light of this. In Nick Beckstead’s thesis, he says that in infinite value cases, the usual expected utility maximization framework should not apply. That’s fair enough. If I want to criticize strong longtermists, I should criticize what they actually believe, not a strawman, so I stuck to examples containing very large (but finite) value in this post.
The third and final thought I have is a specific comment on your quantum multiverse case. If we’d make every possible decision in some branch, does that really mean that none of our decisions have any relevance? This seems like a fundamentally different type of argument from the Pascal’s wager-type arguments that this post relates to, in that I think this objection would apply to any decision framework, not just EV maximization. If you’re going to make all the decisions anyway, why does any decision matter? But you still might make the right decision on more branches than you make the wrong decision, and so my feeling is that this objection has no more force than the objection that in a deterministic universe, none of our decisions have relevance because the outcome is pre-determined. I don’t think determinism should be problematic for decision theory, so I don’t think the many-worlds interpretation of quantum mechanics should be either.
This was really well written! I appreciate the concise and to the point writing style, as well as a summary at the top.
Regarding the arguments, I think they make sense to me. Although this is where the whole discussion of longtermism tends to stay pretty abstract, since we can’t actually put real numbers on it.
For example, in the spirit of your example: does working on AI safety at MIRI prevent extinction while assuming a sufficiently great future, compared to, say, working on AI capabilities at OpenAI? (That is, could a misaligned AI bring about a greater future?)
I don’t think it’s actually possible to do a real calculation in this case, and so we make the (reasonable) base assumption that a future with aligned AI is better than a future with misaligned AI, and go from there.
Maybe I am overly biased against longtermism either way, but in this example it seems to me like the problem you mention isn’t really a real-world worry, but only a theoretically possible Pascal’s mugging.
Having said that, I still think it is a good argument against strong longtermism.