We are talking about making decisions whose outcome is one of the best things we can do for the far future.
An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome.
This is probably a semantic disagreement, but averting a terrible outcome could be viewed as one of the best things we can do for the far future. The part I was disagreeing with was when you said “I’m just saying one attractor state is better than the other in expectation, not that one of them is so great.” This gives the impression that longtermism is satisfied with prioritising one option over another, regardless of the context of other options which, if considered, would produce outcomes that are “near-best overall”. And as such it’s a somewhat strange claim that one of the best things you could do for the far future is in actuality “not so great”.
I don’t understand some of what you’re saying including on ambiguity.
My point could ultimately be summarised by asking: how do you know that freedom (or any other value) will even make sense in the far future, let alone be valued? You don’t. You’re just assuming it will make sense and be valued, because it makes sense and is valued now. While that may be sufficient for an argument about the near future, I think it’s a very weak argument for its relevance to the far future.
At its heart, the “inability to predict” arguments hold strongly onto the sense that the far future is likely to be radically different, and that you are therefore making a claim to knowledge of what is ‘good’ in this radically different future.
Could I be wrong? Sure, but we are doing things based on expectation.
I feel like “expectation” is doing far too much work in these arguments. It’s not convincing to simply claim something is likely or expected; that just raises the question of why it is likely or expected.
Nevertheless, I think the focus on non-existential-risk examples, like the US having dominance over China, is a red herring for defending longtermism. I think the strongest claims are those for taking action to prevent existential risk. But even there, the actions are still subject to the same criticisms regarding the inability to predict how they will actually positively influence the far future.
For example, take reducing existential risk by developing some sort of asteroid defense system. While in the short term an asteroid defense system might seem to contribute straightforwardly to the goal of reducing existential risk, it’s unclear how such systems or other mitigation policies might interact with other technologies or societal developments in the far future. For instance, advanced asteroid deflection technologies could have dual-use potential (like space weaponization) that creates new risks or unforeseen consequences. Thus, while reducing the risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.
There is also an accounting issue that distorts estimates of the impact of particular actions on the far future. Calculating the expected value of minimising the existential risk associated with an asteroid impact, for example, doesn’t take into account changes in expected value over time. For a simple example: as soon as humans start living comfortably beyond Earth as well as on it (for example, on Mars), the existential risk from an asteroid impact declines dramatically, and it declines further as we extend out through the solar system and beyond. Yet the expected value is calculated on a time horizon whereby the value of this action, reducing risk from asteroid impact, will endure for the rest of time, when in reality the value of the action, as originally calculated, will probably endure for less than 50 years.
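To make the accounting point concrete, here is a toy calculation (a minimal sketch in Python; every number in it is an illustrative assumption, not an estimate from any source). It just shows that the very same intervention looks several orders of magnitude more valuable when its risk reduction is credited for a million years than when it is credited for only 50.

```python
# Toy model of the accounting issue described above.
# All numbers are illustrative assumptions, not real estimates.

annual_impact_risk = 1e-8   # assumed annual probability of a civilisation-ending impact
risk_reduction = 0.5        # assumed fraction of that risk the defense system removes
value_of_future = 1e15      # assumed value of the long-term future (arbitrary units)

def expected_value(horizon_years: int) -> float:
    """Expected value of the intervention if its risk reduction
    only 'counts' for the given number of years."""
    # Probability that an impact the system would have averted
    # occurs at some point within the horizon.
    p_averted = 1 - (1 - annual_impact_risk * risk_reduction) ** horizon_years
    return p_averted * value_of_future

# Naive accounting: the benefit is assumed to endure indefinitely.
print(expected_value(1_000_000))   # ~5e12

# Adjusted accounting: off-world settlement makes the risk largely moot after ~50 years.
print(expected_value(50))          # ~2.5e8
```

On these made-up numbers, the naive calculation overstates the intervention’s expected value by a factor of roughly 20,000; the direction of the error, not the magnitude, is the point.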
This is fantastic!
Do you know if anything like this exists for other cause areas, or the EA world more broadly?
I have been compiling and exploring resources available for people interested in EA and different cause areas. There are a lot of organisations and opportunities to get career advice, undertake courses, or get involved in projects, but it is all scattered, and there is no central repository or guide for navigating the EA world that I know of.