Isn’t Response 5 (go longtermist) really a subset of Response 4 (Ignore things that we can’t even estimate)? It proposes to ignore short-termist interventions because we can’t estimate their effects.
It’s not ignoring them; it’s selecting interventions that look more robustly good, about which we aren’t so clueless.
Is the idea that once these longtermist interventions are fully funded (diminishing returns), we then start looking at short-term interventions?
I think the claim is that we don’t know that any short-termist interventions are good in expectation, because of cluelessness.
For what it’s worth, I don’t agree with this claim; this depends on your specific beliefs about the long-term effects of interventions.
Also, this seems like a bad decision theory. I can’t estimate the long-term effects of eating an apple, but that doesn’t imply that I should starve due to indecision.
Longtermism wouldn’t say you should die; it’s just that, unless you know more, it wouldn’t say you shouldn’t die either.
You can’t work on longtermist interventions if you die, though, and working on them might be robustly better than dying.
Is this longtermism?
1. List all possible actions {A_1, ..., A_K}.
2. For each action A_j, calculate its expected value V_t(A_j) over t = 1:∞, using the social welfare function.
3. If we can’t calculate V_t for some t, due to cluelessness, then skip over that action.
4. Out of the remaining actions, choose the one with the highest expected value.

Or, (3′): if we can’t calculate V_t for A_i and A_j, then assume that they’re equal, and rank them by their expected value over the periods before t. (A rough code sketch of this procedure follows.)
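For concreteness, here is a minimal Python sketch of that procedure. Everything in it is an assumption for illustration: the social welfare function is stood in for by a hypothetical `V(action, t)` that returns the expected value of an action in period t, or `None` when cluelessness makes that period incalculable, and the infinite horizon t = 1:∞ is truncated to a finite `HORIZON`. The (3′) variant is also generalized loosely: each action is ranked by its value over the periods before the first t at which it becomes incalculable.

```python
HORIZON = 1000  # hypothetical finite truncation of t = 1:∞

def expected_value(action, V):
    """Step 2: sum V_t(action) over all periods.

    Returns None as soon as any period is incalculable, which
    implements step 3: the action gets skipped entirely.
    """
    total = 0.0
    for t in range(1, HORIZON + 1):
        v = V(action, t)
        if v is None:  # clueless about this period
            return None
        total += v
    return total

def choose(actions, V):
    """Steps 1-4: drop incalculable actions, pick the best of the rest."""
    scored = {a: expected_value(a, V) for a in actions}
    candidates = {a: v for a, v in scored.items() if v is not None}
    if not candidates:
        return None  # clueless about every action; the procedure is silent
    return max(candidates, key=candidates.get)

def choose_3prime(actions, V):
    """Variant (3′): treat incalculable periods as equal across actions,
    and rank each action by its value over the periods before the first
    t we're clueless about."""
    def partial_value(action):
        total = 0.0
        for t in range(1, HORIZON + 1):
            v = V(action, t)
            if v is None:
                break  # stop at the first incalculable period
            total += v
        return total
    return max(actions, key=partial_value)

# Toy usage: "apple" is calculable at every t; "x_risk" (a made-up
# action name) becomes incalculable after t = 10.
V = lambda a, t: 0.01 if a == "apple" else (2.0 if t <= 10 else None)
print(choose(["apple", "x_risk"], V))         # "apple"  (x_risk skipped)
print(choose_3prime(["apple", "x_risk"], V))  # "x_risk" (20.0 > ~10.0)
```

Note that `choose` goes silent whenever every action is incalculable, which is exactly the “starve due to indecision” worry above, while (3′) falls back to the periods it can still evaluate.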
So longtermism is not a general decision theory, and is only meant to be applied narrowly?
Longtermism is the claim (or thesis) that we can do the most good by focusing on effects going into the long-term future:

“Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism/