Far-future effects are the most important determinant of what we ought to do
I agree it’s insanely hard to know what will affect the far future, and how. But I think we should still try, often by using heuristics (one I’m currently fond of is “what kinds of actions seem to put us on a good trajectory, e.g. to be doing well in 100 years?”).
I think that in cases where we do have reason to believe an action will affect the long-run future broadly and positively in expectation (i.e. even if we’re uncertain), that’s an extremely strong reason, and usually an overriding one, to favour it over an action that looks worse for the long-run future. I think that’s sufficient for agreement with the statement.
Thank you for doing this analysis!
Would you say this analysis is limited to safety from misalignment-related risks, or does it cover any (potentially catastrophic) risks from AI, including misuse, gradual disempowerment, etc.?