Thanks for the thoughts. I basically agree with you, and I’d consider myself a “longtermist” too, for similar reasons. What I mainly want to reject is the comparatively extreme implication of “strong longtermism” as defended by Greaves and MacAskill: that extremely speculative and epistemically fragile longtermist interventions are more cost-effective than even the most robust and impactful near-termist ones.
I think there are likely many steps we could and should be taking that could quite reasonably be expected to reduce real and pressing risks.
I would add to your last bullet, though, that speculative theories will only die if there’s some way to falsify them, or at least to seriously call them into question. Strong longtermism is particularly worrying because it is unfalsifiable. For one thing, too much weight rests on a fundamentally untestable contention: the size and goodness of the far future. For another, it’s basically impossible to test whether speculative interventions intended to very slightly reduce existential risk actually succeed (how could we possibly tell whether risk was reduced by 0.00001%, or increased by 0.00000001%?). As a result, the theory could survive forever, no matter how poor a job it’s doing.
Longtermist interventions (even speculative ones) are likely easier to reject if they prove ineffective when they are supported by “cluster thinking” styles that put more weight on testable assumptions (e.g., about the neglectedness or tractability of a particular issue, or about an intervention’s effect on “signposts” like international coordination or the rate of near misses), or when they aim at more significant reductions in existential risk (which should be somewhat easier to measure than very small ones).