Longtermism and politics both seem like cases where the "error bars are so wide that expected value theory is probably super useless, or an excuse for motivated reasoning, or both". But I don't think this is damning toward EA, because the downsides of a misfire in a brittle theory of change don't seem super important for most longtermist interventions (if your pandemic preparedness scheme misfires, it might accidentally abolish endemic flu, so your miscalculation about the harm or likelihood of a way-worse-than-covid pandemic is sort of fine). Whereas in politics, the brittleness of the theory of change means you can be harmful despite meaning well, which is kinda the central point of anything involving "politics" at all.
Certainly this doesn't hold for all longtermist interventions, but I find it very convincing for the average case.
AI safety has important potential backfire risks, like accelerating capabilities (or causing others to do so, intentionally or not), worsening differential progress, or creating s-risks. I know less about biorisk, but there are infohazards there, so bringing more attention to biorisk can also increase the risk of infohazards leaking or of people searching for them.
I think a separate but plausibly better point is that the "memetic gradient" in politics is characterized in known awful ways, while many longtermist theories of change offer an opportunity for something better. If you pursue a political theory of change, you're consenting to a relentless onslaught of people begging you to make your epistemics worse on purpose. This is a perfectly good reason not to sign up for politics. The longtermist ecosystem is not immune to similar issues, but it certainly seems like there's a fighting chance, or at least that it's the least bad of the available options.