Apologies for posting four shortforms in a row. I’ve accumulated quite a few ideas in recent days and poured them all out.
Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a “robustness across future worlds” dimension to the ITN framework.
Epistemic status: low confidence
In cause/intervention exploration, evaluation, and prioritization, EA might be neglecting alternative future scenarios, e.g.:

- Alternative scenarios for the natural environment: if the future world experiences severe climate change or environmental degradation (with serious downstream socioeconomic effects), what are the most effective interventions now to positively influence such a world?
- Alternative scenarios for social forms: if the future world isn’t a capitalist world, or differs from the current world in some other important respect, what are the most effective interventions now to positively influence such a world?
- ...
This is not about pushing for certain futures to be realized; it’s about what to do given that a particular future obtains. Therefore, arguments against pushing for certain futures (e.g. low neglectedness) do not apply here.
For example, an EA might de-prioritize pushing for future X due to its low neglectedness. But if they think X has a non-trivial probability of being realized, and that its realization has rich implications for cause/intervention prioritization, then whenever they do prioritization they need to ask “what should I do in a world where X is realized?” This could mean:

- finding causes/interventions that are robustly impactful across future scenarios, or
- finding causes/interventions that specifically target future X.
In theory, the consideration of alternative futures should already be captured by the ITN framework, but in practice it usually isn’t. It could therefore be valuable to add one more dimension to the framework: “robustness across future worlds”.
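To make this concrete, here is a toy sketch (a minimal Python illustration with made-up scenario probabilities, made-up impact numbers, and a hypothetical scoring scheme, not a real prioritization model) of how scenario-weighted expected impact and a crude “robustness across future worlds” score could sit alongside ITN-style estimates:

```python
# Toy sketch: scenario-weighted scoring of interventions.
# All scenario probabilities and impact numbers are made up for illustration.

# Subjective probabilities of (assumed mutually exclusive) future scenarios.
scenarios = {
    "business_as_usual": 0.5,
    "severe_climate_degradation": 0.3,
    "alternative_social_form": 0.2,
}

# Estimated impact of each intervention conditional on each scenario
# (arbitrary units, e.g. an importance x tractability x neglectedness product).
interventions = {
    "intervention_A": {"business_as_usual": 10, "severe_climate_degradation": 1, "alternative_social_form": 2},
    "intervention_B": {"business_as_usual": 6, "severe_climate_degradation": 5, "alternative_social_form": 5},
    "intervention_C": {"business_as_usual": 0, "severe_climate_degradation": 12, "alternative_social_form": 0},
}

def expected_impact(impacts: dict) -> float:
    """Probability-weighted impact across all scenarios."""
    return sum(scenarios[s] * impacts[s] for s in scenarios)

def robustness(impacts: dict) -> float:
    """One crude robustness measure: the worst-case impact across scenarios."""
    return min(impacts[s] for s in scenarios)

for name, impacts in interventions.items():
    print(f"{name}: expected impact = {expected_impact(impacts):.1f}, "
          f"worst-case impact = {robustness(impacts)}")
```

On these made-up numbers, intervention_A scores highest on expected impact but poorly on the worst-case measure, intervention_B is the most robust across worlds, and intervention_C only pays off if the climate scenario is realized, roughly mirroring the distinction above between robustly impactful and scenario-targeted interventions.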
Also, futures can differ along many different dimensions. EA has generally already considered the dimensions related to existing EA topics (e.g. which trajectory of AI is actualized), but tends to ignore the dimensions that aren’t. This seems unreasonable: EA-topic-related dimensions aren’t necessarily the dimensions along which futures vary the most.
Finally, note that in some future worlds it’s easier to have high altruistic impact than in others. For example, in a capitalist world altruists seem to be at quite a disadvantage relative to profit-seekers; under some alternative social forms, altruism plausibly becomes much easier and more impactful, while under others it may become even harder. If so, we may want to prioritize the futures that have the most potential for current altruistic interventions.