Focus on Civilizational Resilience over Cause Areas
TLDR:
I believe that the distribution of current EA efforts is sub-optimal and that EA should focus on generalist resilience efforts to a far greater extent (anywhere between 5% and 50% of total effort, i.e. scaling up current resilience efforts by roughly 1,000x to 100,000x).
Premises:
1. EA is built on the principles of evidence and reason when coming to conclusions about the importance of cause areas and projects for the long-term future of humanity.
2. Rationality/EA tools such as forecasting and cause prioritization are good tools for reaching the classic EA conclusions about ‘High Impact Problems’: the global problems and events that, out of all known problems, are (1) most probable and (2) would cause the greatest magnitude of suffering/deaths/DALYs/whatever.
3. Practically all EA efforts are currently built upon these ‘classic EA conclusions’.
Disclaimer:
4. I generally agree with these conclusions given the expected impact, neglectedness, and tractability of these well-formulated cause areas.
I generally agree that it is a good thing that EA aims to increase the amount of attention and work done on these specific issues, with the goal of scaling up countermeasures by several orders of magnitude.
However:
5. I find it likely that there is a set of currently unknown global problems/events whose expected suffering (based on (1) probability and (2) magnitude) is similar to that of currently known High Impact Problems.
6. I find it likely that some of these unknown High Impact Problems can be ‘discovered’; arguably that is what happened with AI safety. Let’s say discovery means that a problem becomes obvious and looming enough for someone to write something like a cause area profile on it.
7. I find it likely that some of these unknown High Impact Problems cannot be ‘discovered’ until it is ‘too late’ for EA to start countering them, even by systematically trying to find such blind spots or black swans. Let’s say too late means that it is too late for preventive countermeasures, leading to either (1) extinction or (2) mitigation that cannot avert a large portion of the harm done.
(More or less probable examples of such events/problems: the discovery of a ‘black ball’ technology, an unforeseen and strongly destabilizing political event, the discovery of malevolent multidimensional lifeforms, EA becoming the target of a coordinated attack; let your imagination run free for more examples.)
8. In these cases, generalist resilience-building efforts are the only way to effectively prevent or mitigate unknown High Impact Problems when they occur.
(Examples of such efforts: research/advocacy for increased societal resilience, research/advocacy for improved institutional decision making, or university/career advice focusing more on general leverage rather than on specialist, cause-area-specific paths such as AI safety researcher.)
Controversial conclusions:
9. Tools like forecasting or cause prioritization and their ‘classic EA conclusions’ do not give a comprehensive picture of all High Impact Problems, and likely never will.
10. Practically all EA efforts are currently built upon these ‘classic EA conclusions’* and are therefore likely sub-optimally distributed.
11. EA efforts should focus to a greater extent on general countermeasures to all possible High Impact Problems, known and unknown.
12. My naive intuition for a desirable ‘greater extent’ would be: focusing a significant part of total EA efforts on generalist resilience building, say anywhere between 5% and 50%, maybe more, by scaling up current efforts by roughly 1,000x to 100,000x, maybe more.
Caveats and further thoughts:
Arguably, generalist resilience efforts are not neglected, as these topics are the daily bread and butter of traditional politicians, economists, etc.
It is likely hard, maybe impossible, to find the ideal split of resources between specialist and generalist efforts, as it could be impossible to quantify the expected negative impact of black-swan-type unknown High Impact Problems.
I realize that my intuitions about the ideal allocation implicitly assume a certain prior probability of unknown High Impact Events occurring, and that intuitions about this probability might differ from person to person (the toy sketch at the end of this section makes that dependence explicit).
The classic book The Black Swan by Nassim Taleb is stuffed with historical examples of unknown High Impact Events and of attempts at risk quantification that overlooked them; my intuitions have probably been heavily influenced by the author.
Arguably, many generalist efforts also generalize well to specific classic cause areas, which would further increase the ideal proportion of resources going into generalist resilience building.
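To make the dependence on that prior concrete, here is a deliberately crude toy model in Python. Everything in it is my own made-up assumption rather than an estimate: the square-root shape of diminishing returns, the generalist_discount of 0.5, and the candidate priors are all illustrative, so treat it as a sketch of the shape of the argument, not a real allocation exercise.

```python
import math

# Toy model only: every number here is a made-up assumption, not an estimate.
# It illustrates how the "ideal" generalist share moves with the assumed prior
# probability mass sitting in unknown High Impact Problems.

def expected_harm_averted(generalist_share: float,
                          p_unknown: float,
                          generalist_discount: float = 0.5) -> float:
    """Expected harm averted per unit of total EA effort.

    generalist_share:    fraction of effort going to generalist resilience building
    p_unknown:           assumed share of expected harm coming from unknown problems
    generalist_discount: effectiveness of generalist effort relative to specialist
                         effort on any problem it ends up covering

    sqrt() encodes diminishing returns within each bucket of effort; specialist
    effort only helps against known problems, while only generalist effort helps
    against unknown ones.
    """
    specialist_share = 1.0 - generalist_share
    averted_known = (1.0 - p_unknown) * math.sqrt(specialist_share)
    averted_unknown = p_unknown * generalist_discount * math.sqrt(generalist_share)
    return averted_known + averted_unknown

# Sweep the generalist share for a few assumed priors on unknown problems.
for p_unknown in (0.3, 0.5, 0.7):
    best_share = max(range(101),
                     key=lambda s: expected_harm_averted(s / 100, p_unknown)) / 100
    print(f"p_unknown = {p_unknown:.1f} -> ideal generalist share ~ {best_share:.2f}")
```

Under these made-up numbers, the ‘ideal’ generalist share swings from roughly 4% to nearly 60% purely as a function of the assumed prior on unknown problems, which is exactly why reasonable people can land anywhere inside (or outside) my 5-50% range.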
__________________________________________________
*Keep in mind that even grants whose purpose is to identify thus-far unknown High Impact Problems will only come up with new ‘classic EA conclusions’ and not give a comprehensive picture of all High Impact Problems, due to (7).