Thanks for writing this post! :-)

Two points:

i. On how we think about cause prioritization, and what comes before
"2. Consideration of different views and ethics and how this affects what causes might be most important."
It’s not quite clear to me what this means. But it seems related to a broader point that I think is generally under-appreciated, or at least rarely acknowledged, namely that cause prioritization is highly value-relative.
The causes and interventions that are optimal relative to one value system are unlikely to be optimal relative to another. This is not to say that there are no causes and interventions that are robustly good on many different value systems; there plausibly are, and identifying novel such causes and interventions would be a great win for everyone. But it is commensurately difficult to identify new such causes and have much confidence in them, given both our great empirical uncertainty and the necessarily tight constraints.
I think it makes sense that people do cause prioritization based on the values, or the rough class of values, that they find most plausible, provided, of course, that those values have been reflected on quite carefully in the first place and scrutinized in light of the strongest counterarguments and alternative views on offer.
This is where I see a somewhat mysterious gap in EA, more fundamental and even more gaping than the cause prioritization gap highlighted here: there is surprisingly little reflection on and discussion of values (something I also noted in this post, along with some speculations as to what might explain this gap).
After all, cause prioritization depends crucially on the fundamental values relative to which one is prioritizing (a crude illustration), so reflecting on values is, in a sense, the very first step on the path toward thoroughly reasoned cause prioritization.
ii. On the apparent lack of progress
As hinted in Zoe’s post, it seems that much (most?) cutting-edge cause prioritization research is found in non-public documents these days, which makes it appear that there is much less research than there in fact is.
This is admittedly problematic: it makes it difficult to get good critiques of the research in question, especially from skeptical outsiders, and it makes it difficult for outsiders to know what in fact animates the priorities of different EA agents and orgs. It may well be best, all things considered, to keep most of this research non-public, but I think it’s worth being transparent about the fact that there is a lot that is non-public, and that this does pose problems in various ways, including epistemically.
This post—which I found interesting and useful—seems relevant to your first point. An excerpt:
We can approach ‘figuring out what to do’ at three different levels of directness (which are inspired by the same kind of goal hierarchy as the Values-to-Actions Chain).
Most indirectly, we can ask ‘what should we value?’ We call that values research, which is roughly the same as ethics.
From our values, we can derive a high-level goal to strive for. For longtermist values, such a goal could be to minimize existential risk.[1] For another set of values, such as animal-inclusive neartermism, the high-level goal could be to minimize the aggregate suffering of farm animals.[2]
More directly, we can ask ‘given our goal, how can we best achieve it?’ We call the research to answer that question strategy research. The result of strategy research is a number of strategic goals embedded in a strategic plan. For example, in existential risk reduction, strategy research could determine how to best allocate resources between reducing various existential risks based on their relative risk levels and timelines.
Most directly, we can ask ‘given our strategic plan, how should we execute it?’ We call the research to answer that question tactics research. Tactics research is similar to strategy research, but is at a more direct level, which makes tactics more specific. For example, in existential risk reduction, tactics research could take one of the subgoals from a strategic plan, say ‘reduce the competitive dynamics surrounding human-level AI’, and ask a specific question that deals with part of the issue: ‘How can we foster trust and cooperation between the US and Chinese governments on AI development?’ In general, less direct questions have more widely relevant answers, but they also provide less specific recommendations for actions to take.
Finally, the plans can be implemented based on the insights from the three research levels.
(I added two line breaks and changed where the diagram was, compared to the original text.)
(That post was written on behalf of my former employer, but not by me, and before I was aware of them.)