I’ve always thought of “Cause X” as a theme for events like EAG, meant to prompt thinking in EA, rather than as something ever intended to be taken seriously and literally in actual EA action. If it was intended to be that, I don’t think it ever should have been, and I don’t think it should be treated as such now. I don’t see how it makes sense to anyone as a practical pursuit.
There have been some cause prioritization efforts that took ‘Cause X’ seriously. Yet given the presence of x-risk reduction as a top priority in EA, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That’s because, by its nature, whether x-risk is or isn’t the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it. For prioritizers willing to work within the boundary that the assumptions establishing x-risk as the top moral priority are all true, cause prioritization has focused on how actors should work on x-risk reduction.
Since the question was reformulated as “Is x-risk reduction Cause X?”, much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I’m aware, no other cause prioritization efforts have been predicated on the theme of ‘finding Cause X.’
In general, I’ve never thought the pursuit of Cause X made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.
While they’re disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.
It’s taken for granted in EA conversations, but there are shared assumptions behind this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into other, extant movements that align better with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose which socially conscious community to spend our own time in, is the different set of assumptions we make in trying to answer the question: ‘What is Cause X?’
They’re not often brought to attention, but there are sources outlining what the ‘fundamental assumptions’ of EA are (what are typically called ‘EA values’), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes one of the following forms:
1. If one is confident one’s current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. An example is the work of any EA-aligned organization permanently dedicated to one or more specific causes, and efforts to support such organizations.
2. If one is confident one’s current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn’t know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.
3. If one is confident the best available option one will identify lies within the EA framework, but has little to no confidence in what that option will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.
I don’t see how it makes sense to anyone as a practical pursuit.
GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas (e.g., https://www.openphilanthropy.org/research/cause-reports); their general framework for this seems quite practical.
That’s because, by its nature, whether x-risk is or isn’t the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it.
Pretty strongly disagree with this. I think there’s a strong case for x-risk being a priority cause area, but I don’t think it dominates all other contenders. (More on this here.)
The concerns you raise in your linked post are actually the concerns I have in mind, and that a lot of other people have cited, for why they don’t currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I’ve talked to who don’t share those priorities say they’d be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.
GiveWell’s and Open Phil’s work wasn’t termed ‘Cause X,’ but I think a lot of the work you’re pointing to would’ve started before ‘Cause X’ was a common term in EA. It definitely qualifies. One thing to note is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue this kind of research. So my contention that this kind of research is impractical for most organizations still holds up, though it may be falsified in the near future. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:
- institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
- small, private non-profit organizations like Rethink Priorities.
Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn’t know whether there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable, and I hope it keeps up.