I also think that EA sometimes dismisses categories of problems out of an assumption that most solutions currently proposed to those problems are either not neglected or have a low expected value, despite the likelihood that high-value opportunities are lurking amidst the chaff.
After all, EA’s original focus was sorting through the labyrinth of ineffective direct global health and poverty reduction interventions. In theory, we should now be sorting through other broad fields like public policy, climate change, and so on, to find interventions comparable to the best direct aid/global development opportunities.
In the climate change realm, environmental law groups like Earthjustice appear on paper to be competitive with top GiveWell nonprofits. Much more thorough research would be needed, but napkin calculations seem promising.
I think those things sound like good suggestions. What’s your bottleneck for doing this research?
There needs to be more willingness from grantmakers and other funders to bear the search costs of new ideas. There is a strong emphasis on skepticism within EA, which is great, but it usually translates to “we should not fund this because of perceived issues X, Y, and Z, or uncertainty regarding the benefits of A, B, and C,” when those issues and benefits are better addressed through empirical testing than through a skeptic’s intuitions. We need a community that will bear the discovery costs of promising interventions, but this seldom happens unless the proponent of the idea already has clout and/or connections within EA.
If we don’t have the information to evaluate the effectiveness of a possible solution, the answer is not to discard it, but rather to weigh the cost of acquiring that information against the potential value across the range of reasonably possible outcomes.
What would be helpful, if it doesn’t already exist, would be aggregating sets of potential solutions, listing the resources currently directed toward evaluating their EV, identifying the bottlenecks (often money) to assessing EV, and making reasonable estimates of the potential exploitation value under various hypothesized EVs. Then those with resources in EA could ensure that promising paths have the resources to be explored, and we could fully exploit the best solutions.
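As a very rough sketch of what I mean (Python, with hypothetical field names and made-up numbers, purely illustrative), each candidate intervention could carry an estimate of what it would cost to resolve its key uncertainties and what the payoff would be if the research comes back positive, so funders could rank where paying the discovery cost looks most worthwhile:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                  # intervention being considered
    eval_funding: float        # $ currently going toward evaluating its EV
    eval_bottleneck: str       # what is blocking a proper evaluation (often money)
    research_cost: float       # estimated $ to resolve the key uncertainties
    p_promising: float         # rough probability that research shows it is worth scaling
    value_if_promising: float  # rough $-equivalent impact if it is scaled up

    def value_of_information(self) -> float:
        # Expected value of doing the research, net of its cost.
        # A real analysis would use distributions, not point estimates.
        return self.p_promising * self.value_if_promising - self.research_cost

# Hypothetical entries purely for illustration; none of these numbers are real.
candidates = [
    Candidate("Environmental litigation", 50_000, "analyst time", 200_000, 0.2, 5_000_000),
    Candidate("YIMBY legal enforcement", 10_000, "money", 100_000, 0.1, 3_000_000),
]

# Surface the candidates where paying the discovery cost looks most worthwhile.
for c in sorted(candidates, key=lambda c: c.value_of_information(), reverse=True):
    print(f"{c.name}: VOI ≈ ${c.value_of_information():,.0f} (bottleneck: {c.eval_bottleneck})")
```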
I am rather pessimistic about EA’s prospects for this.
Why are you pessimistic? I assume it has something to do with perceived power dynamics or incentive misalignment?
It seems like a crowdsourcing mechanism for potential solutions, plus a small team to manage the data and make estimates of expected info cost, actual cost, and impact, would be fairly simple to implement.
Maybe one could even lean harder into crowdsourcing the info/actual costs and impact by operating a sort of prediction market on them: if a solution is eventually researched and your prediction of its info and actual costs was correct (or close), you earn points and a higher weighting in future crowdsourced estimates?
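A minimal sketch of that weighting idea (assuming a simple inverse-error rule; a real system would presumably use a proper scoring rule such as Brier or log scores, and all names and numbers below are hypothetical):

```python
def weight_from_history(past_errors: list[float]) -> float:
    """Give forecasters with smaller historical errors a larger weight.

    past_errors: relative errors of this forecaster's resolved predictions
    (e.g. |predicted cost - actual cost| / actual cost). New forecasters
    with no history get a neutral weight of 1.0.
    """
    if not past_errors:
        return 1.0
    mean_error = sum(past_errors) / len(past_errors)
    return 1.0 / (1.0 + mean_error)

def aggregate_estimates(estimates: dict[str, float],
                        history: dict[str, list[float]]) -> float:
    """Weighted average of forecasters' estimates for one quantity
    (e.g. the info cost of researching a particular intervention)."""
    weights = {name: weight_from_history(history.get(name, []))
               for name in estimates}
    total = sum(weights.values())
    return sum(estimates[n] * weights[n] for n in estimates) / total

# Hypothetical usage: three forecasters estimate a research cost in dollars.
estimates = {"alice": 80_000, "bob": 150_000, "carol": 120_000}
history = {"alice": [0.1, 0.2], "bob": [0.9], "carol": []}  # past relative errors
print(f"Weighted estimate: ${aggregate_estimates(estimates, history):,.0f}")
```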
Although I don’t necessarily endorse it, the argument against investing in new cause areas that aren’t a significant existential risk within a few decades is straightforward. Therefore, any cause prioritization project should address this objection with a rebuttal, or else pessimism is warranted.
Good point. I’d imagine that this objection stems from the perspective “basically all the highest utility/dollar interventions are in x-risk, but continuing global health interventions costs us little because we already have those systems in place, so it’s not worth abandoning them.”
From this perspective, one might think that even maintaining existing global health interventions is a bad util/dollar proposition in a vacuum (as those resources would be better spent on x-risk), but for external reasons, splintering EA is not worth pressuring people to abandon global health.
Let’s take “splintering EA” to mean that nearly only x-riskers are left in EA, with perhaps a group of dissidents creating a competing movement.
These are the pros for x-riskers post-split:
Remaining EAs are laser-focused on x-risk, and perhaps more people have shifted their focus from partly global health and partly x-risk to fully x-risk than vice versa. (More x-risk EAs, and x-risk EAs are more effective.)
These are the cons for x-riskers post-split:
Remaining EAs have less broad public support and less money going into “general EA stuff” like community building and conferences, because some of the general EA money and influence was coming from people who mostly cared about global health. As a related consequence, it becomes harder to attract people initially interested in global health and convert them into x-riskers. (Fewer x-risk EAs, and x-risk EAs are less effective.)
It seems that most x-riskers think the cons outweigh the pros, or a split would have occurred—at least there would be more talk of one.
The thing is, refraining from adding climate change as an EA focus would likely have a similar pro/con breakdown to removing global health as an EA focus:
Pros: No EAs are persuaded to put money/effort that might have gone to x-risk into climate change.
Cons:
Loss of utils due to potentially EA-compatible people who expend time or money on climate change prevention/mitigation not joining the movement and adopting EA methods.
Loss of potential general funding and support for EA from people who think that the top climate change interventions can compete with the util/dollar rates of top global health and x-risk interventions, plus the hordes of people who aren’t necessarily thinking in terms of utils/dollar yet and just instinctively feel climate change is so important that a movement ignoring it can’t possibly know what it’s doing. Even if someone acting on instinct rather than utils/dollar won’t necessarily improve the intellectual richness of EA, their money and support would be pretty unequivocally helpful.
These are basically the same pros and cons as kicking out the global health people, plus an extra cost from not infiltrating another cause area with EA methods.
Therefore, I would argue that any x-risker who does not want to splinter EA should also support EA branching out into new areas.
I think this is a totally reasonable argument, and you can add a piece that’s about personal fit. Like, 80,000 Hours has pretty arbitrarily guesstimated how heavily one ought to weigh personal fit in career choice, slapped some numbers on it, and published it as a prioritization scale that people sometimes take too seriously and that doesn’t actually make sense if you look at what some of the numbers imply (i.e. everybody should work on AI safety even if they’re an actively bad fit).
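To spell out the arithmetic behind that complaint (with entirely made-up illustrative numbers, not 80,000 Hours’ actual figures): if the gap between cause-level scores is much larger than the range the personal-fit multiplier is allowed to take, fit can never flip the ranking.

```python
# Hypothetical scores: the cause gap (100x) dwarfs the allowed fit range (0.1x to 2x),
# so even an actively bad fit for AI safety still "beats" a great fit for climate.
ai_safety_cause_score = 100
climate_cause_score = 1

bad_fit, great_fit = 0.1, 2.0

print(ai_safety_cause_score * bad_fit)   # 10.0
print(climate_cause_score * great_fit)   # 2.0
```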
But if you start by weighing personal fit more highly, there’s a case for saying, “OK, I happen to care a lot about climate change, so even if it’s not the highest-priority cause, it’s still what I personally ought to work on. And I need guidance about how to move the needle on climate change.” And if you start from there, you can still do a whole EA solutions prioritization analysis, just taking for granted that it will be 100% climate change focused.
Personally I suspect that’s a good solution—we have great ideas about how to prioritize stuff but I’m not very optimistic about our ability to fundamentally change what causes people work on. Like maybe people should work on AI x risk even if they’re an actively bad fit, but they won’t, and we shouldn’t waste time trying to convince them to. Instead we should just create a great on-ramp for people who are interested in AI safety and a second great on-ramp for people who are interested in climate change. Figure out where people are and are not flexible in what they work on and target that.
I’m pretty new to the movement and have generally never done research at a high formal level, so I suppose my bottleneck is expertise. Is there a link somewhere to a sort of guide for doing research at the level of detail expected?
Welcome to EA, Sam!
I actually don’t know; I’ve never done that type of research either. I mostly think about AI risk.
But I did scroll through the list of EA Forum tags for you and found these:
Research—EA Forum (effectivealtruism.org)
Global priorities research—EA Forum (effectivealtruism.org)
Independent research—EA Forum (effectivealtruism.org)
Research methods—EA Forum (effectivealtruism.org)
Research training programs—EA Forum (effectivealtruism.org)
Maybe there’s something helpful in there?
I found your systems change post:
A Newcomer’s Critique of EA—Underprioritizing Systems Change? - EA Forum (effectivealtruism.org)
I have to admit I only read the title and a few sentences here and there. But you are right that EAs are not much into systems change. Part of this is a founder effect, but I also believe part of it is a misuse of the neglectedness framework: systems change is basically politics, which is not a neglected area, but my prior is that it contains lots of neglected interventions.
For example, this is super cool:
Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy − 80,000 Hours (80000hours.org)
I remember a few years ago there seemed to be a small but growing interest in systems change in EA. I found the Facebook group, but it’s mostly dead now. Scrolling back, it seems like it was sadly taken over by memetic warfare rather than discussion.
Effective Altruism: System Change | Facebook
Maybe the real reason EA has not been able to have an ongoing systems change discussion is because this is always how it ends?
Related:
Politics is the Mind-Killer—LessWrong
LW is sort of like a sister community to EA, with lots of overlap in membership and influence going both ways. I believe that the above post is part of the founder effect that has kept EA away from politics, but I also think its arguments are not wrong.
I don’t believe Facebook’s structure, or people’s prior associations with the quality of discussion that occurs there, would enable rational debate at the level of the EA Forum. But on any platform, I would agree that once a line in the sand is crossed and discussions of specific policies come to be seen as “Politics”, tribalism creeps in and the results are usually quite bad.
I can’t imagine that political tribalism would fly on the EA Forum, although of course vigilance is needed to keep it that way. Indeed, if I were to rewrite that post today, I would revise it to express much less confidence in a particular view of global systems, and focus more on the potential for thinking about global systems to offer opportunities for large impact.
I think there is evidence EA is capable of doing this without damaging epistemics. It is currently widely accepted to talk about AI or nuclear regulations that governments might adopt, and I haven’t seen anything concerning in those threads. My argument is essentially just that policy interventions of high neglectedness and tractability should not be written off reflexively.
Earthjustice and other law groups (there’s a YIMBY Law group as well that is probably less impactful but at least worth looking into) are nice because they improve de facto systems, but don’t need to engage with the occasional messiness of overt system change. Instead, they ensure local governments follow the laws that are already in place.