Why I'm not sure it'd be worthwhile for all EAs to gain a high-level understanding of (basically) all IBCs
(Note: I'm not saying I think it's unlikely to be worthwhile, just that I'm not sure. And as noted in another comment, I do agree with the broad thrust of this post.)
I basically endorse a tentative version of Objection #1; I think more people understanding more IBCs is valuable, for the reasons you note, but it's just not clear how often it's valuable enough to warrant the time required (even if we find ways to reduce the time required). I think there are two key reasons why that's unclear to me:
I don't think causes differ astronomically in the expected impact a reasonable EA should assign them after (let's say) a thousand hours of learning and thinking about IBCs, using good resources
(Note: By 'do causes differ astronomically in impact', I mean something like 'does the best intervention in one cause area differ astronomically in impact from the best intervention in another area', or a similar statement but with the average impact of 'positive outliers' in each cause, or something)
I do think a superintelligent being with predictive powers far beyond our own would probably see the leading EA cause areas as differing astronomically in impact or expected impact
But we're very uncertain about many key questions, and will remain very uncertain (though less so) after a thousand hours of learning and thinking. And that dampens the differences in expected impact (I try to spell out this dampening in a short sketch a few points below)
Tomasik fleshes this sort of point out here: https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/
I think this point might actually itself warrant inclusion as an IBC about global priorities research or cause prioritisation or something
And that in turn dampens the value of further efforts to work out which cause one should prioritise
It also pushes in favour of plucking low-hanging fruit in multiple areas and in favour of playing to one's comparative advantage rather than just to what's highest priority on the margin
See also the comments on this post
See also Doing good together - how to coordinate effectively, and avoid single-player thinking
I expect the EA community will do more good if many EAs accept a bit more uncertainty than they might naturally be inclined to accept regarding their own impact, in order to just do a really good job of something
This applies primarily to the sort of EAs who would naturally be inclined to worry a lot about cause prioritisation. I think most of the general public, and some EAs, should think a lot more than they naturally would about whether they're prioritising the right things for their own impact.
This also might apply especially to people who already have substantial career capital in one cause area
(But note that I'm saying 'dampens' and 'pushes in favour', not 'eliminates' or 'decisively proves one should')
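To spell out the 'dampening' mechanism I have in mind, here's a minimal sketch (my own toy formalisation, in the spirit of the Tomasik essay linked above, not anything from the post): if each cause has some non-negligible probability of realising a payoff of roughly the same astronomical size, then however large that payoff is, the ratio of expected impacts is bounded by the ratio of those probabilities.

```latex
% Toy model (illustrative assumptions, not defended estimates):
% V   = an astronomically large payoff (e.g. a flourishing long-term future)
% p_A = probability cause A realises V; p_B = probability cause B realises V
%       (e.g. via flow-through effects); a, b = modest direct benefits of each cause
\[
  \frac{\mathbb{E}[A]}{\mathbb{E}[B]} \;=\; \frac{p_A V + a}{p_B V + b}
  \;\longrightarrow\; \frac{p_A}{p_B} \qquad \text{as } V \to \infty .
\]
% So however enormous V is, the expected impacts differ by roughly p_A / p_B,
% and our uncertainty plausibly keeps that ratio in the 10s or 100s rather than
% making it astronomical.
```

The real question is then how far apart those probabilities are, and that's exactly the kind of thing that stays quite uncertain even after a thousand hours of reading.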
I think different interventions within a cause area (or at least within the best cause area) differ in expected impact by a similar amount to how much causes differ (and could differ astronomically in 'true expected impact', evaluated by some being that has far less uncertainty than we do)
So I disagree with what I think you mean by your claim that 'There probably won't be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)'
One thing that makes this clearly true is that, within every cause area, there are some interventions which have a negative expected impact, and others which have the best expected impact (as far as we can tell)
So the difference within each cause area spans the range from a negative value to the best value within the cause area
And at least within the best cause area, that's probably a larger difference than the difference between cause areas (since I'd guess that each cause area's best interventions are probably at least somewhat positive in expectation, or not as negative as something that backfires in a very important domain)
It's harder to say how large the differences in expected impact between the currently leading candidate interventions within each cause area are
But I'd guess that each cause area will contain some interventions that people new to the area would consider, and that these would have approximately 0 or negative value
E.g., by being and appearing naive and thus causing reputational harms or other downside risks
(Again, I feel I should highlight that I do agree with the general thrust of this post.)
Ironically, having said this, I also think I disagree with you in sort-of the opposite direction on two specific points (though I think these disagreements are quite superficial and minor).
As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out).
I agree with the basic idea that it's probably best to start off thinking mostly about things like IBCs, and then on average gradually increase how much one focuses on prioritising and acting within a cause area. But it doesn't seem ideal to me to see this as a totally one-directional progression from one stage to a very distinct stage.
I think even to begin with, it might often be good to already be spending some time on prioritising and acting within a cause area.
And more so, I think that, even once one has mostly settled on one cause area, it could occasionally be good to spend a little time thinking about IBCs again. E.g., let's say a person decides to focus on longtermism, and ends up in a role where they build great skills and networks related to lobbying. But these skills and networks are also useful for lobbying on other issues, and the person is asked if they could take on a potentially very impactful role using the same skills and networks to reduce animal suffering. (Maybe there's some specific reason why they'd be unusually well-positioned to do that.) I think it'd probably then be worthwhile for that person to again think a bit about cause prioritisation.
I don't think they should focus on the question 'Is there a consideration I missed earlier that means near-term animal welfare is a more important cause than longtermism?' I think it should be more like 'Do/Should I think that near-term animal welfare is close enough to as important a cause as longtermism that I should take this role, given considerations of comparative advantage, uncertainty, and the community taking a portfolio approach?'
(But I think this is just a superficial disagreement, as I expect you'd actually agree with what I've said, and that you might even have put in the sentence I'm disagreeing with partly to placate my own earlier comments :D)
For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (i.e. by understanding the overall probability of x-risk this century).
I'm guessing you mean 'overall probability of extinction risk', rather than overall probability of x-risk as a whole? I say this because other types of existential risk (especially unrecoverable dystopias) could still be high priorities from some person-affecting perspectives.
If that's what you mean, then I think I basically agree with the point you're making. But it's still possible for someone with a person-affecting view to prioritise reducing extinction risk (not just other existential risks), primarily because extinction would harm the people alive at the time of the extinction event. So it still might be worth that person at least spending a little bit of time checking whether the overall probability of extinction seems high enough for them to prioritise it on those grounds. (Personally, I'd guess extinction risk wouldn't be a top priority on purely person-affecting grounds, but would still be decently important. I haven't thought about it much, though.)
It also seems useful to imagine what we want the EA movement to become in (say) 10 years' time, and to consider who this post is talking about when it says 'every EA'.
For example, maybe we want EA to become more like a network than a community: connecting a vast array of people from different areas to important ideas and relevant people, but with only a small portion of these people making 'EA' itself a big part of their lives or identities. This might look like a lot of people mostly doing what they're already doing, but occasionally using EA ideas to guide or reorient themselves. That might be a more natural way for EA to have a substantial influence on huge numbers of people, including very 'busy and mainstream' people like senior policymakers, than for all those people to actually 'become EAs'. This seems like it might be a very positive vision (I'm not sure it's what we should aim for, but it might be), but it's probably incompatible with all of these people knowing about most IBCs.
Or, relatedly, imagine the EA movement grows to contain 100,000 people. Imagine 20,000 are working on things like AI safety research and nuclear security policy, in places like MIRI, the US government, and the Carnegie Foundation; 20,000 are working on animal welfare in a similar range of orgs; 20,000 on global health in a similar range of orgs; etc. It doesn't seem at all obvious to me that the world will be better in 50 years if all of those people spent the time required to gain a high-level understanding of most/all IBCs, rather than spending some of that time learning more about whatever specific problem they were working on. E.g., I imagine a person who's already leaning towards a career that will culminate in advising a future US president on nuclear policy might be better off just learning even more minutiae relevant to that, and trusting that other people will do great work in other cause areas.
To be fair, you're just talking about what should be the case now. I think prioritisation is more important, relative to just getting work done, the smaller EA is. But I think this might help give a sense of why I'm not sure how often learning more about IBCs would be worthwhile.
So I disagree with what I think you mean by your claim that 'There probably won't be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)'
For the record, on reflection, I actually don't think this claim is important for my general argument, and I agree with you that it might not be true.
What really matters is if there are astronomical differences in (expected) value between the best interventions in each cause area.
In other words, in theory it shouldn't matter if the top-tier shorttermist interventions are astronomically better than mid-tier shorttermist interventions; what matters is how the top-tier shorttermist interventions compare to the top-tier longtermist interventions.
I think this claim does matter in that it affects the opportunity costs of thinking about IBCs. (Though I agree that it doesn't by itself make or break the case for thinking about IBCs.)
If the differences in expected impact (after further thought) between the superficially-plausibly-best interventions within the best cause area are similar to the differences in expected impact (after further thought) between cause areas, that makes it much less obvious that all/most EAs should have a high-level understanding of all/most cause areas. (Note that I said 'much less obvious', not 'definitely false'.)
It's still plausible that every EA should first learn about almost all IBCs, and then learn about almost all important within-cause considerations for the cause area they now prioritise. But it also seems plausible that they should cut off the between-cause prioritisation earlier in order to roll with their best guess at that point, and from then on just focus on doing great work within that cause area, and trust that other community members will also be doing great work in other cause areas. (This would be a sort of portfolio, multiplayer-thinking approach, as noted in one of my other comments.)
Thanks for these thoughts and links, Michael, and I'm glad you agree with the broad thrust of the post! You've given me a lot to think about and I'm finding that my view on this is evolving.
I don't think causes differ astronomically in the expected impact a reasonable EA should assign them after (let's say) a thousand hours of learning and thinking about IBCs, using good resources
Thanks for this framing, which is helpful. Reading through the comments and some of your links, I actually think that the specific claim I need to provide more of an argument for is this one:
There are astronomical differences in the expected value of different cause areas and people can uncover this through greater scrutiny of existing arguments and information.
I tentatively still hold this view, although I'm starting to think that it may not hold as broadly as I originally thought and that I got the cause area classification wrong. For example, it might not hold between near-term animal-focused and near-term human-focused areas. In other words, perhaps it just isn't really possible, given the information that is currently available, to come to a somewhat confident conclusion that one of these areas is much better than the other in expected value terms. I have also realised that Maryam, in my hypothetical example, didn't actually conclude that near-term animal areas were better than near-term human areas in terms of expected value. Instead she just concluded that near-term animal areas (which are mainstream EA) were better than a specific near-term human area (mental health, which is EA but not mainstream). So I'm now starting to question whether the way I classified cause areas was helpful.
Having said all that, I would like to return to longtermist areas. I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than shorttermist areas in expected value terms. Greaves and MacAskill's The Case for Strong Longtermism basically argues this. Ord's The Precipice basically argues this, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
Does this then mean it's very important to educate people about ideas that help them prioritise between shorttermist areas and longtermist areas? I think so, if we adopt some epistemic humility and accept that it's probably worth educating people about ideas for which prominent EAs have claimed astronomical expected value without much disagreement from other prominent EAs. The idea is that, because these people hold these views without much prominent disagreement, there is a good chance the ideas are correct, so, on average, making people aware of them should lead them to reorient in ways that allow them to do more good. This actually makes some sense to me, although I realise this argument is grounded in epistemic humility and deferring, which is quite different from my original argument. It's not pure deferring, of course, as people can still come across the ideas and reject them; I'm just saying it's important that they come across and understand the ideas in the first place.
So to sum up, I think my general idea might still work, but I need to rework my cause areas. A better classification might be: non-EA cause areas, shorttermist EA cause areas, and longtermist EA cause areas. These are the cause areas between which I still think there should be astronomical differences in expected value that can plausibly be uncovered by people. In light of this different cause classification, I suspect some of my IBCs will drop: specifically, those that help in deciding between interventions about which there is reasonable disagreement amongst prominent EAs (e.g. Ord and MacAskill disagree on how influential the present is, so that one can probably be dropped).
Given this, it might be that my list of IBCs reduces to:
Those that help people orient from non-EA to EA, so that they can make switches like Maryam did
Those that help people potentially orient from shorttermism to longtermism, so that they can make switches like Arjun did
I feel that I may have rambled a lot here and I don't know if what I have said makes sense. I'd be interested to hear your thoughts on all of this.
I think you make a bunch of interesting points. I continue to agree with the general thrust of what you propose, though I disagree on parts.
I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than shorttermist areas in expected value terms. Greaves and MacAskill's The Case for Strong Longtermism basically argues this. Ord's The Precipice basically argues this, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
I haven't read that key paper from Greaves & MacAskill. I probably should. But some complexities that seem worth noting are that:
The longtermist interventions we think are best usually have a nontrivial chance of being net harmful
It seems plausible to me that, if 'longtermism is correct', then longtermist interventions that are actually net harmful will tend to be more harmful than neartermist interventions that are actually net harmful
This is basically because the 'backfires' would be more connected to key domains, orgs, decisions, etc.
Which has various consequences, such as the chance of creating confusion on key questions, causing longtermism-motivated people to move towards career paths that are less good than those they would've gone towards, burning bridges (e.g., with non-Western governments), or creating reputational risks (seeming naive, annoying, etc.)
See also Cotton-Barratt's statements about 'Safeguarding against naive utilitarianism'
Even longtermist interventions that are typically very positive in expectation could be very negative in expectation if done in terrible ways by very ill-suited people
Neartermist interventions will also have some longtermist implications, and I'd guess that usually there's a nontrivial chance that they have extremely good longtermist implications
E.g., the interventions probably have a nontrivial chance of meaningfully increasing or decreasing economic growth, technological progress, or moral circle expansion, which in turn is plausibly very good or very bad from a longtermist perspective
Related to the above point: In some cases, people might actually do very similar tasks whether they prioritise one cause area or another (specific) cause area
E.g., I think work towards moral circle expansion is plausibly a top priority from a longtermist perspective, and working on factory farming is plausibly a top priority way to advance that goal (though I think both claims are unlikely to be true). And I think that moral circle expansion and factory farming are also plausibly top priorities from a neartermist perspective.
Greaves, MacAskill, and Ord might be partly presenting a line of argument that they give substantial credence to, without constantly caveating it for epistemic humility in the way that they might if actually making a decision
See also the idea of sharing independent impressions
A question I think is useful is: 'Let's say we have a random person of the kind who might be inclined towards EA. Let's say we could assign them to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a longtermist perspective, or to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a neartermist animal welfare perspective. We have no other info about this person, which intervention they'd work on, or how they'd approach it. On your all-things-considered moral and empirical views (not just your independent impressions), is it >1,000,000 times as good to assign this person to the randomly chosen longtermist intervention as to the randomly chosen neartermist animal welfare intervention?'
I'm 95+% confident they'd say 'No' (at least if I made salient to them the above points and ensured they understood what I meant by the question).
(I think expected impact differing by factors of 10 or 100s is a lot more plausible. And I think larger differences in expected impact are more plausible once we fill in more details about a specific situation, like a person's personal fit and what specific intervention they're considering. But learning about IBCs doesn't inform us on those details. The toy calculation just below illustrates the kind of gap I have in mind.)
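Here's the kind of toy calculation I mean (all numbers are placeholders I've made up purely for illustration, not estimates anyone has defended; they just bake in the backfire and flow-through considerations above):

```python
# Toy expected-value comparison for the "random person, random intervention" question.
# All probabilities and values are illustrative placeholders, in a common unit of good.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# Randomly chosen longtermist intervention: usually little effect, sometimes a large
# long-run benefit, occasionally a costly backfire (confusion, burned bridges, etc.).
longtermist = expected_value([(0.70, 0), (0.25, 100), (0.05, -50)])

# Randomly chosen neartermist animal welfare intervention: a modest near-term benefit,
# plus small chances of long-run spillovers (e.g. moral circle expansion) either way.
neartermist = expected_value([(0.97, 1), (0.02, 1 + 20), (0.01, 1 - 10)])

print(longtermist, neartermist, longtermist / neartermist)
# -> roughly 22.5 vs 1.3, a ratio of about 17x: a big gap, but far short of 1,000,000x.
```

You can obviously pick numbers that make the ratio much bigger or smaller; the point is just that once backfire risk and flow-through effects are in the picture, getting the ratio anywhere near 1,000,000x requires pretty extreme assumptions.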
I'd be interested in your thoughts on this take (including whether you think I'm just sort-of talking past your point, or that I really, really should just read Greaves and MacAskill's paper!).
Thanks for this.
Greaves and MacAskill don't cover concerns about potential downsides of longtermist interventions in their paper. I think they implicitly make a few assumptions, such as that someone pursuing the interventions they mention would actually do them thoughtfully and carefully. I do agree that one can probably go into, say, DeepMind without really knowing their stuff and end up doing astronomical harm.
Overall I think your general point is fair. When it comes to allocating a specific person to a cause area, the difference in expected value across cause areas probably isn't as large as I originally thought, for example due to considerations such as personal fit. Generally I think your comments have updated me away from my original claim that everyone should know all IBCs, but I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Slight aside about the Greaves and MacAskill paper: I personally found it a very useful paper that helped me understand the longtermism claim in a slightly more formal way than, say, an 80K blog post. It's quite an accessible paper. I also found the (somewhat limited) discussion about the potential robustness of longtermism to different views very interesting. I'm sure Greaves and MacAskill will be strengthening that argument in the future. So overall I would recommend giving it a read!
I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Yeah, I'm definitely on the same page on those points!
So overall I would recommend giving it a read!
Ok, this has made it more likely that I'll make time for reading the paper in the coming weeks. Thanks :)