I have major reservations about your conclusion (in part because I embrace anti-fanaticism, in part because I see big challenges and some downsides to outsourcing moral reflection and decision-making to another person). However, I really appreciate how well you outlined the problem and I also appreciate that you don’t shy away from proposing a possible solution, even while retaining a good measure of epistemic humility. Thanks for posting!
Thanks a lot for writing this up and sharing your evaluations and thinking!
I think there is lots of value in on-the-ground investigations and am glad for the data you collected to shine more light on the Cameroonian experience. That said, reading the post I wasn’t quite sure what to make of some of your claims and take-aways, and I’m a little concerned that your conclusions may be misrepresenting part of the situation. Could you share a bit more about your methodology for evaluating the cost-effectiveness of different organisations in Cameroon? What questions did these orgs answer when they entered your competition? What metrics and data sources did you rely on when evaluating their claims and efforts through your own research?
Most centrally, I would be interested to know: 1) Did you find no evidence of effects or did you find evidence for no effect[1]?; and 2) Which time horizon did you look at when measuring effects, and are you concerned that a limited time horizon might miss essential outcomes?
If you find the time, I’d be super grateful for some added information and your thoughts on the above!
[1] The two are not necessarily the same and there’s a danger of misrepresentation and misleading policy advice when equating them uncritically. This has been discussed in the field of evidence-based health and medicine, but I think it also applies to observational studies on development interventions like the ones you analyse: Ranganathan, Pramesh, & Buyse (2015): Common pitfalls in statistical analysis: “No evidence of effect” versus “evidence of no effect”; Vounzoulaki (2020): ‘No evidence of effect’ versus ‘evidence of no effect’: how do they differ?; Tarnow-Mordi & Healy (1999): Distinguishing between “no evidence of effect” and “evidence of no effect” in randomised controlled trials and other comparisons
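To make the distinction concrete, here is a minimal sketch (with entirely made-up numbers, not data from your study): a small, noisy study whose confidence interval straddles zero only gives “no evidence of effect”, whereas a large, precise study whose interval is tightly centred on zero comes closer to “evidence of no effect”.

```python
# Illustrative only: hypothetical outcome measurements from two studies.
import statistics

def mean_ci(samples, z=1.96):
    # Approximate 95% confidence interval for the mean.
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return (m - z * se, m + z * se)

small_noisy_study = [4.0, -6.0, 9.0, -3.0, 5.0, -7.0, 8.0, -2.0]        # n = 8
large_precise_study = [0.3, -0.3, 0.1, -0.4, 0.2, 0.0, -0.1, 0.2] * 50  # n = 400

print(mean_ci(small_noisy_study))    # wide interval straddling zero: "no evidence of effect"
print(mean_ci(large_precise_study))  # narrow interval around zero: closer to "evidence of no effect"
```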
Hi! Thanks for posting this, I think international relations/politics and different approaches to it receive too little attention in EA discussions and thinking and am happy to see contributions on the topic here on the forum! :))
However, your outline seems a bit overly reductive to me: within international relations theory and discussions, the realist/idealist dichotomy has probably never existed in a pure form, and much less so since the end of the Second World War. From the second half of the twentieth century until roughly today, the following categories tend to be more reflective of how scholars and thinkers in the space classify themselves and their colleagues (see disciplinary overviews here, here, and here):
Liberalism (or Liberal Institutionalism)
Neo-realism (and many variants thereof, such as defensive and offensive realism or neo-classical realism)
Constructivism (again, with various versions)
English School of IR
Critical theories (Marxist IR, Feminist IR, Post-colonial IR, Postmodern IR, etc)
Also, I think it’s useful to point out that the contrast between “values” and “interests” can be quite misleading, since “interests” cannot be defined without some notion of “the good” and thus pursuing “national interests” also always requires some moral choice from the country in question (or from the country’s leaders). In addition, people who advocate for a foreign policy that promotes human rights protection and/or other moral values abroad will often have the empirical conviction that this “idealist” promotion of values is in the national interest of their home country (because they think a world without extreme moral infringements is more conducive to overall peace, lower rates of transnational crime and terrorism, etc.). All of this makes me feel rather frustrated (and sometimes also annoyed) when I hear people use labels such as “realism” or “idealism”, suggesting that the former is more empirically grounded or value-free (of course, this is not your fault as the author of this piece, since you didn’t make up these terms and are simply describing how they are used by others in this space).
Thanks for organising this and sharing the programme here! Is there any reason you did not put the price in the description posted here? I think that this is—at least for someone like myself—a significant decision criterion for a potential applicant, and it is a bit strange/inconvenient only to learn about it after filling in the entire application form.
(For other readers: the normal price is $550 for the entire programme, and there is an option to apply for financial support within the application form)
I don’t draw precisely the same conclusions as you (I’m somewhat less reluctant to entertain strategies that aim to introduce untested systemic changes on a relatively large scale), but I really appreciate the clarity and humility/transparency of your comment, and I think you outline some considerations that are super relevant to the topic of discussion. Thanks for writing this up :)!
First two points sound reasonable (and helpfully clarifying) to me!
I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’
I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I’m not sure I’m able to exactly point at how these notions play into our intuitions and views on the topic—maybe it’s something about me ignoring the [(super-high payoff of larger future)*(super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion more readily?
That said, I fully agree that “something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’ ”. To figure out which option is best, we’d need to somehow compare their respective scores on importance, neglectedness, and tractability… I’m not sure actually figuring that out is possible in practice, but I think it’s fair to challenge the claim that “action X is best because it is very important and neglected and moderately tractable” regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).
Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.
I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I’m resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should spend their time thinking about how to make best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for the eventuality.
(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don’t know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit—something for myself to think about more, since this may or may not be a concerning bias.)
I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought, this is useful to hear. [...] As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic, I think otherwise it would be a lot easier to dismiss prematurely ideas I disagree with or find uncomfortable.
+100 :)
[1] (This is not to say that it might not make sense for one or a few individuals to think about the company’s mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it’s not sustainable to preserve the company in a day-to-day survival strategy forever; but I think that’s different from asking these individuals to paint a best-case future to be prepared to make a good outcome even better.)
Thanks for writing this up, Oscar! I largely disagree with the (admittedly tentative) conclusions, and am not sure how apt I find the NIMBY analogy. But even so, I found the ideas in the post helpfully thought-provoking, especially given that I would probably fall into the cosmic NIMBY category as you describe it.
First, on the implications you list. I think I would be quite concerned if some of your implications were adopted by many longtermists (who would otherwise try to do good differently):
Support pro-expansion space exploration policies and laws
Even accepting the moral case for cosmic YIMBYism (that aiming for a large future is morally warranted), it seems far from clear to me that support for pro-expansion space exploration policies would actually improve expected wellbeing for the current and future world. Such policies & laws could share many of the downsides colonialism and expansionism have had previously:
Exploitation of humans & the environment for the sake of funding and otherwise enabling these explorations;
Planning problems: Colonial-esque megaprojects like massive space exploration likely constitute a bigger task than human planners can reasonably take on, leading to large chances of catastrophic errors in planning & execution (as evidenced by past experiences with colonialism and similarly grand but elite-driven endeavours)
Power dynamics: Colonial-esque megaprojects like massive space exploration seem prone to reinforcing the prestige, status, and power for those people who are capable of and willing to support these grand endeavours, who—when looking at historical colonial-esque megaprojects—do not have a strong track record of being the type of people well-suited to moral leadership and welfare-enhancing actions (you do acknowledge this when you talk about ruthless expansionists and Molochian futures, but I think it warrants more concern and worry than you grant);
(Exploitation of alien species (if there happened to be any, which maybe is unlikely? I have zero knowledge about debates on this)).
This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.
It seems misguided and, to me, dangerous to go from “extinction risk is not the most neglected thing” to “we can assume there will be no extinction and should take actions conditional on humans not going extinct”. My views on this are to some extent dependent on empirical beliefs which you might disagree with (curious to hear your response there!): I think humanity’s chances to avert global catastrophe in the next few decades are far from comfortably high, and I think the path from global catastrophe to existential peril is largely unpredictable, but it doesn’t seem completely inconceivable that such a path will be taken. I think there are far too few earnest, well-considered, and persistent efforts to reduce global catastrophic risks at present. Given all that, I’d be quite distraught to hear that a substantial fraction (or even a few members) of those people concerned about the future would decide to switch from reducing x-risk (or global catastrophic risk) to speculatively working on “increasing the size of the possible future”, on the assumption that there will be no extinction-level event to preempt that future in the first place.
---
On the analogy itself: I think it doesn’t resonate super strongly (though it does resonate a bit) with me because my definition of and frustration with local NIMBYs is different from what you describe in the post.
In my reading, NIMBYism is objectionable primarily because it is a short-sighted and unconstructive attitude that obstructs efforts to combat problems that affect all of us; the thing that bugs me most about NIMBYs is not their lack of selflessness but their failure to understand that everyone, including themselves, would benefit from the actions they are trying to block. For example, NIMBYs objecting to high-rise apartment buildings seem to me to be mistaken in their belief that such buildings would decrease their welfare: the lack of these apartment buildings will make it harder for many people to find housing, which exacerbates problems of homelessness and local poverty, which decreases living standards for almost everyone living in that area (incl. those who have the comfort of a spacious family house, unless they are amongst the minority who enjoy or don’t mind living in the midst of preventable poverty and, possibly, heightened crime). It is a stubborn blindness to arguments of that kind and an unwillingness to consider common, longer-term needs over short-term, narrowly construed self-interests that form the core characteristic of local NIMBYs in my mind.
The situation seems to be different for the cosmic NIMBYs you describe. I might well be working with an unrepresentative sample, but most of the people I know/have read who consciously reject cosmic YIMBYism do so not primarily on grounds of narrow self-interest but for moral reasons (population ethics, non-consequentialist ethics, etc) or empirical reasons (incredibly low tractability of today’s efforts to influence the specifics about far-future worlds; fixing present/near-future concerns as the best means to increase wellbeing overall, including in the far future). I would be surprised if local NIMBYs were motivated by similar concerns, and I might actually shift my assessment of local NIMBYism if it turned out that they are.
New Update (as of 2024-03-27): This comment, with its very clear example to get to the bottom of our disagreement, has been extremely helpful in pushing me to reconsider some of the claims I make in the post. I have somewhat updated my views over the last few days (see the section on “the empirical problem” in the Appendix I added today), and this comment has been influential in helping me do that. Gave it a Delta for that reason; thanks Jeff!
While I now more explicitly acknowledge and agree that, when measured in terms of counterfactual impact, some actions can have hundreds of times more impact than others, I retain a sense of unease when adopting this framing:
When evaluating impact differently (e.g. through Shapley-value-like attribution of “shares of impact”, or through a collective rationality mindset (see comments here and here for what I mean by collective rationality mindset)), it seems less clear that the larger donor is 100x more impactful than the smaller donor. One way of reasoning about this would be something like: probably (necessarily?) the person donating $100,000 had more preceding actions leading up to the situation where she is able and willing to donate that much money, and there will probably (necessarily?) be more subsequent actions needed to make the money count, to ensure that it has positive consequences. There will then be many more actors and actions between which the impact of the $100,000 donation will have to be apportioned; it is not clear whether the larger donor will appear vastly more impactful when considered from this different perspective/measurement strategy...
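To make that apportioning idea a bit more concrete, here is a toy sketch (the value function and all numbers are made up for illustration): suppose the donations only translate into impact if an implementer is present, and credit is then divided via Shapley values.

```python
# Toy Shapley-value attribution for a hypothetical "impact game".
from itertools import permutations

players = ["large_donor", "small_donor", "implementer"]

def v(coalition):
    # Hypothetical value function: money only becomes impact
    # if an implementer is part of the coalition.
    c = set(coalition)
    if "implementer" not in c:
        return 0
    value = 0
    if "large_donor" in c:
        value += 100_000
    if "small_donor" in c:
        value += 1_000
    return value

def shapley(player):
    # Average marginal contribution of `player` over all orderings of players.
    orders = list(permutations(players))
    total = sum(
        v(order[: order.index(player)] + (player,)) - v(order[: order.index(player)])
        for order in orders
    )
    return total / len(orders)

for p in players:
    print(p, shapley(p))
# large_donor 50000.0, small_donor 500.0, implementer 50500.0
```

In this particular toy the ratio between the two donors happens to stay at 100x, but half of the total impact is now attributed to the implementer rather than to either donor, which is the sense in which the impact “will have to be apportioned” across many more actors than the simple counterfactual framing suggests.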
You can shake your head and claim—rightly, I believe—that this is irrelevant for deciding whether donating $100,000 or donating $1,000 is better. Yes, for my decision as an individual, calculating the possible impact of my actions by assessing the likely counterfactual consequences resulting directly from the action will sometimes be the most sensible thing to do, and I’m glad I’ve come to realise that explicitly in response to your comment.
But I believe recognising and taking seriously the fact that, considered differently, my choice to donate $100,000 does not mean that I individually am responsible for 100x more impact than the donor of $1,000 can be relevant for decisions in two ways:
1) It prevents me from discounting and devaluing all the other actors that contribute vital inputs (even if they are “easily replaceable” as individuals)
2) It encourages me to take actions that may facilitate, enable, or support large counterfactual impact by other people. This perspective also encourages me to consider actions that may have a large counterfactual impact themselves, but in more indirect and harder-to-observe ways (even if I appear easily replaceable in theory, it’s unclear whether I will be replaced in practice, so the counterfactual impact seems extremely hard to determine; what is very clear is that by performing a relevant supportive action, I will be contributing something vital to the eventual impact).
If you find the time to come back to this so many days after the initial post, I’d be curious to hear what you think about these (still somewhat confused?) considerations :)
Thanks a lot for that comment, Dennis. You might not believe it (judging by your comment towards the end), but I did read the full thing and am glad you wrote it all up!
I come away with the following conclusions:
It is true that we often credit individuals with impacts that were in fact the results of contributions from many people, often over long times.
However, there are still cases where individuals can have outsize impact compared to the counterfactual case where they do not exist.
It is not easy to say in advance which choices or which individuals will have these outsize influences …
… but there are some choices which seem to greatly increase the chance of being impactful.
Put in this way, I have very little to object to. Thanks for providing that summary of your takeaways, I think that will be quite helpful to me as I continue to puzzle out my updated beliefs in response to all the comments the essay has gotten so far (see statements of confusion here and here).
For example, anyone who thinks that being a great teacher cannot be a super-impactful role is just wrong. But if you do a very simplistic analysis, you could conclude that. It’s only when you follow through all the complex chain of influences that the teacher has on the pupils, and that the pupils have on others, and so on, that you see the potential impact.
That’s interesting. I think I hadn’t really considered the possibility of putting really good teachers (and similar people-serving professions) into the super-high-impact category, and then my reaction was something like “If obviously essential and super important roles like teachers and nurses are not amongst the roles a given theory considers relevant and worth pursuing, then that’s suspicious and gives me reason to doubt the theory.” I now think that maybe I was premature in assuming that these roles would necessarily lie outside the super-high-impact category?
The real question, even if not always posed very precisely, is: for individuals who, for whatever reason, find themselves in a particular situation, are there choices or actions that might make them 100x more impactful? [...] And yet, it feels like there are choices we make which can greatly increase or decrease the odds that we can make a positive and even an outsize contribution. And I’m not convinced by (what I understand to be) your position that just doing good without thinking too much about potential impact is the best strategy.
I think the sentiment behind those words is one that I wrongfully neglected in my post. For practical purposes, I think I agree that it can be useful and warranted to take seriously the possibility that some actions will have much higher counterfactual impact than others. I continue to believe that there are downsides or perils to the counterfactual perspective, and that it misses some relevant features of the world; but I can now also see more clearly that there are significant upsides to that same perspective and that it can often be a powerful tool for making the world better (if used in a nuanced way). Again, I haven’t settled on a neat stance to bring my competing thoughts together here, but I feel like some of your comments above will get me closer to that goal of conceptual clarification—thanks for that!
I feel like there are some models of how markets work that quite successfully predict macro behaviour of systems without knowing all the local individual factors?
You’re right that you’re more optimistic than me for this one. I don’t think we have good models of that kind in economics (or: I haven’t come across such models; I have tried to look for them a little bit but am far from knowing all modeling attempts that have ever been made, so I might have missed the good/empirically reliable ones).
I do agree that “we can make, in some cases, simple models that accurately capture some important features of the world”—but my sense is that in the social sciences (/ whenever the object of interest is societal or human), the features we are able to capture accurately are only a (small) selection of the ones that are relevant for reasonably assessing something like “my expected impact from taking action X.” And my sense is also that many (certainly not all!) people who like to use models to improve their thinking on the world over-rely on the information they gain from the model and forget that these other, model-external features also exist and are relevant for real-life decision-making.
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.]
This seems on-point and super sensible as a rough heuristic (not a strict proof) when looking at impact through a counterfactual analysis that focuses mostly on direct effects. But I don’t know if and how it translates to different perspectives of assessing impact. If there never were high impact opportunities in the first place, because impact is dispersed across the many actions needed to bring about desired consequences, then it doesn’t matter whether a lot or only a few people try to grab these opportunities from the table—because there would be nothing to grab in the first place.
Maybe the example helps to explain my thinking here (?): If we believe that shrimp/insect welfare can be improved significantly by targeted interventions that a small set of people push for and implement, then I think your case for it being a high impact opportunity is much more reasonable than if we believe that actual improvements in this area will require a large-scale effort by millions of people (researchers, advocates, implementers, etc). I think most desirable change in the world is closer to the latter category.*
*Kind of undermining myself: I do recognise that this depends on what we “take for granted” and I tentatively accept that there are many concrete decision situations where it makes sense to take more for granted than I am inclined to do (the infrastructure we use for basically everything, many of the implementing and supporting actions needed for an intervention to actually have positive effects, etc), in which case it might be possible to consider more possible positive changes in the world to fall closer to the former category (the former category ~ changes in the world that can be brought about by a small group of individuals).
So I agree that there is a danger of thinking too much of oneself as some sort of ubermensch do-gooder, but the question of to what extent impact varies by person or action is separate.
I think that makes sense and is definitely a take that I feel respect (and gratitude/hope) for.
I think it is lamentable but probably true that some people’s lives will have far greater instrumental effects on the world than others.
Even after a week of reflecting on the empirical question (do some people have magnitudes higher impact than others?) and the conceptual question (which impact evaluation framework, counterfactual, Shapley value attribution, or something else entirely, should we use to assess levels of impact?), I remain uncertain and confused about my own beliefs here (see more in my comment on the polio vaccine example above). So I’m not sure what my current response to your claim “[it’s] probably true that some people’s lives will have far greater instrumental effects on the world than others” is or should be.
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.]
but I think the counterfactual is illustrative
Completely agree! The concept of counterfactual analysis seems super relevant to explaining how and why some of my takes in the original post differ from “the mainstream EA narrative on impact”. I’m still trying to puzzle out exactly how my claims in “The empirical problem” link to the counterfactual analysis point—do I think that my claims are irrelevant to a counterfactual impact analysis? do I, in other words, accept and agree that impact between actions/people differs by several magnitudes when calculated via counterfactual analysis methods? how can I best name, describe, illustrate, and maybe defend the alternative perspective on impact evaluations that seems to inform my thinking in the essay and in general? what role does and should counterfactual analysis play in my thinking alongside that alternative perspective?
To discuss with regards to the polio example: I see the rationale for claiming that the vaccine inventors are somehow more pivotal because they are less easily replaceable than all those people performing supportive and enabling actions. But just because an action is replaceable doesn’t mean it’s unimportant. It is a fact that the vaccine discovery could not have happened and would not have had any positive consequences if the supporting & enabling actions had not been performed by somebody. I can’t help but feel that this is relevant and important when I think about the impact I as an individual can have; on some level, it seems true to say that as an individual, living in a world where everything is embedded in society, I cannot have any meaningful impact on my own; all effects I can bring about will be brought about by myself and many other people; if only I acted, no meaningful effects could possibly occur. Should all of this really just be ignored when thinking about impact evaluations and my personal decisions (as seems to occur in counterfactual analyses)? I don’t know.
I think it is uncontroversial that at least on the negative side of the scale some actions are vastly worse than others, e.g. a mass murder or a military coup of a democratic leader, compared to more ‘everyday’ bads like being a grumpy boss.
Agreed! I share the belief that there are huge differences in how bad an action can be and that there’s some relevance in distinguishing between very bad and just slightly bad ones. I didn’t think this was important to mention in my post, but if it came across as suggesting that we basically should only think in terms of three buckets, I clearly communicated poorly—I agree that this would be too crude.
It feels pretty hard to know which actions are neutral, for many of the reasons you say that the world is complex and there are lots of flow-through effects and interactions.
Strongly agreed! I strongly share the worry that identifying neutral actions would be extremely hard in practice—took me a while to settle on “bullshit jobs” as a representative example in the original post, and I’m still unsure whether it’s a solid case of “neutral actions”. But I think for me, this uncertainty reinforces the case for more research/thinking to identify actions with significantly positive outcomes vs actions that are basically neutral. I find myself believing that dividing actions into “significantly positive” vs “everything else” is epistemologically more tractable than dividing them into “the very best” vs “everything else”. (I think I’d agree that there is a complementary quest—identifying very bad actions and roughly scoring them on how bad they would be—which is worthwhile pursuing alongside either of the two options mentioned in the last sentence; maybe I should’ve mentioned this in the post?)
Identifying which positive actions are significantly so versus insignificantly so feels like it just loses a lot of information compared to a finer-grained scale.
I think I disagree mostly for epistemological reasons—I don’t think we have much access to that information at a finer-grained scale; based on that, giving up on finding such information wouldn’t be a great loss because there isn’t much to lose in the first place.
I think I might also disagree from a conceptual or strategic standpoint: my thinking on this—especially when it comes to catastrophic risks, maybe a bit less for global health & development / poverty—tends to be more about “what bundle of actions and organisations and people do we need for the world to improve towards a state that is more sustainable and exhibits higher wellbeing (/less suffering)?” For that question, knowing and contributing to significantly good actions seems to be of primary importance, since I believe that we’ll need many of these good actions—not just the very best ones—for eventual success anyways. Since publishing this essay and receiving a few comments defending (or taking for granted) the counterfactual perspective on impact analysis, I’ve come to reconsider whether I should base my thinking on that perspective more often than I currently do. I remain uncertain and undecided on that point for now, but feel relatively confident that I won’t end up concluding that I should pivot to only or primarily using the counterfactual perspective (vs. the “collective rationality / how do I contribute to success at all” perspective)… Curious to hear if all that makes some sense to you (though you might continue to disagree)?
Shapley values are a great tool for divvying up attribution in a way that feels intuitively just, but I think for prioritization they are usually an unnecessary complication. In most cases you can only guess what they might be because you can’t mentally simulate the counterfactual worlds reliably, and your set of collaborators contains billions of potentially relevant actors. [emphasis added]
From what I’ve learned about Shapley values so far, this seems to mirror my takeaway. I’m still giving myself another 2-3 days until I write up a more fleshed-out response to the commenters who recommended looking into Shapley values, but I might well end up just copying some version of the above; so thanks for formulating and putting it here already!
(I think if EAs were more individualist, “the core” from cooperative game theory would be more popular than the Shapley value.)
I do not understand this point but would like to (since the stance I developed in the original post went more in the direction of “EAs are too individualist”). If you find the time, could you explain or point to resources to explain what you mean by “the core from cooperative game theory” and how that links to (non-)individualist perspectives, and to impact modeling?
Oh, and we get so caught up in the object-level here that we tend to fail to give praise for great posts: Great work writing this up! When I saw it, it reminded me of Brian Tomasik’s important article on the same topic, and sure enough, you linked it right before the intro! I’m always delighted when someone does their research so well that whatever random spontaneous associations I (as a random reader) have are already cited in the article!
Very glad to read that, thank you for deciding to add that piece to your comment :)!
Thanks for the comment! Just to make sure I understand correctly: the tails would partially cancel out in expected impact estimates because many actions with potentially high positive impact could also have potentially high negative impact if any of our assumptions are wrong? Or were you gesturing at something else? (Please feel free to simply point me to the post you shared if the answer is contained therein; I haven’t had the chance to read it carefully yet)
[Sarah] But when it comes to “the world’s most pressing problems,” I don’t have the sense that we have those 95% of people to rely on to deal with the collective action problems.
[Jason] I had global health in mind—the vast majority of the funding and people on board are not EAs or conducting EA-type analyses (although many are at least considering cost-effectiveness).
Quick point on this: I didn’t mean to suggest that EAs constitute vastly more than 5% of people working on pressing problems. Completely agree that “the vast majority of the funding and people on board [in global health] are not EAs or conducting EA-type analyses”, but I still think that relatively few of those people (EA or not) approach problems with a collective rationality mindset, which would mean asking themselves: “how do I need to act if I want to be part of the collectively most rational solution?” rather than: “how do I need to act if I want to maximise the (counterfactual) impact from my next action?” or, as maybe done by many non-EA people in global health: “how should I act given my intuitive motivations and the (funding) opportunities available to myself?”. I think—based on anecdotal evidence and observation—that the first of these questions is not asked enough, inside EA and outside of it.
I can see some circumstances in which EA’s actions could be important for collective-action purposes. [...] It’s at least possible to estimate the expected impact of switching 100 votes in a swing state in a given election given what other actors are expected to do. It’s not easy, and the error bars are considerable, but it can be done.
I think it’s correct that some collective action problems can be addressed by individuals or small groups deciding to take action based on their counterfactual impact (and I thank you for the paper and proverb references, found it helpful to read these related ideas expressed in different terms!). In practice, I think (and you seem to acknowledge) that estimating that counterfactual impact for interventions aimed at disrupting collective action problems (by convincing lots of other people to behave collectively rational) is extremely hard and I thus doubt whether counterfactual impact calculations are the best (most practicable) tool for deciding whether and when to take such actions (I think the rather unintuitive analysis by 80,000 Hours on voting demonstrates the impracticability of these considerations for everyday decisions relatively well). But I can see how this might sometimes be a tenable and useful way to go. I find your reflections on how to do this interesting (checking for a plausible theory of change; checking for the closeness of reaching a required tipping point); my quick response (because this comment is already awfully long) would be that they seem useful but limited heuristics (what exactly makes a theory of change in deeply uncertain and empirically-poor domains “plausible”?; and for the tipping point, you mentioned my counterpoint already: if everybody always waited for a fairly reachable tipping point, many large social changes would never have happened).
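For readers who haven’t seen the 80,000 Hours piece, the calculation it gestures at looks roughly like the following sketch (all numbers are hypothetical placeholders, not estimates I endorse).

```python
# Expected impact of one vote ≈ P(my vote is decisive) × value at stake if it is.
p_decisive = 1 / 10_000_000          # assumed probability that a single vote flips the outcome
value_if_decisive = 5_000_000_000    # assumed welfare difference (in $-equivalents) between outcomes

expected_impact = p_decisive * value_if_decisive
print(expected_impact)  # 500.0 in expectation, even though the realised counterfactual
                        # impact of almost every individual vote is zero
```

It is exactly this gap between a non-trivial expected value and an almost-always-zero realised counterfactual impact that makes such calculations feel so impractical to act on in everyday decisions.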
But the approach that I gesture at when I talk about “we should often act guided by principles of collective rationality” is different from guesstimating the counterfactual impact of an action that tries to break the collective action dilemma. I think what the collective rationality approach (in my mind) comes down to is an acceptance that sometimes we should take an action that has a low counterfactual impact, because our community (local, national, all of humanity) depends on many people taking such actions to add up to a huge impact. The very point of the collective action problem is that, counterfactually considered, my impact from taking that action will be low, because one individual taking or not taking the action is usually either completely or largely pointless. An example of that would be “making an effort to engage in dinner-table conversations on societally-important issues.” If (and I acknowledge this may be a controversial if) we believe that a vibrant and functioning democracy would be one where most citizens have such conversations every once in a while, then it would be collectively rational for me to engage in these conversations. But this will only really become an impactful and useful action (for our country’s democracy, ignoring benefits to myself) if many other citizens in my country do the same thing. And if many other citizens in my country do do the same thing, then paradoxically it doesn’t really matter that much anymore whether I do it; because it’s the mass doing it that counts, and any one individual that is added or subtracted from that mass has little effect. I think such dynamics can be captured by counterfactual impact reasoning only relatively unintuitively and in ways that are often empirically intractable in practice.
I don’t see anything about this hypothetical intervention that renders it incapable of empirical analysis. If one can determine how effective the organization is at dismantling stereotypes and self-images per dollar spent, then a donor can adjudicate the tradeoff between donating to it and donating to AMF based on how super harmful they think internalized racism is vs. how bad they think toddlers dying of malaria is.
Weakly agree that there can be some empirical analysis to estimate part of the effectiveness of the hypothetical stereotypes-intervention (though I do want to note that such estimates run a large risk of missing important, longer-running effects that only surface after long-time engagement and/or are not super easy to observe at all). I think the main point I was trying to make here is that the empirical question of “how bad internalized racism is”, i.e. how much it decreases development and flourishing, is one that seems hard if not impossible to address via quantitative empirical analysis. I could imagine your response being that we can run some correlational studies on communities or individuals with less vs. more internalized racism and then go from there; I don’t think this will give us meaningful causal knowledge given the many hidden variables that will differ between the groups we analyze and given the long-running effects we seek to find.
The conclusion/mindset and approach you describe resonate a fair bit with me, thanks for spelling them out and leaving them here as a comment!
Thanks a lot for taking the time to read the essay and write up those separate thoughts in response!! I’ll get to the other comments over the next week or so, but for now: thank you for adding that last comment. Though I really (!) am grateful for all the critical and thought-provoking feedback from yourself and others in this comment thread, I can’t deny that reading the appreciative and encouraging lines in that last response is also welcome (and will probably be one of the factors helping me to keep exercising a critical mind even if it feels exhausting/confusing at times) :D
Thanks for explaining! In this case, I think I come away far less convinced by your conclusions (and the confidence of your language) than you seem to. I (truly!) find what you did admirable given the resources you seem to have had at your disposal and the difficult data situation you faced. And I think many of the observations you describe (e.g., about how orgs responded to your call; about donor incentives) are insightful and well worth discussing. But I also think that the output would be significantly more valuable had you added more nuance and caution to your findings, as well as a more detailed description of the underlying data & analysis methods.
But, as said before, I still appreciate the work you did and also the honesty in your answer here!