Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Thanks for writing this, Gavin.
Reading (well, listening to) Mountains Beyond Mountains, I was deeply inspired by Farmer. I think a lot of people in the EA community would benefit from giving the book a chance.
Sure, I sometimes found his rejection of an explicit cost-effectiveness-based approach very frustrating, and it seemed (and still seems) that his strategy was at times poorly aligned with the goal of saving as many lives as possible. But it also taught me the importance of sometimes putting your foot down and insisting that none of the options on the table are acceptable; that we have to find an alternative if none of the present solutions meet a certain standard.
In economics and analytic philosophy (and by extension, in EA) we’re often given two choices and told to choose one, regardless of how unpalatable both may be. Maximisation subject to given constraints, it goes. Do an expensive airlift from Haiti to Boston to save the child or invest in cost-effective preventive interventions, it goes. And in the short term, the best way to save the most lives may indeed be to accept that that is the choice we have, to buckle down and calculate. But I’d argue that, sometimes, outright rejecting certain unpalatable dilemmas, and instead insisting on finding another, more ambitious way, can be part of an effective activist strategy for improving the world, especially in the longer term.
My impression is that this kind of activist strategy has been behind lots of vital social progress that the cost-effectiveness-oriented, incrementalist approach wouldn’t be suited for.
Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
Hi Elika,
Thanks for writing this, great stuff!
I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.
I have a personal anecdote that really speaks to your “do your homework” point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing “dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, and dissemination of research.” The idea that effective dual-use oversight requires intervention at multiple points felt solid, and some feedback we’d gotten on presentations of our work gave me the impression that this was a fairly novel framing.
It totally wasn’t! NSABB called for this kind of oversight throughout the research cycle (at least) as early as 2007,[1] and, in hindsight, it was pretty naïve of me to think that this simple idea was new. In general, it’s been a pretty humbling experience to read more of the literature and realise just how many of the arguments that I thought were novel based on their appearance in recent op-eds and tweets can be found in discussions from 10, 20, or even 50 years ago.
Alright, one element of your post that I would’ve framed differently: You put a lot of emphasis on the instrumental benefits of nuanced discussion in the form of building trust and credibility, but I hope readers of your post also realise the intrinsic value of being more nuanced.
E.g., from the summary:
“[what you say] does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact”
And the very last sentence:
“Always make sure ‘you’re invited back to the table’.”
This is a great point, and I really do think it’s possible to burn bridges and lose respect by coming across as ignorant or inflammatory. But getting the nuanced details wrong is also a recipe for getting solutions wrong! As you say, proper risk-benefit analysis for concrete dual-use research is almost always difficult, given that the research in question very often has some plausible upside for pandemic preparedness or health more generally.
And even if you know what specific research to draw red lines around, implementation is riddled with challenges: How do you design rules that won’t be obsolete with scientific advances? How do you make criteria that won’t restrict research that you didn’t intend to restrict? How do you avoid inadvertent attention hazards from highlighting the exact kinds of research that seem the most risky? Let’s say you’ve defined the perfect rules. Who should be empowered to make the tough judgment calls on what to prohibit? If you’re limiting access to certain knowledge, who gets to have that access? And so on, and so on.
I do think there’s value in strongly advocating for more robust dual-use oversight or lab biosafety, and (barring infohazard concerns), I think op-eds aimed at both policymakers and the general public can be helpful. It’s just that I think such advocacy should be more in the tone of “Biosecurity is important, and more work on it is urgently needed” and less “Biosecurity Is Simple, I Would Just Ban All GOF.”
Bottom line, I especially like the parts of your post that encourage people to be more nuanced, not just sound more nuanced.
[1] From Casadevall 2015: “In addition to defining the type of research that should elicit heightened concern, the NSABB recommended that research be examined for DURC potential throughout its life span, from experimental conception to final dissemination of the results.”
Thanks for laying this out so clearly. One frustrating aspect of having a community composed of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead’s (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1]
I do understand the value of hypothetical inquiry as part of analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the efforts to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.
In my opinion, it’s insufficient to note that an author intends for some claim to be “hypothetical” or “abstract” or “pro tanto” or “all other things equal” if the claim is likely to be received or applied in the way it was literally written. E.g., proposals for ubiquitous surveillance cannot be dismissed as merely hypothetical if there’s an appreciable chance that some readers come away even slightly more supportive of, or open to, the idea of ubiquitous surveillance in practice.
To be clear, I’m not saying that the community shouldn’t conduct or rely on the kind of hypothetical-driven philosophy exemplified in Bostrom’s VWH or in Beckstead’s dissertation. But I do think it’s important, then, to either i) make it clear that a piece of writing is intended as analytic philosophy that should generally be applied to the real world only with extreme care, or ii) do a much better job of incorporating historical context and taking potential misinterpretations and misapplications extremely seriously.
For VWH, Option i) could look like moving the article to an analytic philosophy journal and replacing the Policy Implications box with a note clarifying that this is a work of philosophy, not policy analysis. Option ii) could involve an even more extensive discussion of downside risks – I genuinely don’t think that the 6 sentences quoted above on how “unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences” constitute anywhere near the appropriate effort to manage the downside risks associated with a policy article on ubiquitous global surveillance. Specifically, that effort would require engaging with the real-world history of totalitarian surveillance and its consequences; outlining in more detail how the surveillance system could go wrong or even itself pose an existential risk; and warning in much more unequivocal terms about the danger of misunderstanding or misapplying this proposal.
For Beckstead’s dissertation quote, Option i) is, to be fair, already somewhat in play, given that the quote is from a dissertation in analytic philosophy and there’s a good amount of ceteris-paribus-hypothetical-pro-tanto-etc. caveating, though the specific passage could maybe do with a bit more. Option ii) could involve embedding the quote in the context of both historical and modern philanthropy, particularly through a postcolonial lens; also discussing hypothetical counterexamples of when the opposite conclusion might hold; and cautioning in the strongest possible terms against specific misunderstandings or misapplications of the principle. Nowadays – though arguably less so in 2013, when it was written – it could also involve a discussion of how the principle relates to the reallocation of funds that could plausibly have been used for global health towards improving the comfort of affluent individuals in the Global North, such as myself. I understand that this is a philosophy dissertation, so the above might not be easy to include – but then we face the difficult challenge of relying heavily on ahistorical, non-empirical philosophy as guidance for a very real, very practical movement.
The bottom line is that certain seminal texts in effective altruism should either be treated as works of analytic philosophy, with the bold and even troubling speculation that genre affords, or as policy guidance, with the extreme caution it demands in the presence of downside risks; they can’t be both at once.
___
[1] For context, here’s the quote in question, from Beckstead’s dissertation, On the overwhelming importance of shaping the far future:
“Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.”
Just logging in to say that, as someone who co-ran a large university EA group for three years (incidentally the one that Aaron founded many years prior!), I find it plausible that, in some scenarios, the decision that EA Munich made would be the all-things-considered best one.
Hi Nadia, thanks for writing this post! It’s a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them – I particularly appreciate that you wrote candidly about challenges involving influential funders.
Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it’s frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing to the overall opacity in the field that’s causing some of these epistemic problems, and I’m a bit more hopeful about other ways of reducing that opacity. For example, if the field had more open discussions about things that are not very infohazardous (e.g., comparing strategies for pursuing well-defined goals, such as maintaining the norm against biological weapons), I suspect it’d mitigate the consequences of not being able to discuss certain topics (e.g. detailed threat models) openly. Of course, that just raises the question of what is and isn’t an infohazard (which itself may be infohazardous...), but I do think there are some areas where we could pretty safely move in the direction of more transparency.
I can’t speak for other organisations, but I think my organisation (Effective Giving, where I lead the biosecurity grantmaking program) could do a lot to be more transparent just by overcoming obstacles to transparency that are unrelated to infohazards. These include the (time) costs of disseminating information; concerns about how transparency might affect certain key relationships, e.g. with prospective donors whom we might advise in the future; and public relations considerations more generally. These are definitely very real obstacles, but they generally seem more tractable than the infohazard issue.
I think we (again, just speaking for Effective Giving’s biosecurity program) have a long way to go, and I’d personally be quite disappointed if we didn’t manage to move in the direction of sharing more of our work during my tenure. This post was a good reminder of that, so thanks again for writing it!
+1. I always assumed that the ‘Open’ in ‘Open Philanthropy’ referred to an aspiration for a greater degree of transparency than is typically seen in philanthropy, and I generally support this aspiration being shared in the wider effective altruism philanthropic space. The EA Funds are an amazingly flexible way of funding extremely valuable work – but it seems to me that this flexibility would still benefit from the scrutiny and crowd-input that becomes possible through measures like public reports.
Thanks for sharing this!
I think this quote from Piper is worth highlighting:
(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.
I broadly agree with this, except I think the first “if” should be replaced with “insofar as.” Even as someone who works full-time on existential risk reduction, it seems very clear to me that longtermism is causing this obvious and immediate harm; the question is whether that harm is outweighed by the value of pursuing longtermist priorities.
GiveWell growth is entirely compatible with the fact that directing resources toward longtermist priorities means not directing them toward present challenges. Thus, I think the following claim by Piper is unlikely to be true:
My main takeaway from the GiveWell chart is that it’s a mistake to believe that global health and development charities have to fight with AI and biosecurity charities for limited resources.
To make that claim, you have to speculate about the counterfactual situation where effective altruism didn’t include a focus on longtermism. E.g., you can ask:
Would major donors still be using the principles of effective altruism for their philanthropy?
Would support for GiveWell charities have been even greater in that world?
Would even more people have been dedicating their careers to pressing current challenges like global development and animal suffering?
My guess is that the answer to all three is “yes”, though of course I could be wrong, and I’d be open to hearing arguments to the contrary. In particular, I’d love to see evidence for the idea of a ‘symbiotic’ or synergistic relationship. What are the reasons to think that the focus on longtermism has been helpful for more near-term causes? E.g., does longtermism help bring people on board with Giving What We Can who otherwise wouldn’t have been? I’m sure that’s the case for some people, but how many? I’m genuinely curious here!
To be clear, it’s plausible that longtermism is extremely good for the world all-things-considered and that longtermism can coexist with other effective altruism causes.
But it’s very clear that focusing on longtermism trades off against focusing on other present challenges, and it’s critical to be transparent about that. As Piper says, “prioritization of causes is at the heart of the [effective altruism] movement.”
Hi!
This is Joshua, I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants supporting UNIDIR’s work on biological risks – see, e.g., this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.
“Open Phil posts all of its grants with some explanation.”
I don’t think that’s accurate; I believe some of their grants are not posted to their website.
Thanks for writing this!
One thing I really agreed with.
I particularly appreciate your point about avoiding ‘bait-and-switch’ dynamics. I recognise that it’s important to build broad support for a movement, but I ultimately think that it’s crucial to be transparent about what the key considerations and motivations are within longtermism. If, for example, the prospect of ‘digital minds’ is an essential part of how leading people in the movement think about the future, then I think that should be part of public outreach, notwithstanding how off-putting or unintuitive it may be. (MacAskill has a comment about excluding the subject here).
One thing I disagreed with.
I agree it’s good to be transparent about priorities, including regarding the weight placed on AI risk within the movement. But I tend to disagree that it’s so important to share subjective numerical credences, and I think doing so can have real downsides, especially for extremely speculative subjects. Making implicit beliefs explicit is helpful. But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the “1-in-6” estimate from The Precipice may have led to premature anchoring on that figure, and it is likely relied upon too much relative to how speculative it necessarily is.
I appreciate that there are many benefits to sharing numerical credences, and you seem like an avid proponent of the practice (you do a great job of it in this post!), so we don’t have to agree. I just wanted to highlight one substantial downside.