Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Hi James!
Good question. That estimate was for our entire process of producing the paper, including any relevant research. We wrote on a topic that somewhat overlapped with areas we already knew a bit about, so I can imagine there’d be extra hours if you write on something you’re less familiar with. Also, I generally expect that the time investment might vary a lot between groups, so I wouldn’t put too much weight on my rough estimate. Cheers!
Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
Thanks for your comment, much appreciated!
I wholeheartedly agree that taking action to do something is often the most important, and most desperately lacking, component. Why is it lacking?
One potential cause could be that many people agree with a critical take, but those people are not the ones with much influence, e.g. because decision-making power is concentrated.
Another explanation could be that there are actually many people who agree with a critical take on the direction of effective altruism and would have the ability to do something about it, but they just can’t/won’t dedicate time to it given their other professional commitments. (I count myself in that category. If I had a clone of myself that had to work on something other than biorisk, I might ask them to work full-time on ‘steering’ for this movement.)
Thankfully, we can expect a large influx of new, exciting members of the community (thanks to the awesome rowing of so many community builders!) looking for projects to take up. For that reason, I think it’s important that (1) there’s a culture of dissent that prompts people to think about new directions to pull the movement in, and (2) there are institutions in place that can facilitate the implementation of critical work, e.g. funding, positions within organisations, or mechanisms for distributed decision-making for the movement.
Hey Linch, thanks for this thoughtful comment!
Yeah, I agree that my examples of steering are sometimes closely related to other terms in Holden’s framework, particularly equity – indeed, I have a comment about that buried deep in a footnote.
One reason this happens, I think, is that a super important concept for steering is moral uncertainty, and taking moral uncertainty seriously can imply putting greater weight on equity than you otherwise might.
I guess another reason is that I tend to assume that effective steering is, as an empirical matter, more likely to be achieved if you incorporate a wide range of voices and perspectives. And this does in practice end up being similar to efforts to “amplify the voices and advance the interests of historically marginalized groups” that Holden puts under the category of equity. But yeah, like you say, it can be hard to differentiate whether people advocate for equity and diversity of perspectives for instrumental or intrinsic reasons (I’m keen on both).
I also think your last remark is a fair critique of my post – perhaps I did bring in some more controversial (though, to me, compelling!) perspectives under the less controversial heading of steering.
A very similar critique I’ve heard from two others is something like: “Is your argument purely that there isn’t enough steering going on, or is it also that you disagree with the current direction of steering?” And I think, to be fair, that it’s also partly the latter for me, at least on some very specific domains.
But one response to that is that, yes, I disagree with some of the current steering – but a necessary condition for changing direction is that people talk/care/focus more on steering, so I’m going to make the case for that first.
Thanks again for your comment!
As discussed in a bit more detail in this post, I’d love to see themed prizes focusing specifically on critical engagement with effective altruism. This could be very broad (e.g., “Best critique of the effective altruism movement”) or more narrow (e.g., something like “Best critique of a specific assumption that is widely made in the community” or “Best writeup on how applied longtermism could go wrong”).
To the next content specialist on the Forum: I’d be happy to discuss further!
Sounds good! I’ll post a comment and make sure to reach out to the next content specialist. Thanks!
Thanks, Aaron, this is a great suggestion! I’ll try to get around to writing a very brief post about it this weekend.
On a related note, I’d be curious to hear what you think of the idea of using EA Forum prizes for this sort of purpose? Of course, there’d have to be some more work on specifying what exactly the prize should be for, etc.
If you know who will be working on the Forum going forward, I’d love to get a sense of whether they’d be interested in doing some version of this. If so, I’d be more than happy to set up a meeting to discuss.
James, thanks for pointing this out, and thanks, Pablo, that was indeed the link I intended to use! Fixed it now.
Thanks for laying this out so clearly. One frustrating aspect of having a community composed of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as the author may have intended them, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead’s (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1]
I do understand the value of hypothetical inquiry as part of analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the effort to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.
In my opinion, it’s insufficient to note that an author intends for some claim to be “hypothetical” or “abstract” or “pro tanto” or “all other things equal”, if the claim is likely to be received or applied as literally written. E.g., proposals for ubiquitous surveillance cannot be dismissed as merely hypothetical if there’s an appreciable chance that some readers come away even slightly more supportive of, or open to, the idea of ubiquitous surveillance in practice.
To be clear, I’m not saying that the community shouldn’t conduct or rely on the kind of hypothetical-driven philosophy exemplified in Bostrom’s VWH or in Beckstead’s dissertation. But I do think it’s important, then, to either i) make it clear that a piece of writing is intended as analytic philosophy that generally should be applied with extreme care to the real world or ii) to do a much better job at incorporating historical context and taking potential misinterpretations and misapplications extremely seriously.
For VWH, Option i) could look like moving the piece to an analytic philosophy journal and replacing the Policy Implications box with a note clarifying that this is a work of philosophy, not policy analysis. Option ii) could involve an even more extensive discussion of downside risks – I genuinely don’t think that the six sentences quoted above, on how “unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences”, constitute anywhere near the appropriate effort to manage the downside risks associated with a policy article on ubiquitous global surveillance. Specifically, that effort would require engaging with the real-world history of totalitarian surveillance and its consequences; outlining in more detail how the surveillance system could go wrong or even itself pose an existential risk; and warning in much more unequivocal terms about the danger of misunderstanding or misapplying this proposal.
For Beckstead’s dissertation quote, Option i) is, to be fair, already somewhat in play, given that the quote is from a dissertation in analytic philosophy and there’s a good amount of ceteris-paribus-hypothetical-pro-tanto-etc. caveating, though the specific passage could maybe do with a bit more. Option ii) could involve embedding the quote in the context of both historical and modern philanthropy, particularly through a postcolonial lens; also discussing hypothetical counterexamples where the opposite conclusion might hold; and cautioning in the strongest possible terms against specific misunderstandings or misapplications of the principle. Nowadays – though arguably less so in 2013, when it was written – it could also involve a discussion of how the principle relates to the reallocation of funds that could plausibly have been used for global health towards improving the comfort of affluent individuals in the Global North, such as myself. I understand that this is a philosophy dissertation, so the above might not be easy to include – but then we face the difficult challenge of relying heavily on ahistorical, non-empirical philosophy as guidance for a very real, very practical movement.
The bottom line is that certain seminal texts in effective altruism should be treated either as works of analytic philosophy, which afford bold and even troubling speculation, or as policy guidance, which requires extreme caution in the presence of downside risks; they can’t be both at once.
___
[1] For context, here’s the quote in question, from Beckstead’s dissertation, On the overwhelming importance of shaping the far future:
“Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.”
+1. I always assumed that the ‘Open’ in ‘Open Philanthropy’ referred to an aspiration for a greater degree of transparency than is typically seen in philanthropy, and I generally support this aspiration being shared in the wider effective altruism philanthropic space. The EA Funds are an amazingly flexible way of funding extremely valuable work – but it seems to me that this flexibility would still benefit from the scrutiny and crowd-input that becomes possible through measures like public reports.
This list is certainly profoundly not-exhaustive for me but I’d rather post this version than spend ages thinking of a better answer and ultimately not posting anything. So, here goes:
Cassidy Nelson and Gregory Lewis. When I was considering applying for my current role at the Future of Humanity Institute (FHI), the fact that they were leading the biorisk team was a pretty big consideration in favour of applying. I had some reservations about (my perceived-from-afar version of) the culture at FHI, and these two people just made me really excited about working there. Cassidy had been an incredibly smart and empathetic mentor during a fellowship I did and I really liked Gregory’s post on epistemic modesty.
Mushfiq Mobarak, of No Lean Season and Covid-19-mask-RCT fame. Based on a few in-person interactions (through my undergrad EA group), he seems to me to be what all economists should aspire to be: exceptionally clear-minded and thoughtful, yet compassionate.
Paul Farmer. Mountains Beyond Mountains is a fantastic book. His approach to making the world better is pretty profoundly different from that of effective altruism, and I suspect that most readers of this Forum (myself included) would disagree pretty strongly with many or most of his principles. But I find it hard not to be inspired by his sheer level of commitment to making the world better and his unwavering insistence that every human being deserves the same rights and standards of living.
The people with whom I co-organised our undergrad student group—especially (but not limited to) Xuan, Frankie Andersen-Wood, Mojmír Stehlík, and Jessica McCurdy—who helped create an awesome, inclusive, and impactful EA community where there easily could’ve been none.
Terrific, thanks!
Thanks, Pablo!
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead’s personal character or motivations, which I definitely assume to be both admirable and altruistic.
As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author’s personal actions.
Thanks for making this list, Tessa – so much that I have yet to read! And thanks for including our article :)
I thought I might suggest a few other readings on vaccine development:
Long Shot: Vaccines for National Defense (2012), Kendall Hoyt
Prototype pathogen approach for pandemic preparedness: world on fire (2020), Barney Graham and Kizzmekia Corbett
Novel Vaccine Technologies: Essential Components of an Adequate Response to Emerging Viral Diseases (2018), Barney Graham, John Mascola and Anthony Fauci
Also, I think you omitted a super important 80k podcast: Ambassador Bonnie Jenkins on 8 years of combating WMD terrorism.
Finally, since you already included a ton of readings from fellow EAs, I thought I’d also suggest Questioning Estimates of Natural Pandemic Risk (2018), David Manheim.
Thanks again for making this!
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn’t do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises of the argument are extremely speculative.
I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like ‘we should prioritise people like ourselves.’
Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.
Seconded.
In case you (or anyone else) are interested, there’ll be a panel discussion with a few biosecurity experts this Thursday: 2022 Next Generation for Biosecurity Competition: How can modern science help develop effective verification protocols to strengthen the Biological Weapons Convention? A Conversation with the Experts.