Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
This is a very welcome contribution to a professional field (i.e., the GCBR-focused parts of the pandemic preparedness and biosecurity space) that can often feel opaque and poorly coordinated — sincere thanks to Max and everyone else who helped make it!
Thanks for sharing this and congrats on a very longstanding research effort!
Are you able to provide more details on the backgrounds of the “biorisk experts”? For example, the kinds of organisations they work for, their seniority (e.g., years of professional experience), or their prior engagement with global catastrophic biological risks specifically (as opposed to pandemic preparedness or biorisk management more broadly)?
I ask because I’m wondering about potential selection effects with respect to level of concern about catastrophe/extinction from biology. Without knowing your sampling method, it seems possible that the survey disproportionately reached people who worry more about catastrophic and extinction risks than the typical “biorisk expert.”
Hi!
This is Joshua, I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants to UNIDIR’s work on biological risks, e.g. this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.
To be clear, I definitely think there’s a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you’re pointing to an important area of meaningful substantive disagreement!
Yes, benchtop devices have significant ramifications!
Agreed, storing the database on-device does sound much harder to secure than some kind of distributed storage. Though I can imagine that some customers will demand airgapped on-device solutions, where this challenge could present itself anyway.
Agreed, sending exact synthesis orders from devices to screeners seems undesirable/unviable, for a host of reasons.
But that’s consistent with my comment, which was just meant to emphasise that I don’t read Diggans and Leproust as advocating for a fully “public” hazard database, as slg’s comment could be read to imply.
Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.
One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly lower risks for misuse than truly public access. Relevant quote:
“Sustained funding and commitment will be required to build and maintain a database of risk-associated sequences, their known mechanisms of pathogenicity and the biological contexts in which these mechanisms can cause harm. This database (or at a minimum a screening capability making use of this database), to have maximum impact on global DNA synthesis screening, must be available to both domestic and international providers.”
Also worth noting the parenthetical about having providers use a screening mechanism with access to the database without having such direct access themselves, which seems like a nod to some of the features in, e.g., SecureDNA’s approach.
Hi Nadia, thanks for writing this post! It’s a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them –– I particularly appreciate that you wrote candidly about challenges involving influential funders.
Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it’s frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing to the overall opacity in the field causing some of these epistemic problems, and I’m a bit more hopeful about other ways of reducing that opacity. For example, if the field had more open discussions about things that are not very infohazardous (e.g., comparing strategies for pursuing well-defined goals, such as maintaining the norm against biological weapons), I suspect it’d mitigate the consequences of not being able to discuss certain topics (e.g. detailed threat models) openly. Of course, that just raises the question of what is and isn’t an infohazard (which itself may be infohazardous...), but I do think there are some areas where we could pretty safely move in the direction of more transparency.
I can’t speak for other organisations, but I think my organisation (Effective Giving, where I lead the biosecurity grantmaking program) could do a lot to be more transparent just by overcoming obstacles to transparency that are unrelated to infohazards. These include the (time) costs of disseminating information; concerns about how transparency might affect certain key relationships, e.g. with prospective donors whom we might advise in the future; and public relations considerations more generally. These are definitely very real obstacles, but they generally seem more tractable than the infohazard issue.
I think we (again, just speaking for Effective Giving’s biosecurity program) have a long way to go, and I’d personally be quite disappointed if we didn’t manage to move in the direction of sharing more of our work during my tenure. This post was a good reminder of that, so thanks again for writing it!
Thanks for researching and writing this!
Thanks for doing this survey and sharing the results, super interesting!
Regarding
maybe partly because people who have inside views were incentivised to respond, because it’s cool to say you have inside views or something
Yes, I definitely think that there’s a lot of potential for social desirability bias here! And I think this can happen even if the responses are anonymous, as people might avoid the cognitive dissonance that comes with admitting to “not having an inside view.” One might even go as far as framing the results as “Who do people claim to defer to?”
Hi Elika,
Thanks for writing this, great stuff!
I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.
I have a personal anecdote that really speaks to your “do your homework” point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing “dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, and dissemination of research.” The idea that effective dual-use oversight requires intervention at multiple points felt solid, and some feedback we’d gotten on presentations of our work gave me the impression that this was a fairly novel framing.
It totally wasn’t! NSABB called for this kind of oversight throughout the research cycle (at least) as early as 2007, [1] and, in hindsight, it was pretty naïve of me to think that this simple idea was new. In general, it’s been a pretty humbling experience to read more of the literature and realise just how many of the arguments that I thought were novel based on their appearance in recent op-eds and tweets can be found in discussions from 10, 20, or even 50 years ago.
Alright, one element of your post that I would’ve framed differently: You put a lot of emphasis on the instrumental benefits of nuanced discussion in the form of building trust and credibility, but I hope readers of your post also recognise the intrinsic value of being more nuanced.
E.g., from the summary
“[what you say] does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact”
And the very last sentence:
“Always make sure ‘you’re invited back to the table’.”
This is a great point, and I really do think it’s possible to burn bridges and lose respect by coming across as ignorant or inflammatory. But getting the nuanced details wrong is also a recipe for getting solutions wrong! As you say, proper risk-benefit analysis for concrete dual-use research is almost always difficult, given that the research in question very often has some plausible upside for pandemic preparedness or health more generally.
And even if you know what specific research to draw red lines around, implementation is riddled with challenges: How do you design rules that won’t be obsolete with scientific advances? How do you make criteria that won’t restrict research that you didn’t intend to restrict? How do you avoid inadvertent attention hazards from highlighting the exact kinds of research that seem the most risky? Let’s say you’ve defined the perfect rules. Who should be empowered to make the tough judgment calls on what to prohibit? If you’re limiting access to certain knowledge, who gets to have that access? And so on, and so on.
I do think there’s value in strongly advocating for more robust dual-use oversight or lab biosafety, and (barring infohazard concerns), I think op-eds aimed at both policymakers and the general public can be helpful. It’s just that I think such advocacy should be more in the tone of “Biosecurity is important, and more work on it is urgently needed” and less “Biosecurity Is Simple, I Would Just Ban All GOF.”
Bottom line, I especially like the parts of your post that encourage people to be more nuanced, not just sound more nuanced.
From Casadevall 2015: “In addition to defining the type of research that should elicit heightened concern, the NSABB recommended that research be examined for DURC potential throughout its life span, from experimental conception to final dissemination of the results.”
Hi, thanks for your response and for the context about general university-related processes.
I’m pretty confident that if you ask almost anyone who has worked for FHI within the past two years, their overall account will match mine, even if they would frame some details differently. In my time there, I did not hear anyone present a significantly different version of events. (I don’t just mean this rhetorically – it’d be great to hear from anyone else at FHI here!)
I’ll just respond with some context to specific parts:
First, the entire Oxford University had a hiring freeze—not just the Philosophy Dept., not just FHI—the whole school paused hiring when Covid hit. The university I worked for did the same thing just 4 years prior when it’s endowment took a massive hit—hiring freezes are normal.
No, I was referring to a hiring freeze affecting FHI, specifically. As mentioned, the Global Priorities Institute – based in the same philosophy department, at the same university, and using the same offices – has been able to recruit new hires long after FHI stopped being able to do so in early 2021. (I think I received my job offer in late 2020; I know that RSP was unable to recruit for a new cohort when they tried to do so in January 2021).
For those running to point out GPI is hiring—that’s because the Forethought Foundation helps them circumvent the dept (which is what FHI was trying to do too).
As far as I am aware, most/all GPI staff are members of the University staff, i.e. hired by GPI rather than Forethought.
Second, the first thing one does when money is tight is prioritize—and unfortunately that brings up an uncomfortable conversation about which departments are essential/priority to sustain. [...] Unfortunately, Oxford might be reevaluating FHI in light of its new institute
I don’t know FHI’s exact financial situation, but I know that the institute relied to a significant degree on philanthropic funding (e.g. 1, 2, 3), as opposed to funding from the university. I think it is very unlikely that FHI’s inability to hire owes to not receiving funds from the University. For example, Open Philanthropy recommended a large grant for the Research Scholars Program in April 2021, but the program has still not been able to bring on new scholars since then.
Third, the “formal review” that JTM mentioned was probably a review being done on all depts
No, I am quite confident that this was specifically a review of FHI. This impression is based on conversations among staff. Looking just now, I also have an email that was sent to FHI staff from the Faculty of Philosophy that gives me the same impression.
Finally, JTM also mentioned that GPI hasn’t been struggling as much as FHI and suggest it’s because of senior leadership’s relationship with the University.
Maybe you would also call this hearsay, but in their resignation letter that was circulated among staff in August 2021 (shared by the resignee themself), one senior member of FHI’s staff referred specifically to the “unreasonably bad relationship with the faculty” as the cause of FHI’s inability to hire or fundraise.
Yes! That’s what I meant to refer to with this: “Two of the senior researchers occasionally organize seminar discussions, which I think are popular.”
I’m glad they’re happening more regularly now! I’ll make an edit to make that clearer.
Hi,
Is it because the FHI lacked funding, or that it didn’t manage to hire people[?]
My impression, as an employee who was never privy to much information beyond what I could gather from conversations with other researchers and a few occasional emails from the University: One of the biggest problems for FHI is that it has a poor relationship with the Department of Philosophy, its formal “home” within Oxford University. This breakdown of relations has meant that FHI has not been ‘allowed’ to hire since sometime in 2021 (I think I was among the last new people to join FHI when I joined in Oct’ 2021). That is, FHI did (as far as I am aware) have funds to hire people, but could not do so because hiring occurs through the University system, and the University disallowed it.
That raises the question, why the breakdown of relations? Your answer will depend on whom you ask – I imagine that the Department of Philosophy and Nick Bostrom would give very, very different answers as to what the root cause was. In early 2022, the Department conducted a formal ‘review’ of FHI, though I never heard what came of it. I have my own guesses that might be too speculative to share here, but I’ll just note that the Global Priorities Institute is housed in the same department, within the same university, uses the same offices, has some of the same funders, and, to my knowledge, does not have the same challenges, at least not to the point where they have been unable to hire.
or [is it] that people found better alternatives to their FHI roles?
Often that would be the reason, yes. Personally, I went down to part-time after a few months of full-time at FHI because I was keen to work for another organization that had a more cohesive strategy and more concrete projects.
When I later resigned, it was because I wanted to spend all of my time working for another organization, which was similarly related to the lack of management, support, and strategy at FHI. At that point, it was also because I did not want to be associated with the Institute, both given my worldviews and from a reputational perspective.
Hi!
I don’t know the answer to your specific question, but can perhaps provide some circumstantial context, as someone who was employed at the Future of Humanity Institute (i.e., the Oxford University entity, not the Foundation you’re asking about) between October 2021 and January 2023. I was full-time for about 3 months of that time and part-time for the rest, but worked out of the FHI offices the whole time.
In my ~1 year at FHI, I never heard anything about the Foundation, nor did I interact with it in any way.
More generally, if you are trying to get a sense of “What is up with FHI?”, I can add some of my impressions. I’m partly doing this because there has been a lot of discussion about Nick Bostrom lately, and insofar as some people might be considering his role at FHI when having those discussions, I think it’s useful for people to have a slightly more accurate understanding of what FHI is and isn’t.
I should stress that I don’t speak for the organization and can only speak to my own experience. I should also note that I imagine FHI looked very different in previous years, particularly before the ongoing hiring freeze that started in 2021.
The positive framing of FHI is that it is currently a place where about a handful of researchers are pursuing their work with a great degree of academic freedom, and very limited constraints in the form of, e.g., publication incentives. I also think there are some informal forms of mentoring and knowledge-sharing happening, mostly facilitated by the fact that FHI shares offices with other organizations and that the remaining FHI researchers seem eager to interact with interested people. I think the focus of the research happening there is mostly what you would expect based on the website. Two of the senior researchers organize seminar discussions, which I think are popular. [Edit: Removed ‘occasionally’, as the seminar is currently weekly.]
At the same time, my impression is that FHI is no longer a very operational ‘institute’ in the typical sense of that word. Because FHI has not been able to hire new staff since late 2021, and a large fraction of the staff have left, the remaining organizational infrastructure is very limited.
To give a few illustrative examples:
Since the director of strategy and operations left (after >5 years of hard work at the institute) in early 2022, the only form of operational support (that I observed) has come from the Oxford University Department of Philosophy. E.g., when I reduced my working hours at FHI and when I later resigned, all of that was handled by the Department, not FHI.
After the lead of the Research Scholars Program left in mid-2021, I think the program mostly came to a halt, though it had already been plagued by the hiring freeze.
After I went down to 0.2 FTE to focus on other work, the regular online biosecurity seminar that I had been running stopped.
After my line manager left in mid-2022, I did not have another manager assigned.
While there used to be some regular “lab meeting” type of calls where people would present their work, I don’t think they’re happening anymore.
Perhaps another illustrative thing to mention is that beyond one group call and one or two email threads, I did not have any interactions with the director Nick Bostrom during my entire time at FHI. (This despite us having offices twenty feet from each other.) My subjective experience is that Bostrom has either not been interested or not been able to act as the director of the organization over the past eighteen months.
I don’t want this to be read as a negative statement about the former and current staff at FHI. I know that some of them have worked very hard to mitigate and overcome the institutional difficulties at the institute, and I appreciate that.
Perhaps you can discount this comment slightly as the account of a disgruntled former employee, but I think all of what I said here is accurate and not misleading.
[Edit: I may not respond to all/any comments on this.]
Thank you for writing this.
“Open Phil posts all of its grants with some explanation.”
I do not think that this is accurate; I believe that some of their grants are not posted to their website.
Thank you for writing this, I think it’s very important.
Thanks for writing this, I found it interesting!