Interested in biosecurity, policy, AI governance, community building, and entrepreneurship.
Talos Fellow, trained as a biomedical engineer in France and Switzerland (MSc).
Originally from France.
Last Wednesday, I joined a panel at the 2024 iGEM Grand Jamboree on “Youth & Education for Biosecurity, Biosafety, and Global Policy” within the Responsibility Conference. It took me some time to reflect on what message I wanted to pass on to the diverse audience (team members, sponsors, governments, people from academia and industry, etc.).
Here is what I said:
So here we are, at the Responsibility Conference: let’s talk about responsibility. I believe each and every one of you, whatever your age, origin, background, skin color, gender… can embody this responsibility, first by not doing harm, but also by proactively contributing to this world that we’re building together, and making it better.
From there, many possible paths open up, especially when one thinks about what it means for their career, and I think there are two structuring questions one can use to explore them.
First: how can I contribute? What am I good at? What do I enjoy doing on a day-to-day basis?
And second: what am I driven by? What will bring me long-term motivation? What will bring me a feeling of fulfillment?
If I take my own story as an example, I have a scientific background, but I regularly found myself asking this question: will I find long-term motivation in researching, say, some specific bacterial pathogen? People around me empowered me to think I could contribute more. And because contributing more, having more impact, is a driver for me, it was essential that I surround myself with aligned people and understand, with their help, how I could do that.
I realized that I believe biotechnologies come with real, concrete, potentially large-scale risks, and that enabling the extraordinary benefits of biology while preventing those risks was a strong motivator for me. I hope I can contribute to a future in which the world is safe from biological threats.
However, when we are students or even young professionals, we tend to follow the paths that have been laid down before us by others. Usually that means keeping the steering wheel straight and going deeper into one domain of expertise. I get it, I started doing a Ph.D. because science is fascinating and that’s the natural next step, right? But is the mission driving me? It’s a tough question, it’s uncomfortable and at times overwhelming, but one day maybe Future You will thank you for doing the work.
Another point I would like to get across is: there are options beyond the duality of academia and industry for scientists. I’ve been there; when I was at uni, that’s the choice I was told I had. But scientists can contribute in other ways, as some of my fellow panelists exemplify, where the core of the job is not technical, but understanding the technicalities is still very important.
Let’s talk about one of those options: policy. There aren’t that many obvious ways to transition from science to policy, and I wanted to talk about my own bumpy road here.
When I was asking myself hard what I was driven by, I realized two things. First, research is fascinating, but I want collaboration and building strong connections to be a big part of my job, rather than pushing the intellectual frontier. Second, I’m less excited about being at the forefront of innovation, and more about optimizing how our discoveries can be applied to improve lives at scale and have real-world impact.
At some point and after dozens of conversations with friends or strangers whose jobs I was inspired by, I really felt drawn to the policy world. It seemed like a way to follow the motivation I mentioned earlier: enabling the extraordinary benefits of biotech while preventing potential risks.
I’m not gonna say it was easy, and it required me to build resistance to rejection, but at some point the stars align. I found the Talos Fellowship, which trains young professionals to contribute to emerging tech policy through a reading group and then a 6-month placement in a think tank. While it is mostly focused on advanced AI, you probably know that AI has huge potential in enabling biotechnologies, and there is a lot to work on at the interface of the two. I’m grateful for the opportunity I got at Talos to skill up, and to now be placed at the Simon Institute for Longterm Governance, a think tank working on international and multilateral AI governance. My goal is to leverage the skills I’m building in bio, AI, and policy to contribute to making the future safer for everyone.
What I want you to take away from here is the following:
Two questions: how can you contribute, and what is driving you?
Also, there is more out there than what you can currently see for yourself. It takes exploring and considering unusual ideas seriously. Be bold, reach out, talk to people you’re inspired by, seek the difficult feedback that will allow you to make better decisions:
Dare to diverge from the path, the only thing you need is to step away and make a turn. Find the people that will empower you to do that.
By doing this, you also unlock the potential of doing more and better, which you might feel a responsibility for:
Scientists can, and should, contribute beyond science.
Super cool thanks! Will keep that in mind too.
Thanks for this! We are currently working on a visualization with a volunteer. Let’s see how it goes, but we’ll definitely keep your offer in mind!
Hamish already shared his code with me, but in the end we decided to go with something else, at least for now.
Thank you Ysa for that thoughtful aggregation of advice. I hadn’t realized this wasn’t already something one could find on the Forum, and I’m glad you filled that gap, and so well. I think I’d add:
Accept confusion, and try to resolve it. Some people might think wildly differently from you. Work on improving clarity of thoughts and arguments, even with the short amount of time you have. It might be the best insight you’ll take away with you.
Welcome overwhelm signals, and don’t overlook them. A friend once told me they would book ‘coexisting 1-1s’ with friends: if you have friends also coming to EAGx, schedule a slot with them where you’re not expected to talk. It can be a good forcing mechanism to take a well-deserved break.
Thank you Jérémy for this thorough work! It’s quite interesting that the earliest interventions seem the most risky, while early response seems to be the one to prioritize.
Thank you John for sharing! This is super interesting.
In particular, the “PNG” part makes me reflect on community belonging and inclusivity; I think that’s an important aspect.
Thanks for sharing! I’m honestly not sure what to answer to this; I feel some of your doubts / points are already addressed in the post. I guess that’s where the crux is: whether or not you believe increasing the diversity of representation would be positive for the movement, as a way to show others that EA is not a sticker that absolutely defines the whole set of beliefs/values of a person. Maybe I’ll change my mind in the future about this. But I probably still want to advocate for making the decision to “not affiliate” intentional, when it could just be a non-decision, a default.
Damon Centola (Change: How to Make Big Things Happen)
Policy speakers (US Congress, European Commission, …)
More about biosecurity (projects and funders)
Maybe a few more people not focusing on the US or UK? (not sure if the current balance is actually suboptimal)
Thank you for sharing that, I think those numbers are a great addition to this post. It would be amazing to have an update of this, post-FTX and post-OpenAI.
Thank you! That’s a useful framing.
Thank you Camille for sharing this.
I’ve thought a bit about that angle too, and I tend to think that having more people be outspoken about their belonging to a certain group is also a strategy (though obviously, a costly one) for oppressed minorities. I think the quote from Change touches a bit on that. Though being a woman and being LGBTQ+ are very different flavours of ‘minority belonging’ (for several reasons that I won’t expand on here). To debunk stereotypes about certain populations, you need to show their diversity, or at least that seems to me like one of the viable strategies.
I am, by no means, saying it’s not costly. I think I would want more people to consider sharing the cost.
About this point
the wealth of resources and arguments I could throw at people, or just knowing they’ll run into them at some point
I’d like to point you to this resource, which does provide guidance on responding to criticisms of EA, if you don’t know about it. And I do think individuals are working to “debunk false ideas and over-generalizations related to EA” at their scale. I’m sad to read you’ve been discouraged from doing so. I do agree that the EA community as a whole has not been communicative enough at the crucial moments when it made the news, and I have been (with other community builders) quite frank about it to CEA’s new CEO Zach Robinson and CEA’s Communications Team. Hopefully, they are currently taking steps in this direction (as Zach’s talk at EAG London suggests). I have also suggested that community builders could play a bigger part there, but it takes time.
I hope the EA community can step up to the task and better support each other in that endeavor.
To the second point, yes, I probably agree, and it’s an approach I find useful.
But sometimes you don’t get the chance to say so many words, and giving people the opportunity to connect the dot “EA” to “your values and your actions” might increase the understanding one has of EA (as it would for feminism), without necessarily engaging in a lengthy conversation with every person who could otherwise connect the dots by observing your actions from a bit further away. I hope that makes sense.
Thank you for sharing this!
Were it the case that I thought that the community was mistaken on any or many of its determinations, I would still consider myself to be EA.
I think that’s nice.
One issue is likely the relative unipolarity of EA funding.
I agree with this. I think this is one of the major issues, and I’ve mentioned it in the past.
no person would be perfectly represented by its collective actions.
Yes, I’d guess one could say it’s the other side of the token problem, and why we might need to show a greater diversity of people “affiliating”.
Thank you for your comment.
On your first points, I think they are totally fair. I feel those are the preconditions of the prisoner’s dilemma.
Then, I see your point on free-riding and I will reflect on it. I’d add that people mentioning how EA might have influenced them, or how an organization might make decisions influenced by EA principles, seems completely different (to me) from “being an EA” (“identifying”) and orgs “being EA orgs”. I tend to think those latter framings are not necessarily useful, especially in the context of what I’m advocating for.
Thank you very much for taking the time to write this.
I generally don’t feel disagreement with what you say; I don’t think it’s completely opposed to what I’m advocating for. I feel that there’s a great deal of interpretation around the words used, here “affiliation” (and, as mentioned at the beginning of the post, not “identity”).
I do think more people “affiliating” will make EA less of an ingroup / outgroup, and more of a philosophy (a “general held value system”, as you say in the beginning) that people adhere to, to some extent, especially if framed as “this is a community that inspires me, those are ideas that are influencing my actions”.
Thank you for sharing!
I would like to emphasize my choice of words (“affiliation” and not “identity”), as I do understand the off-putting implications of “being an EA” (as opposed to “being part of the EA community” or another less identity-defining formulation).
I also want to add that I don’t think anyone can claim they endorse everything about a movement they consider themselves a part of (e.g. feminism or civil rights or...); I don’t think it’s possible to expect full alignment from anyone in any movement. I think it’s important people know that about EA. I think there are as many definitions of EA as there are “members”. But I also think not defining it for yourself will leave more space for others to define it for you (and most likely, wrongly). (That is to say, I probably won’t support anything along the lines of “you misunderstanding EA” or “you not being aligned with EA”, but I can’t say anything with certainty as I don’t have enough context about you.)
I hope that makes sense.
Thank you!
I think people tend to trust you more if they notice your transparency and sincerity, especially over time. I think transparency has high long-term rewards. It also enables much better sense-making: you can connect the dots when this bad thing happened but you know someone who was outspoken about an affiliated thing. How do those two things make sense together? Is there some misunderstanding that one can gain clarity on?
A bit is captured here:
I’d guess their trust in the org is high particularly because they have always been transparent about something that is “courageous” to own. And they wanted to understand why people they trust still stand by something that seems so controversial.
Does that help?
3. More open affiliation would show the true diversity of the EA community and prevent “tokenization” of the few vocal members.
I would replace “vocal” with “visible” (e.g. I don’t think the members of the OpenAI board were especially vocal about their EA affiliation; people simply singled them out).
Thank you so, so much for sharing and writing that deep and compelling case.
I’m late to the party, I know. I’ve known about that post for a while, and I’ve put off reading it for some reason. Now I think it had something to do with being an EA community builder at the time I heard about it, and feeling the heavy weight of educating people about that topic: being torn between the need to do so, “for the greater good”, and the fact that many people will unconsciously reject those crucial conversations. It’s hard to know how to properly approach each different individual, who might need to hear different things from the one before, which made the weight too heavy to carry most of the time.
So thank you. I’m glad I ended up reading it. I hope we can pave the way to a community space where we share the burden, where the underrepresented communities and the survivors don’t bear the burden of education and change alone.
I’m also deeply aware that I’m writing this at the dawn of a sinister era for people at risk in one of the most powerful places in the world. I feel that the way we’ve been having those conversations might have driven the people who are unlikely to be at risk to reject the burden, because it is so complex and difficult to share. It makes me sad, and I wonder how we can improve this together, in a way that doesn’t push people to vote in a way that will have truly negative consequences for those at risk.