Announcing Alvea—A COVID Vaccine Project
We’ve had effective COVID vaccines for more than a year, but there are still countries where less than 10% of people have received a dose. Omicron has been spreading for almost three months, but pharma companies have only just started testing variant-specific shots. mRNA vaccines are highly effective, but they’re hard to manufacture and nearly impossible to distribute in parts of the developing world.
We won’t be ready for the next variant, or the next pandemic, until these problems are solved.
An ideal vaccine platform for pandemic preparedness would be fast, effective, cheap, robust, and scalable enough to reach a critical portion of the global population as soon as any new pathogen is identified. Ideas have been floating around the EA biosecurity community for years about what such a platform would look like, and what it would take to build it, but there hasn’t been much direct work in the space.
The most recent COVID wave convinced a few of us that it was time for that to change, so we launched Alvea to rapidly develop and deploy an Omicron-specific vaccine. In doing so, we’re aiming to build a platform for responding to future COVID variants or other novel pathogens, and to massively level up the technical, operational, clinical, organizational, and logistical capabilities of the longtermist biosecurity community.
We’ve initially structured Alvea as a three-month sprint to test the hypothesis that an exceptionally bright, dedicated group of people can quickly accomplish remarkable things in this space. In the past eight weeks, we’ve built a team of 35 drug developers, logistics experts, physicians, operators, and scientists to bring the project to fruition. We’re supported by a network of consultants and partners with deep experience in every aspect of vaccine development and infectious disease response.
Many of us are longtermist EAs who believe this project has a shot at transformative impact on biosecurity. All of us are committed to radically improving vaccine development, and are working around the clock to make this happen.
Our strategy at Alvea is simple: We’re building a streamlined platform for developing and deploying DNA vaccines using safe, well-validated components. DNA vaccines function similarly to mRNA vaccines, but are easier to produce and can be shipped around the world with no special storage requirements. Ultimately, we expect people to be able to easily self-administer our vaccines anywhere on the planet.
Getting our vaccines into the clinic—and out to the places where they’re needed—requires solving a staggering number of problems in preclinical development, manufacturing, international regulation, clinical trial design, logistics, and other areas, all at the same time. In many cases, the standard approaches to these problems are riddled with crippling inefficiencies. We’re eliminating those when we can, and building from scratch when we can’t.
In the 60 days since our inception, we’ve designed twelve versions of our Omicron vaccine, responded to the emergence of the BA.2 subvariant, produced hundreds of doses of our lead candidate, run preclinical experiments in mice and sheep, kicked off scalable manufacturing processes, planned Phase I and II clinical studies, and identified potential partner countries for accelerated trials. There’s an enormous amount of work still to be done, but we are well on our way.
Alvea is led by Ethan Alley and Grigory Khimulya (Co-CEOs), Cate Hall, and Kyle Fish. Our team is growing rapidly, and we’re particularly keen to expand in the following areas:
Wet laboratory (molecular biology, in vitro and in vivo development)
Clinical trial operations and logistics
General company operations
cGMP manufacturing and quality
Technical/scientific management
We’d love to hear from anyone who’s interested in dropping everything to get involved! Reach us at info@alveavax.com.
Hi all—Cate Hall from Alvea here. Just wanted to drop in to emphasize the “we’re hiring” part at the end there. We are still rapidly expanding and well funded. If in doubt, send us a CV.
Could you please post a specific hiring request on Twitter so we can share it? Also, what skills are you looking for, and are the jobs remote or, if they're based somewhere, where?
Just want to say awesome you are doing this and I wish you success.
I am extremely impressed by this, and this is a great example of the kind of ambitious projects I would love to see more of in the EA community. I have added it to the list on my post Even More Ambitious Altruistic Tech Efforts.
Best of luck!
This seems like a really exciting project; I look forward to seeing where it goes!
As I understand it, a lot of the difficulty with new medical technology is running big and expensive clinical trials, and going through the process of getting approved by regulators. What’s Alvea’s plan for getting the capital and expertise necessary to do this?
I’m really happy that you’re doing this and good luck!
I think this should inspire EAs to be more ambitious too.
What convinced you to dive in instead of relying on current development efforts?
Also, are you considering utilising challenge trials? Setting a precedent here could be very important for the future.
Do you have evidence for this claim? Are there specific countries you’re thinking of, where vaccination rates are low and there is either (a) survey data showing that most people have chosen not to get vaccinated, or (b) data showing that most people (or at least a lot more than 10%) have actually been offered the vaccine? Data on a country’s actual vaccine supply would also be helpful here, even if “supply” doesn’t mean “supply that actually reaches people in a well-organized way”.
FWIW, this is from yesterday: https://www.politico.com/news/2022/02/22/africa-asks-covid-vaccine-donation-pause-00010667
“The Africa CDC will ask that all Covid-19 vaccine donations be paused until the third or fourth quarter of this year, the director of the agency told POLITICO.
John Nkengasong, director of the Africa Centres for Disease Control and Prevention, said the primary challenge for vaccinating the continent is no longer supply shortages but logistics challenges and vaccine hesitancy — leading the agency and the African Vaccine Acquisition Trust to seek the delay.”
I think vaccine resistance imposes a ceiling, but the expense and sheer difficulty of distributing mRNA vaccines in cold storage is also a major problem (it’s why Covax refused donations for a bit), so a room-temperature, shelf-stable vaccine is likely to be quite valuable.
Thanks! This is exactly the kind of thing I was looking for.
(in case anyone else was confused, this was a reply to a now-deleted comment)
I’m disturbed to see an EA project using animal testing. The decision to use someone without their consent, and presumably take their life, is a huge one, but in this post it’s not presented like that. I agree with consequentialism and with maximizing wellbeing/minimizing suffering, but I think these frameworks can be used to justify anything as long as we believe it has some benefit in the long term. To protect against this, I think we should have rules against killing others or using others against their will. I thought this was generally accepted within EA, so I was surprised and disappointed to see this project present animal testing as a positive thing.
At present, it is basically impossible to advance any drug to market without extensive animal testing – certainly in the US, and I think everywhere else as well. The same applies to many other classes of biomedical intervention. A norm of EAs not doing animal testing basically blocks them from biomedical science and biotechnology; among other things, this would largely prevent them from making progress across large swathes of technical biosecurity.
This seems bad – the moral costs of failing to avert biocatastrophe, in my view, hugely outweigh the moral costs of animal testing. At the same time, speaking as a biologist who has spent a lot of time around (and on occasion conducting) animal testing, I do think that mainstream scientific culture around animal testing is deeply problematic, leading to large amounts of unnecessary suffering and a cavalier disregard for the welfare of sentient beings (not to mention a lot of pretty blatantly motivated argumentation). I don’t want EAs to fall into that mindset, and the reactions to this comment (and their karma totals) somewhat concern me.
I wouldn’t support a norm of EAs not doing animal testing. But I think I would support a norm of EAs approaching animal testing with much more solemnity, transparency, gratitude and regret than is normal in the life sciences. We need to remember at all times that we are dealing with living, feeling beings, who didn’t & couldn’t consent to be treated as we treat them, and who should be cared for and remembered. And we need to make sure we utilise animal testing as little as we can get away with, and make what testing we do use as painless as possible.
Finally, while I don’t know everyone on the Alvea team personally, those I do know have a strong track record of deeply believing in, and living out, EA values around impartial concern for all sentient beings. I expect that if I had detailed knowledge of their animal testing decisions, I would believe they were necessary and the right thing to do. As an early test case on EAs in animal testing, I think it would be worth the Alvea team responding to this and developing a transparent policy around animal testing – but as a way to set a good example, not because I think there is reason to be suspicious of their decisions or motives.
I strongly disagree with this framing as presented. Consequentialism cannot correctly be used to justify greater harm (or the allowance of greater harm) to prevent a lesser harm, and if anything naive consequentialism ought to be more restrictive as an ethical philosophy than other common philosophies, not less.
This seems wrong to me given that only about 23% of EAs are vegan and about 48% eat meat of some form.
In addition, even Peter Singer has indicated that animal testing can in some cases be justified research.
I agree with Rockwell, however, that you shouldn’t have been downvoted so much without explanation from people, and that the post should have at least acknowledged ethical concerns with animal testing.
I’m disappointed this comment was heavily downvoted as even if people have strong disagreements it is at least a valid perspective to raise. I would like to hear more from the Alvea team about why they went this route and if there were opportunities for harm reduction.
I mean, it seems like, given the potential upside of the project, the downside from animal testing would have to be quite large to be worth avoiding (or the cost of avoiding it very low). The comment also implies a consensus about EA that seems straightforwardly wrong, i.e. that we have strong rules against harming other beings. Indeed, I feel like a very substantial part of the EA mindset is being willing to consider tradeoffs that involve hurting some beings and causing some harm, if the benefits outweigh the costs.
EA Consensus
I agree that there is not a consensus and my impression is that this is an area of genuine inconsistency among EAs, though I can’t speak to the distribution. I have had conversations with several EAs who either share Marianne’s sentiments or feel a significant degree of uncertainty about where they stand, both specifically about Alvea and more generally about tradeoffs of this nature. I don’t see their perspectives typically expressed or represented here on the Forum.
Caveating as a Norm
My impression is that even among animal-focused EAs who agree with tradeoffs such as this one, there is still a concern for a cavalierness in how these actions are discussed. The general sentiment is something along the lines of, “EAs wouldn’t talk about this so flippantly if the individuals being harmed were human,” which may or may not be true. In the context of a post like the OP that is communicating a great deal of pressing information in a palatable three-minute read, I imagine a resolution to this could be as simple as a footnote along the lines of, “We recognize animal testing is an ethically loaded issue. Our reasons for employing it are beyond the scope of this post.”
Also, Gavin’s comment demonstrates there is seemingly some nuance to Alvea’s particular animal testing activities and if they have the capacity I would be interested in learning more.
(I should note as I haven’t said it elsewhere that despite these concerns, I am impressed with Alvea’s work and look forward to hearing more updates.)
I don’t think we should police other people’s mindset. Doing so is both directly harmful and destined to create groupthink, at least in some ways.
I, personally, very much do not feel we should consider tradeoffs that include causing direct harm to others.
Not all animal testing is lethal, or even entails suffering (just a risk of suffering). I don’t know about other participants, but the initial intake seems to be doing fine.
Discussions of wider EA community views aside, I would very much like to see a response to this in this particular context at least. Anyone from Alvea?
This is an inspiring project, and one I have wondered why EA has not addressed before now. I assume the IP rights will be waived to increase the ability to scale? Giving up IP rights is so much more valuable than giving hours and money, and seems to me to be EA aligned.
A parallel project to consider would be evaluation of trust in vaccines in LMICs. I have seen full lots of vaccines wasted in LICs because people do not trust the government, big pharma, health care workers, etc. It may be exclusive to the conflict zones in which I have worked, but vaccine refusal was at least as big a problem as lack of vaccines. Vaccines only work if they are used.
(I am not from Alvea):
To my knowledge, IP wasn’t the limiting factor over the last two years. For the big two vaccines, it was the lack of facilities that could handle mRNA encapsulation. People say that the Gates Foundation did damage by making AZ proprietary, but in practice it was licensed very permissively, and they ended up producing more doses than there was demand for. (It could still have been the wrong thing ex ante, i.e. before we knew its disappointing effectiveness.)
Be that as it may, removal of IP barriers still makes pharmaceuticals more accessible; IP barriers were one of the main reasons for lack of access to HIV/AIDS medications before they were challenged. I do not see a good reason for EA projects to hold on to patent rights, if the purpose of creating the vaccine is to do the most good for the most people. A donation of patent rights is a donation of time and money.