I’m disturbed to see an EA project using animal testing. The decision to use someone without their consent, and presumably to take their life, is a huge one, but in this post it isn’t presented as such. I agree with consequentialism and maximizing wellbeing/minimizing suffering, but I think these frameworks can be used to justify anything as long as we believe it has some benefit in the long term. To protect against this, I think we should have rules against killing others or using others against their will. I thought this was generally accepted within EA, so I was surprised and disappointed to see this project present animal testing as a positive thing.
At present, it is basically impossible to advance any drug to market without extensive animal testing – certainly in the US, and I think everywhere else as well. The same applies to many other classes of biomedical intervention. A norm of EAs not doing animal testing basically blocks them from biomedical science and biotechnology; among other things, this would largely prevent them from making progress across large swathes of technical biosecurity.
This seems bad – the moral costs of failing to avert biocatastrophe, in my view, hugely outweigh the moral costs of animal testing. At the same time, speaking as a biologist who has spent a lot of time around (and on occasion conducting) animal testing, I do think that mainstream scientific culture around animal testing is deeply problematic, leading to large amounts of unnecessary suffering and a cavalier disregard for the welfare of sentient beings (not to mention a lot of pretty blatantly motivated argumentation). I don’t want EAs to fall into that mindset, and the reactions to this comment (and their karma totals) somewhat concern me.
I wouldn’t support a norm of EAs not doing animal testing. But I think I would support a norm of EAs approaching animal testing with much more solemnity, transparency, gratitude and regret than is normal in the life sciences. We need to remember at all times that we are dealing with living, feeling beings, who didn’t & couldn’t consent to be treated as we treat them, and who should be cared for and remembered. And we need to make sure we utilise animal testing as little as we can get away with, and make what testing we do use as painless as possible.
Finally, while I don’t know everyone on the Alvea team personally, those I do know have a strong track record of deeply believing in, and living out, EA values around impartial concern for all sentient beings. I expect that if I had detailed knowledge of their animal testing decisions, I would believe they were necessary and the right thing to do. As an early test case on EAs in animal testing, I think it would be worth the Alvea team responding to this and developing a transparent policy around animal testing – but as a way to set a good example, not because I think there is reason to be suspicious of their decisions or motives.
I agree with consequentialism and maximizing wellbeing/minimizing suffering but I think these frameworks can be used to justify anything as long as we believe it has some benefit in the long term.
I strongly disagree with this framing as presented. Consequentialism cannot correctly be used to justify greater harm (or the allowance of greater harm) to prevent a lesser harm, and if anything, naive consequentialism ought to be more restrictive as an ethical philosophy than other common philosophies, not less.
To protect against this I think we should have rules against killing others or using others against their will. I thought this was generally accepted within EA
I agree with Rockwell, however, that you shouldn’t have been downvoted so heavily without explanation, and that the post should have at least acknowledged ethical concerns with animal testing.
I’m disappointed this comment was heavily downvoted; even if people have strong disagreements, it is at least a valid perspective to raise. I would like to hear more from the Alvea team about why they went this route and whether there were opportunities for harm reduction.
I mean, it seems like given the potential upside of the project, the downside from animal testing would have to be quite large to be worth avoiding (or the cost of avoiding it very low). The comment also implies a consensus about EA that seems straightforwardly wrong, i.e. that we have strong rules to avoid harm for other beings. Indeed, I feel like a very substantial part of the EA mindset is to be capable of considering tradeoffs that involve hurting some beings and causing some harm, if the benefits outweigh the costs.
EA Consensus
I agree that there is not a consensus, and my impression is that this is an area of genuine inconsistency among EAs, though I can’t speak to the distribution. I have had conversations with several EAs who either share Marianne’s sentiments or feel a significant degree of uncertainty about where they stand, both specifically about Alvea and more generally about tradeoffs of this nature. I don’t see their perspectives typically expressed or represented here on the Forum.
Caveating as a Norm
My impression is that even among animal-focused EAs who agree with tradeoffs such as this one, there is still concern about cavalierness in how these actions are discussed. The general sentiment is something along the lines of, “EAs wouldn’t talk about this so flippantly if the individuals being harmed were human,” which may or may not be true. In the context of a post like the OP, which is communicating a great deal of pressing information in a palatable three-minute read, I imagine a resolution could be as simple as a footnote along the lines of, “We recognize animal testing is an ethically loaded issue. Our reasons for employing it are beyond the scope of this post.”
Also, Gavin’s comment suggests there is some nuance to Alvea’s particular animal testing activities, and if the team has the capacity, I would be interested in learning more.
(I should note as I haven’t said it elsewhere that despite these concerns, I am impressed with Alvea’s work and look forward to hearing more updates.)
Not all animal testing is lethal or even entails suffering (just a risk of suffering). I don’t know about other participants, but the animals in the initial intake seem to be doing fine.
Discussions of wider EA community views aside, I would very much like to see a response to this in this particular context at least. Anyone from Alvea?
The claim that rules against killing or using others are generally accepted within EA seems wrong to me, given that only about 23% of EAs are vegan and about 48% eat meat of some form.
In addition, even Peter Singer has indicated that animal testing can in some cases be justifiable.
I don’t think we should police other people’s mindset. Doing so is both directly harmful and liable to create groupthink, at least in some ways.
I, personally, very much do not feel we should consider tradeoffs that include causing direct harm to others.