Thanks a lot for conducting this survey! I think it’s valuable to solicit ‘external’ (to the EA community) views on important questions that affect our decision-making, especially from plausible expert groups.
I’m quite shocked at the vehemence and dismissiveness of many of the comments on this post responding to these results. Here are some quotes from other commenters:
1.
“Preventing a death is equally important irrespective of age” strikes me as a genuinely insane position… No one would be indifferent between extending someone’s life by an hour, even a very valuable hour, and extending another person’s ordinary life by 30 years. But it’s just really strange to endorse that, but not apply the same logic to saving a 20-year old person over a 100-year old person.
2.
Yeah, it’s just transparently stupid stuff like “Each life counts for one and that is why more count for more. For this reason we should give priority to saving as many lives as we can, not as many life-years.” [Caveat, this quote is slightly out of context… it’s actually responding to the comment above.]
3.
Results give some support to the notion that bioethicists are more like PR professionals, geared to reproducing common sentiments rather than a group that is OK with sometimes taking difficult stances. Questions 6 & 7 especially seem like vague left-wing truisms… I still can’t get over 40% thinking being blind would be not disadvantaging if society was “justly designed”.
4.
It’s really pretty shocking to me how badly this makes bioethicists look.
Here are some possible explanations for the supposedly crazy results:
There are reasons, logic, or evidence that are considered or known amongst (some? many?) bioethicists that you yourself are not familiar with.
There are reasons, logic, or evidence that you are familiar with that (some? many?) bioethicists are not.
There are multiple conflicting principles or heuristics that apply in a given case, and the respondents just weigh those differently to you. E.g. this strikes me as likely what’s happening with the “It is most important to prevent someone from dying at which of the following ages” question.
The respondents have different ethical systems and worldviews to you, e.g. placing more weight on virtue ethics relative to consequentialism. That doesn’t make them insane or unthoughtful; ethics is really tough and probably depends on a lot of things like your upbringing. (Otherwise people would have similar ethical views across cultures, which clearly isn’t true.)
I share the intuition that many of the results in the survey seem surprising, and very discrepant from my own views. But regardless of whether you understand the reasons, surely after seeing that a group of people holds substantially different views to your own, your all-things-considered belief should shift at least somewhat towards those views, even if your “independent impression” does not? Especially when that group has years of relevant thought or expertise; these facts make it more likely that there are valid reasons underpinning their beliefs. Where there are discrepancies, there’s a chance that they are right and you/we are wrong.
I’m worried that some of the quotes above represent something like cognitive dissonance or a boomerang effect. Or at least they seem more like “soldier mindset” than I’d expect here, although I note some exceptions where several commenters (including some of those I quoted above) ask others for input on helping to understand and steelman the bioethicists’ views.
[Edit: the following paragraph felt true at the time of writing but I regret writing it as it seems pointlessly offensive/inflammatory itself in hindsight. I apologise to the people I quoted above.] Honestly, seeing the prevalence of these kinds of reactions in the comments makes me feel less confident in the epistemic health of this community and more worried about groupthink type effects. (Maybe some of these commenters have reasons for their vehemence and dismissiveness that I’m missing?)
I strongly agree with this comment. I think it’s important to have a theory of mind about why people think like this. As a non-bioethicist, my impression is that a lot of it has to do with the history of the field of bioethics itself, which emerged in response to the horrid abuses in medical research. One major overarching goal imbued in bioethics training, research, and writing is the prevention of medical abuse, which leads to small-c conservative views that tend to favor, wherever possible, protecting human subjects/patients, and to an aversion to calculations that sound like they might single out the groups that historically bore the brunt of such abuse.
Like, we’ve all heard of the Tuskegee Syphilis Experiment, but there were a lot more really awful things done in the last century, which have lasting effects to this day. At 1Day, we’re working on trying to bring about safe, efficient human challenge studies to realize a hepatitis C vaccine. We’ve made great progress and it looks like they will begin within the next year! But the last time people did viral hepatitis human challenge studies, they did them on mentally disabled children! Just heinously evil. So I will not be surprised if some on the ethics boards are quite skeptical at first when they review the proposed studies! (Note: this doesn’t mean that the current IRB system is optimal, or even anywhere near so; I view it sort of like zoning and building codes: good in theory, since I don’t want toxic waste dumps built near elementary schools, but the devil is in the details and in how protections are operationalized.)
All of which is to say: like others here, I very strongly disagree with many prevalent views in bioethics. But as I’ve interacted more and more with this field as an outsider, my opinions have evolved from “wow, bioethics/research ethics is populated exclusively with morons” to “this is mostly a bunch of reasonable people whose frames of reference are very different”. The latter view allows me to engage more productively to try to change some of the more problematic/wrongheaded views when they come up in my work, and it has let me learn a lot, too!
At 1Day, we’re working on trying to bring about safe, efficient human challenge studies to realize a hepatitis C vaccine. We’ve made great progress and it looks like they will begin within the next year! But the last time people did viral hepatitis human challenge studies, they did them on mentally disabled children! Just heinously evil.
Based on the article you linked, it sounds like the parents of the disabled children consented to the challenge trial, and they received something valuable in return (access to a facility they wouldn’t otherwise have access to). Is your objection to the use of disabled children in studies at all, or to the payment?
(The article also makes it seem like the conditions in the facility were poor, but that seems basically unrelated to the trial).
Yes, the studies should not have used disabled children at all, because disabled children cannot meaningfully provide consent and were not absolutely necessary to achieve the studies’ aims. They were simply the easiest targets: they could not understand what was being done to them and their parents were coercible through misleading information and promises of better care, which should have been provided regardless. (More generally, I do not believe proxy consent from guardians is acceptable for any research that involves deliberate harm and no prospect of net benefit to children.)
The conditions of the facility are also materially relevant. If it were true that the children would inevitably contract hepatitis, then deliberate infection would not have been truly necessary: researchers could simply have studied the infections that occurred naturally. More importantly, though, I am comfortable calling Krugman’s behavior evil because he spent 15 years running experiments at an institution that was managed with heinously little regard for its residents and evidently did not feel compelled to raise the issue with the public or authorities. Rather, he saw the immense suffering and neglect as perhaps unfortunate, but ultimately convenient leverage for acquiring test subjects.
Huh, I thought that most of the disagreement between people around these parts and bioethicists is in the direction of people around here being more pro-freedom for human subjects/patients. (Freedoms aren’t exactly the same as protections, but I interpret small-c conservative as being more about freedoms.)
Examples:
Right to sell my organs
Right to select my kids on the basis of non-medical features
Right to access unapproved treatments
Right to die if I am of sound mind and wish to do so
Right to sign up for arbitrary medical trials/studies, including being compensated and including potentially dangerous medical trials/studies. (Subject to sound mind constraints and maybe extortion constraints.)
Generally, I personally think that much more freedom in medicine would be better.
(In fact, a total free-for-all would plausibly be better than the status quo, I think, though I’m pretty uncertain.)
I agree that there is a disagreement about how utilitarian the medical system should be versus how much it should be guided by some more fairness-based principle.
However, if you go fully in the direction of individual liberties, government involvement in the medical system doesn’t matter much. E.g., in a simple system like:
Redistribute wealth as desired
People can buy whatever health care they want and sign up for whatever clinical trials they want with virtually no government regulation. (Clinical trials require actually informed consent.)
The state doesn’t need to make any tradeoffs in health care as it isn’t involved. Providers (e.g. hospitals) can do whatever they want with respect to prioritizing care, and they could in principle compete, etc.
(I’m not claiming that fully in the direction of individual liberties is the right move, e.g. it seems like people are often irrational about health care and hospitals often have monopolies which can cause issues.)
Sorry, that was ambiguous on my part. There’s a distinction between research ethics issues (how trials are run, etc.) and clinical ethics (medical aid in dying, accessing unapproved treatments, how to treat a patient with some complicated issue, etc.). My work focuses on the former, not the latter, so I can’t speak much to that. I meant “conservative” in the sense of hesitance to adjust existing norms or systems in research ethics oversight and, for example, a very strong default orientation towards any measures that reduce risk (or seem to reduce risk) for research participants.
Thank you for this point. I tend to agree that, at the very least, people should be more surprised if they think a position is obviously correct while also thinking a sizable portion of people studying it for a living disagree. I haven’t gotten around to reading the paper doing concrete comparisons with the general public, but I also stand by my older claim that how different these views are from those of the general public is exaggerated. I see no one in the comments, for instance, pointing out areas where they think bioethicists differ from the general public in a direction EAs tend to agree with more; I would guess from these results that they are unusually in favor of trading off human with non-human welfare, treating children without parental approval, and assisted euthanasia. Some of the cited areas where people dislike where bioethicists lean also seem like areas where they are simply closer to the general public than to us: I think if you ask an average person on the street about the permissibility of paying people for their organs, or IVF embryo selection, they will also lean substantially more bioconservative than EAs.
I have finally gotten around to reading the paper, and it looks like I was wrong about almost every cited example of public opinion. On euthanasia and non-human/human tradeoffs, bioethicists seem to have similar views to the public, and on organ donor compensation the general public seems to be considerably more aligned with the EA consensus than bioethicists are. The public view on IVF wasn’t discussed, and I would guess I am right about this (though considering the other results, not confidently). The only example I gave that seems more or less right is treatment of minors without parental approval. This paper updates me away from my previous views and towards “the general public is closer to EAs than bioethicists are on most of these issues”, with the caveat that bioethicists mostly seem either similar to the general public or to the left of them. I still agree with aspects of my broad points here, but my update is substantial enough, and my examples egregious enough, that I am unendorsing this comment.
But regardless of whether you understand the reasons, surely after seeing that a group of people holds substantially different views to your own, your all-things-considered belief should shift at least somewhat towards those views, even if your “independent impression” does not?
If someone says that 2+2=5, you should very slightly increase your credence that 2+2=5, and greatly increase your credence that they are either bad at maths or knowingly being misleading. It’s not soldier mindset to update your opinion of something after seeing data about it; this is data about bioethicists so it makes sense to update your opinion of them based on it.
In my case, I was surprised how bad the data made bioethicists look, because their positions were more inconsistent than I would have expected. When something happens that surprises you, you should update your beliefs.
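To make the toy example concrete, here is a minimal Bayes sketch in Python with made-up numbers (the prior and likelihoods below are purely illustrative assumptions, not anyone’s actual credences). It shows why hearing the assertion should move your credence in the claim only a little, while moving your credence about the speaker’s reliability a lot:

```python
# Toy Bayesian update after hearing someone assert a claim you think is
# almost certainly false (e.g. "2+2=5"). All numbers are made up for illustration.

P_TRUE = 1e-6      # prior that the claim is true
P_RELIABLE = 0.95  # prior that the speaker reasons reliably about this topic

# P(speaker asserts the claim | claim true?, speaker reliable?)
P_ASSERT = {
    (True, True): 0.9,    # reliable speakers assert true claims they consider
    (True, False): 0.5,
    (False, True): 0.01,  # reliable speakers rarely assert false claims
    (False, False): 0.5,  # unreliable speakers assert it about half the time
}

def prior(true, reliable):
    """Joint prior, treating truth and reliability as independent."""
    return (P_TRUE if true else 1 - P_TRUE) * (P_RELIABLE if reliable else 1 - P_RELIABLE)

# Joint probability of each state together with observing the assertion.
joint = {(t, r): prior(t, r) * P_ASSERT[(t, r)]
         for t in (True, False) for r in (True, False)}
p_assert = sum(joint.values())

p_true_given_assert = sum(p for (t, _), p in joint.items() if t) / p_assert
p_unreliable_given_assert = sum(p for (_, r), p in joint.items() if not r) / p_assert

print(f"P(claim true | assertion)         ~ {p_true_given_assert:.1e}")       # ~2.6e-05
print(f"P(speaker unreliable | assertion) ~ {p_unreliable_given_assert:.2f}")  # ~0.72, up from 0.05
```

Under these (obviously stylised) assumptions, credence in the claim rises by a factor of about 25 but stays negligible, while credence that the speaker is unreliable jumps from 5% to roughly 70%, which is exactly the asymmetry the analogy points at.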
Maybe some of these commenters have reasons for their vehemence and dismissiveness that I’m missing?
Yes, I have academic expertise in (and adjacent to) the field, and was sharing my academic opinion of the quality of reasoning that I’ve come across in published work defending one of the views in question (which I’m very familiar with because I published a paper on that very topic in the leading Bioethics journal).
surely after seeing that a group of people holds substantially different views to your own, your all-things-considered belief should shift at least somewhat towards those views
Not if they’re clearly mistaken. For example, when geologists come across young-earth creationists, or climate scientists come across denialists, they may well be able to identify that the other group is mistaken. If so, there is no rational pressure for clear thinkers to “shift” in the direction of those that they correctly recognize to be incompetent.
It really just comes down to the first-order question of whether or not we are correct to judge many bioethicists to be incompetent. It would be obviously question-begging for you to assume that we’re wrong about this, and downgrade your opinion of our epistemics on that basis. You need to look at the arguments and form a first-order judgment of your own. Otherwise you’re just confidence-policing, which is itself a form of epistemic vice.
If someone says that 2+2=5, you should very slightly increase your credence that 2+2=5, and greatly increase your credence that they are either bad at maths or knowingly being misleading. It’s not soldier mindset to update your opinion of something after seeing data about it; this is data about bioethicists so it makes sense to update your opinion of them based on it.
In my case, I was surprised how bad the data made bioethicists look, because their positions were more inconsistent than I would have expected. When something happens that surprises you, you should update your beliefs.
I agree with the basic point you’re making (I think), and I suspect either:
(1) we disagree about how much you should negatively update, i.e. how bad this data makes bioethicists look, or
(2) we don’t actually disagree and this is just due to language being messy (or me misinterpreting you).
That all seems fair / I agree.