I previously argued against EAs volunteering for challenge trials with pathogens without a known cure (like Zika) and with risk of long-term illness: https://forum.effectivealtruism.org/posts/kKidKRiCcZ5uGJ6w5/stop-thinking-about-ftx-think-about-getting-zika-instead?commentId=FTmfA8AWnQ7yta77P
I think HCV is a very different situation, and has a very acceptable risk profile given the available therapeutics.
Very interesting write-up, thank you for it.
As pointed out in an earlier comment, raising the compensation for challenge trials and/or seeking out participants with a lower ‘willing participation price’ seems promising as a way to get enough participants.
I would be interested to see an analysis on the “donation equivalent” of participation.
E.g., if it would cost $10k to pay a willing participant, and an EA were willing to do it for free for the social good, is this the “equivalent” of a $10k donation to an effective health cause? If not, approximately how much would it be worth? Putting a number on this would be interesting, and could help individuals decide whether to participate (by comparing it to their opportunity costs, etc.).
Heck, maybe if we had a number, individuals who track donations could even log challenge participation as a dollar amount towards their donation goal (e.g. for those who donate 10% of their incomes), though that’s probably a whole different conversation.
Thanks for reading!
The donation-equivalent aspect is pretty interesting. A study probably would not allow a participant to decline compensation, so in practice it might just be however much money from the study one chooses to donate to effective causes (minus taxes; trial income is usually treated as taxable income, which is probably bad policy). I might be misunderstanding your point, though.
I’ll reiterate (this probably should’ve been worded more clearly in the post): one of the arguments we make here is that, assuming all participants who make it into the study are about equally useful, we think EAs are more likely to be effective as pre-participants as well. This is because the study is still under consideration: there are decisions about the study’s design that may make it go faster, and informed advocacy from earnest pre-participants could be very persuasive to regulators and ethicists who might otherwise reject certain study design decisions on paternalistic grounds. The community and shared worldview of EA make us think EAs will, on average, be more engaged when it comes to voicing their views on study design.
This interactive model app based on the paper we mention in footnote 4 lets you tinker with a bunch of variables related to challenge model development and vaccine deployment. Based on that, and after a conversation with the lead author, we estimate about 200 years of life saved for every day sooner the model is developed. (The app isn’t that granular/to-the-day yet, but it is supposed to be updated soon.) So pushing for study decisions that condense things even by a month or two could be huge.
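As a rough back-of-the-envelope illustration (assuming the ~200 years/day figure scales roughly linearly over a month or two, which the real model may not):

$$200\ \tfrac{\text{years}}{\text{day}} \times 30\ \text{days} \approx 6{,}000\ \text{years}, \qquad 200\ \tfrac{\text{years}}{\text{day}} \times 60\ \text{days} \approx 12{,}000\ \text{years of life.}$$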
Executive summary: The post argues effective altruists should volunteer for upcoming hepatitis C human challenge studies, which could accelerate vaccine development and avert substantial disease burden.
Key points:
Hepatitis C challenge studies have strong expert support and could validate the challenge model for faster vaccine development.
Participation from effective altruists could help optimize study design for both speed and scientific rigor.
Successful hepatitis C vaccine development could demonstrate the viability of a permanent “Warp Speed 2.0” model.
Participation involves real health risks and burdens, such as temporary symptoms and ongoing liver monitoring, though the risk of death is extremely low.
Timeline acceleration could avert hundreds of thousands of infections and over 300,000 years of life lost.
Counterarguments include uncertainty around long-term benefits, potential reputational risks, and other causes possibly having higher impact.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
As for attempting to overcome coercion-based objections to higher compensation:
I wonder what the advantages and disadvantages of coupling increased compensation with financial exclusion criteria would be. If the potential volunteer is living paycheck-to-paycheck, or is deep in debt, the $20K offer might be an undue influence in a way it would not be for other potential volunteers. That sounds a bit paternalistic, but I’d submit that the source of the paternalism was the opponents of higher compensation in the first place.
Another coercion-reducing approach might be to put the extra compensation in a trust where it would be prudently invested but remain inaccessible to the participant for 10-30 years. The volunteer still gets the appropriate amount of compensation, but the fact that the compensation is significantly delayed makes coercion arguments significantly weaker to me. One can’t be coerced by present financial realities if the extra compensation won’t help address them. One potential complication is that the trust would need to be designed in a way that prevents assignment of the right to a payout to a third party (in exchange for immediate cash), but that is probably doable with the right pooled-trust design.
The extra “compensation” could instead be a donation by the study sponsor to the participant’s donor-advised fund (DAF), with the participant allowed to “recommend” (read: decide) which non-profit the money is regranted to. Note that many charitably-minded people (e.g., GWWC pledgers) could funge this allocation out by treating themselves as having earned the extra money and donated it to charity. In that case, they would end up with the same amount of money as with a direct payment.
And that may be a feature rather than a bug: this funging option is only open to people who were already giving a decent bit to charity. Given that trait, they are less likely to be doing the study primarily for financial gain. Moreover, if they really want more money, they have a much easier way to get it than being infected with Hep C: they can quickly reduce their charitable contributions instead.
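To make the funging arithmetic concrete, here is a sketch with a hypothetical $15,000 figure (not a number from any actual study budget):

$$\begin{aligned}
\text{Direct payment:}\quad & \text{participant } +\$15{,}000; \quad \text{total giving unchanged vs. their plan}\\
\text{Sponsor}\to\text{DAF:}\quad & \text{DAF grant } +\$15{,}000; \quad \text{own planned giving } -\$15{,}000; \quad \text{participant keeps } +\$15{,}000
\end{aligned}$$

Either way the participant ends up $15,000 ahead and total charitable giving is unchanged relative to their baseline plan, which is the sense in which the DAF route funges into the equivalent of a direct payment.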
Finally, the study sponsor could fund an insurance policy paying out a fairly large sum to anyone who is later diagnosed with liver cancer or other serious liver dysfunction. This would not be contingent on a showing of a causal connection between the Hep C infection and the liver problems, but (for moral-hazard reasons) it might need a reduction or exclusion if the balance of probabilities showed that alcohol abuse was a material contributing factor.
This is less likely to look objectionable because it looks and feels like an injury-compensation scheme, and because it is both distant and conjectural.
Assuming that the increased risk of liver problems from study participation is near-zero, this is actually a morbid lottery ticket. Instead of getting an extra $15,000 upfront as compensation, you get a 5% chance of getting $300,000 (adjusted for inflation / investment gains).
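For what it’s worth, the two options are equal in expectation under those illustrative numbers:

$$0.05 \times \$300{,}000 = \$15{,}000.$$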
Incidentally, it’s unfortunate that preliminary work on this can’t be done on chimpanzees anymore. Without delving into all the complexities of great-ape research ethics, I think research would generally be OK if the harm/risk is minimal and I could see volunteering myself as a test subject under similar circumstances to the chimpanzee (e.g., being young and having no significant medical conditions). Both of those are true here.
Why not just pay people more to do the trials or do them in parts of the world where people would do it for cheaper? The opportunity cost of EA time is really high relative to normies.
Part of our work has included pushing for higher compensation in general, both because we believe it can make recruitment easier (and faster) and because we think that pay should be more commensurate with the social value generated. I and a few other former human challenge volunteers wrote this paper, published in Clinical Infectious Diseases, calling for US$20,000 in compensation as a baseline. That’s far higher than the norm for challenge studies; the highest I’ve seen is under $8,000.
Re: why EAs specifically, we delve into that a bit in footnote 9. In short, the study is still at a stage where it can be modified to substantially increase the potential QALYs/DALYs saved. The voices of prospective participants could be very, very persuasive to researchers, regulators, and ethicists when they consider study design. Non-EAs are certainly capable of advocating for and supporting changes as well, but we think EAs are much more likely to a) grasp the case for certain changes and b) be willing to advocate for them.
No one should feel like they’re obligated to be in a study as an EA (or as a “normie,” though I dislike that dichotomy with EAs). There are certainly people for whom time is better spent elsewhere, EA or not. But not everyone on the forum necessarily works for an EA organization, and there are also certainly people who feel they’d have spare capacity and time that they’d like to commit to this sort of thing.
Link doesn’t work for me.
This isn’t something I would do for personal medical reasons, but if it were, I would be much more interested in assurance of appropriate compensation if something went wrong than in the amount of automatic compensation. For example, if I were to get really, really unlucky and die or become disabled from this somehow, I’d want to see most of my (public-sector law) salary covered at least until my son graduated college.
That could be tricky here; if I were to develop disabling liver trouble 10-15 years down the road, how would we know whether the Hep C infection had anything to do with it? Maybe the risks of downstream unmeasurable ill effects are low enough to ignore here, though. Alternatively, there could be a rebuttable (or even irrebuttable) presumption that if I develop significant liver trouble in the next X years, it was related to the Hep C infection. After all, presumably only those who were extremely unlikely to have liver trouble in the near to medium term were allowed to volunteer for this.
Whoops, link fixed (here it is again). That article is part of a dedicated supplement on HCV challenge/CHIM.
Speaking in my personal capacity, I agree — I’d love for insurance/that sort of compensation to be the norm. That does not happen enough in medical research, challenge or otherwise.
I can see why an insurance agency would be very wary. Establishing causation of cancer in general is hard. Even if someone were screened and in perfect liver health during the CHIM, that doesn’t mean they won’t later adopt common habits (e.g. smoking or excessive drinking) that are risk factors for liver cancer.
Relatedly, another article in Clinical Infectious Diseases reviewed liver cancer risks due to CHIM, concluding that “[a]lthough it is difficult to precisely estimate HCC risk from an HCV CHIM, the data suggest the risk to be very low or negligible.” This was based on analysis of three separate cohorts/datasets of people who had previously been infected with hepatitis C in other contexts. Still, the risk cannot be discounted entirely, and there are risks other than liver cancer that our FAQ document discusses, too.
Perhaps a workaround could be to establish some sort of trust that pays out to any former CHIM participant who develops liver cancer not obviously traceable to something like alcohol abuse disorder, and have this fund liquidate its assets after a certain number of decades. That would be very novel, expensive, and probably legally complicated, and I don’t think it’s been raised before.
I agree that increasing compensation to a happy price might be better than relying on altruism, and that selecting for altruistic individuals might mean selecting for those with high opportunity costs.
However, I don’t like the language or sentiment behind calling non-EAs “normies,” especially in a context like this. I think both the nomenclature and the blanket sentiment are bad epistemics, bad for the EA brand, and potential evidence of a problematic worldview.
Could you please elaborate?