My name is Jake. I’m the former comms director at 1Day Sooner.
jeeebz
Personally, I find the acronym frustrating because of how foreign all of it is to me, based on my own experience as a (fairly new — less than two years) EA in the DC area. I like to think I have an okay read on the community here, and the behaviors and beliefs described by “TESCREALism” just do not seem to map reliably onto how people I know actually think and behave, which has led me to believe that Torres’ criticisms are mostly bad faith or strawmen. I admittedly don’t interact very much with AI safety or what I sort of nebulously consider to be the “San Francisco faction” of EA (faction probably being too strong a word), so maybe all of y’all over there are just a bunch of weirdos (kidding (like 90%))!
Sorry, that was ambiguous on my part. There’s a differentiation between research ethics issues (how trials are run, etc.) and clinical ethics (medical aid in dying, accessing unapproved treatments, how to treat a patient with X complicated issue, etc.). My work focuses on the former, not the latter, so I can’t speak much to that. I meant “conservative” in the sense of hesitance to adjust existing norms or systems in research ethics oversight and, for example, a very strong default orientation towards any measures that reduce risk (or seem to reduce risk) for research participants.
Yes, the studies should not have used disabled children at all, because disabled children cannot meaningfully provide consent and were not absolutely necessary to achieve the studies’ aims. They were simply the easiest targets: they could not understand what was being done to them and their parents were coercible through misleading information and promises of better care, which should have been provided regardless. (More generally, I do not believe proxy consent from guardians is acceptable for any research that involves deliberate harm and no prospect of net benefit to children.)
The conditions of the facility are also materially relevant. If it were true that children would inevitably contract hepatitis, then a human challenge would not be truly necessary. More importantly, though, I am comfortable calling Krugman’s behavior evil because he spent 15 years running experiments at an institution that was managed with heinously little regard for its residents and evidently did not feel compelled to raise the issue with the public or authorities. Rather, he saw the immense suffering and neglect as perhaps unfortunate, but ultimately convenient leverage to acquire test subjects.
I strongly agree with this comment. I think it’s important to have a theory of mind of why people think like this. As a non-bioethicist, my impression is a lot of it has to do with the history of the field of bioethics itself, which emerged in response to the horrid abuses in medical research. One major overarching goal that is imbued in bioethics training, research, and writing is prevention of medical abuse, which leads to small-c conservative views that tend to favor, wherever possible, protection of human subjects/patients and an aversion to calculations that sound like they might single out the groups that historically bore the brunt of such abuse.
Like, we’ve all heard of the Tuskegee Syphilis Experiment, but there were a lot more really awful things done in the last century, which have lasting effects to this day. At 1Day, we’re working on trying to bring about safe, efficient human challenge studies to realize a hepatitis C vaccine. We’ve made great progress and it looks like they will begin within the next year! But the last time people did viral hepatitis human challenge studies, they did them on mentally disabled children! Just heinously evil. So I will not be surprised if some on the ethics boards are quite skeptical at first when they review the proposed studies! (Note: this doesn’t mean that the current IRB system is optimal, or even anywhere near so; I view it sort of like zoning and building codes: good in theory — I don’t want toxic waste dumps built near elementary schools — but the devil is in the details and how protections are operationalized.)

All of which is to say: like others here, I very strongly disagree with many prevalent views in bioethics. But as I’ve interacted more and more with this field as an outsider, my opinions have evolved from “wow, bioethics/research ethics is populated exclusively with morons” to “this is mostly a bunch of reasonable people whose frames of reference are very different”. The latter view allows me to engage more productively to try to change some of the more problematic/wrongheaded views when it comes up in my work and has let me learn a lot, too!
As someone who is not a bioethicist but interacts with many through work (though certainly not as many as Leah), I think that this position for many likely derives from a general opposition to treating people differently based on their intrinsic characteristics. In other words, if I know it’s bad to be ageist, I might interpret the thought experiment that nudges someone to save a younger life as ageist (I’ve heard this argument from one person in bioethics before, but, y’know, n=1) and reject the premise of the question. So for that subset of bioethicists it may not be a serious argument in favor of the proposition but rather a strong preference against making moral judgments involving people that touch upon their intrinsic characteristics.
Chiming in to note a tangentially related experience that somewhat lowered my opinion of IHME/GBD, though I’m not a health economist or anything. I interacted with several analysts after requesting information related to IHME’s estimates for global hepatitis C burden (which differed substantially from the WHO’s). After a meeting and some emails promising to follow up, we were ghosted. I have heard from one other organization that they’ve had a really hard time getting similar information out of IHME as well. This may be more of an organizational/operational problem rather than a methodological one, but it wasn’t very confidence-inspiring.
I don’t think so, no, in part because I don’t think that there’s a linear relationship between hypothesized market size and the likelihood of a product being developed. A breast cancer vaccine could be worth like a hundred billion dollars, but of course, there are real scientific obstacles there. Maybe we can get something like a universal breast cancer vaccine some day, but in the meantime, it seems rather absurd to argue that chemotherapy is net harmful because it suppresses the need for vaccine development.
Weight loss represents an enormous industry in the US. This has been true for decades. (Without devoting a lot of time to research, I found this figure cited in an FTC report based on research by the Atlanta Business Chronicle: “consumers spent an estimated $34.7 billion in 2000 on weight-loss products and programs.”) But development of obesity drugs has been extremely difficult — historically, “a bottomless pit into which people shove money and time,” according to one journalist. In other words, there’s far more than market size at play.

That a relatively small number of kidney donors somehow suppress (tens of?) billions of dollars worth of value does not seem plausible to me, and moreover, I still don’t think that extra hypothetical market size is likely to substantially influence whether artificial organ transplants are developed faster.
The “active harm by donating” argument is very unconvincing for me. Specifically, the analogy to blood donation does not strike me as adequate. It’s just not true that “the existence of altruistic blood donors means that ruthless capitalists are not going to invest in creating artificial blood” — a quick Google search shows that there have been numerous startups that have sought to do so and are seeking to do so. That very poorly attested $7.6 billion number is huge, especially considering that number is just for US sales! That’s about the value of US sales of Ozempic generated in 2022 — a drug that has resulted in such enormous flows of money to its Danish producers that it’s impacting the country’s monetary policy.
So that analogy really does not hold, and I think it doesn’t hold in the same ways it does not hold for kidney donation — even with altruistic kidney donors, thousands die every year waiting for a kidney, and there is enormous demand for organs in general (again, a casual Google search for “artificial organ market” suggests it’d be in the tens of billions of dollars per year). A single person donating their kidney could save someone’s life; it is very unclear how that marginal donation and life saved sets back artificial organ research.
Whoops, link fixed (here it is again). That article is part of a dedicated supplement to HCV challenge/CHIM.
Speaking in my personal capacity, I agree — I’d love for insurance/that sort of compensation to be the norm. That does not happen enough in medical research, challenge or otherwise.
I can see why an insurance agency would be very wary. Establishing causation of cancer in general is hard. Even if someone were screened and in perfect liver health during the CHIM, that doesn’t mean they won’t later adopt common habits (e.g. smoking or excessive drinking) that are risk factors for liver cancer.
Relatedly, another article in Clinical Infectious Diseases reviewed liver cancer risks due to CHIM, concluding that “[a]lthough it is difficult to precisely estimate HCC risk from an HCV CHIM, the data suggest the risk to be very low or negligible.” This was based on analysis of three separate cohorts/datasets of people who had previously been infected with hepatitis C in other contexts. Still, the risk cannot be discounted entirely, and there are risks other than liver cancer that our FAQ document discusses, too.
Perhaps a workaround could be to establish some sort of trust that pays out to any former CHIM participant who develops liver cancer not obviously traceable to something like alcohol abuse disorder, and have this fund liquidate its assets after a certain number of decades. That would be very novel, expensive, and probably legally complicated, and I don’t think it’s been raised before.
Thanks for reading!
The donation equivalent aspect is pretty interesting. A study probably would not allow a participant to decline payment, so in practice it might just be however much money from the study one chooses to donate to effective causes (minus taxes; trial income is usually treated as taxable income, which is probably bad policy). I might be misunderstanding your point, though.
I’ll reiterate (this probably should’ve been worded more clearly in the post): one of the arguments we make here is that, assuming all participants who make it into the study are about equally useful, we think EAs are more likely to be effective as pre-participants as well. This is because the study is still under consideration: there are decisions about the study’s design that may make it go faster, and informed advocacy from earnest pre-participants could be very persuasive for regulators and ethicists who might otherwise reject certain study design decisions on paternalistic grounds. The community and shared worldview of EA makes us think EAs will, on average, be more engaged when it comes to voicing their views on study design.
This interactive model app based on the paper we mention in footnote 4 lets you tinker with a bunch of variables related to challenge model development and vaccine deployment. Based on that, and after a conversation with the lead author, we get about 200 years of life saved for every day sooner the model is developed. (The app isn’t that granular/to the day yet but it is supposed to be updated soon.) So pushing for study decisions that condense things even by a month or two could be huge.
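To make the stakes concrete, here’s a trivial back-of-envelope sketch using that ~200 life-years-per-day figure. (The linear scaling is my own simplifying assumption for illustration; the actual model in the app is more nuanced.)

```python
# Rough arithmetic only — assumes the ~200 life-years/day figure
# scales linearly, which the real model may not.
LIFE_YEARS_PER_DAY_SOONER = 200  # approximate figure cited above

def life_years_saved(days_sooner: int) -> int:
    """Linear estimate of life-years saved if the challenge model
    is developed `days_sooner` days earlier."""
    return days_sooner * LIFE_YEARS_PER_DAY_SOONER

# A study-design change that shaves off one or two months:
print(life_years_saved(30))  # 6000
print(life_years_saved(60))  # 12000
```

Even under much more pessimistic assumptions about the per-day figure, the order of magnitude is what makes advocacy on study design seem worthwhile.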
Part of our work has included pushing for higher compensation in general, both because we believe it can make recruitment easier (and faster) but also because we think that pay should be more commensurate with the social value generated. I and a few other former human challenge volunteers wrote this paper published in Clinical Infectious Diseases calling for US$20,000 in compensation as a baseline. That’s far higher than the norm for challenge studies; the highest I’ve seen is under $8,000.
Re: Why EAs specifically, we delve into that a bit in footnote 9. In short, the study is still in a stage where it can be modified to substantially increase potential QALYs/DALYs saved. The voices of prospective participants could be very, very persuasive to researchers, regulators, and ethicists when considering study design. Non-EAs are certainly capable of advocating and supporting changes as well, but we think EAs are much more likely to a) grasp the case for certain changes and b) be willing to advocate for them.
No one should feel like they’re obligated to be in a study as an EA (or as a “normie,” though I dislike that dichotomy with EAs). There are certainly people for whom time is better spent elsewhere, EA or not. But not everyone on the forum necessarily works for an EA organization, and there are also certainly people who feel they’d have spare capacity and time that they’d like to commit to this sort of thing.
Why 1Day Sooner Needs EAs to Sign Up for Hep C Challenge Studies
For the Boyz: Zika Challenge in DC/Baltimore
I agree with this! People get filtered out of the studies for reasons completely beyond their control, even if they really want to join. You just can’t help it if your white blood cell count is a tad too low or you have a slight fever the day of study admission.
Shoutout to the 130-ish people in the UK who volunteered to be infected with malaria in two separate studies at various stages of the R21 development process! Those studies helped identify Matrix-M as the ideal adjuvant, and also provided insight into the optimal dose/vaccination schedule.
I feel motivated as a former due diligence/investigative research guy to expand briefly on where my frustration came from. I think it’s hard to overstate how stunning a failure of due diligence this was in the first round.
Due diligence for corporate work involves much more than Googling, but, like, the first step is often just Googling. When you Google Nya Dagbladet, the Swedish Wikipedia page pops up. (The English one did not exist last year.)
Skimming the page as it existed circa fall 2022 through Google Translate should have immediately raised several red flags, even for people not familiar with Swedish politics. These flags obviously would be taken with a grain of salt, because it’s Wikipedia, but it stuns me that they were ignored at first. These immediately apparent flags include:
The links to the far-right party Nationaldemokraterna/National Democrats
The use of at least one columnist noted for antisemitic conspiracy theories (this guy)
The “ethnopluralist” label
Irresponsible and misleading reporting related to vaccines (though this flag was added to the page after the letter of intent was signed, so presumably it was not visible at the time)
Some of those flags don’t immediately check out — e.g., the ethnopluralist label is cited to the paper’s about page, but is not specifically there (nor was it there in archived versions of the website). But unless we assume the Wikipedia page is a straight-up hit job — which is unlikely, and would be ruled out by checking even a few of the references — then proper due diligence research would have started with a very, very heavy level of scrutiny.
But it sounds like what happened is they merely checked the Nya Dagbladet website and proposal and didn’t see anything suspicious (again, a due diligence failure, but the website is not quite as blatant at first glance as, say, Breitbart News), and wrote off the evidence of far-right ties and views because “quality of public discourse worldwide has degraded so badly” such that you can’t be sure.
The baseline Wikipedia + sources check took about twenty minutes to do, including typing this up here. Strong due diligence work is really important. I get that they ultimately did not give the grant, but to me, it’s very disturbing that it even made it past the first half-hour sniff test.
If you’re not already aware of the University of Chicago’s Scav, I’d highly recommend poaching some ideas from them if you ever need inspiration. (E.g., Item 10 from 2021: “A collection of baseball cards for members of the Los Angeles Biblically Accurate Angels baseball team” or Item 262, 2015, “a series of cartoons [drawn] on at least 30 tissues such that when they are rapidly pulled out of a tissue box, they create an animation.”)
It’s great that you know the results. While relatively minor in the grand scheme of things, it’s frustrating that trials, at least here in the US, often don’t share results with participants, even though it’s theoretically as simple as a mass email along the lines of “here’s what we learned” — presumably an email they’re already sending to colleagues, funders, etc., in some form. I had to ask the people running the Shigella trial for my data (not available yet, but I really wanna see if I got the placebo or not)!
To what extent are the legal restrictions on psychedelics also obstacles to running trials with them in major pharmaceutical R&D countries like the US?