tl;dr In the last 6 months I started a forecasting org, got fairly depressed and decided it was best to step down indefinitely, and am now figuring out what to do next. I note some lessons I’m taking away and my future plans.
Sharing an update on my last 6 months that’s uncomfortably personal for me to want to share as more than a shortform for now, but I think is worth sharing somewhere on the Forum: Personal update: EA entrepreneurship, mental health, and what’s next
Hey Eli, just stumbled upon the post. Sorry that you had to go through bad times. Hope you got the chance to take at least a week off and that things are looking only up since then and from here on. <3 Was really nice to see you again in DC, btw.
Thanks Max! Was great seeing you as well. I did take some time off and was a bit more chill for a little while, blogging however much I felt like. I’ve been doing a lot better for the past 2 months.
Nice, that’s good to hear. :)
I wrote a draft outline on bottlenecks to more impactful crowd forecasting that I decided to share in its current form rather than clean up into a post [edited to add: I ended up revising it into a post here].
Link
Summary:
I have some intuition that crowd forecasting could be a useful tool for important decisions like cause prioritization but feel uncertain
I’m not aware of many example success stories of crowd forecasts impacting important decisions, so I define a simple framework for how crowd forecasts could be impactful:
Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making
Forecasting questions are written such that their forecasts will affect the important decisions of stakeholders
The forecasts are good + well-reasoned enough that they are actually useful and trustworthy for stakeholders
I discuss 3 bottlenecks to success stories and possible solutions:
Creating the important questions
Incentivizing time spent on important questions
Incentivizing forecasters to collaborate
I really enjoyed your outline, thank you! I have a few questions/notes:
[Bottlenecks] You suggest “Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making” as a crucial step in the “story” of crowd forecasting’s success (the “pathway to impact”?) --- this seems very true to me. But then you write “I doubt this is the main bottleneck right now but it may be in the future” (and don’t really return to this).
Could you explain your reasoning here? My intuition was that important decision-makers’ willingness (and institutional ability) to use forecasting info would be a major bottleneck. (You listed Rethink Priorities and Open Phil as examples of institutions that “seem excited about using crowd forecasts to inform important decisions,” but my understanding was that their behavior was the exception, not the rule.)
If, say, the CDC (or important people there, etc.) were interested in using Metaculus to inform their decision-making, do you think they would be unable to do so due to a lack of interest (among forecasters) and/or a lack of relevant forecasting questions? (But then, could they not suggest questions they felt were relevant to their decisions?) Or do you think that the quality of answers they would get (or the amount of faith they would be able to put into those answers) wouldn’t be sufficient?
[Separate, minor confusion] You say: “Forecasts are impactful to the extent that they affect important decisions,” and then you suggest examples a-d (“from an EA perspective”) that range from career decisions or what seem like personal donation choices to widely applicable questions like “Should AI alignment researchers be preparing more for a world with shorter or longer timelines?” and “What actions should we recommend the US government take to minimize pandemic risk?” This makes me confused about the space (or range) of decisions and decision-makers that you are considering here.
Are you viewing group forecasting initiatives as a solution to personal life choices? (Or is the “I” in a/b a very generalized “I” somehow?) (Or even
I’d guess that an EA perspective on the possible impact of crowd forecasting should focus on decision-makers with large impacts whether or not they are EA-aligned (e.g. governmental institutions), but I may be very wrong.
[Side note] I loved the section “Idea for question creation process: double crux creation,” and in general the number of possible solutions that you list, and really hope that people try these out or study them more. (I also think you identify other really important bottlenecks).
Please note that I have no real relevant background (and am neither a forecast stakeholder nor a proper forecaster).
Hi Lizka, thanks for your feedback! I think it touched on some of the sections that I’m most unsure about / that could most use revision, which is great.
I’ll say up front it’s possible I’m just wrong about the importance of the bottleneck here, and I think it also interacts with the other bottlenecks in a tricky way. E.g. if there were a clearer pipeline for creating important questions which get very high quality crowd forecasts which then affect decisions, more organizations would be interested.
That being said, my intuition that this is not the bottleneck comes from some personal experiences I’ve had with forecasts solicited by orgs that already are interested in using crowd forecasts to inform decision making. Speaking from the perspective of a forecaster, I personally wouldn’t have trusted the forecasts produced as an input into important decisions.
Some examples: [Disclaimer: These are my personal impressions. Creating impactful questions and incentivizing forecaster effort is really hard, and I respect OP/RP/Metaculus a lot for giving it a shot, and would love to be proven wrong about the impact of current initiatives like these]
The Open Philanthropy/Metaculus Forecasting AI Progress Tournament is the most well-funded initiative I know of [ETA: potentially besides those contracting Good Judgment superforecasters], but my best guess is that the forecasts resulting from it will not be impactful. An example is the “deep learning” longest-time-horizon round, where despite Metaculus’ best efforts most questions have few or no comments, and at least to me it felt like the bulk of the forecasting skill was forming a continuous distribution from trend extrapolation (a toy sketch of what I mean is below, after these examples). See also this question, where the community failed to update appropriately on record-breaking scores. Also note that each question attracted only 25-35 forecasters.
I feel less sure about this, but RP’s animal welfare questions authored by Neil Dullaghan seem to have the majority of comments on them by Neil himself. I feel intuitively skeptical that most of the 25-45 forecasters per question are doing more than skimming and making minor adjustments to the current community forecast, and this feels like an area where getting up to speed on domain knowledge is important for accurate forecasts.
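To illustrate what I mean by forming a continuous distribution from trend extrapolation in the first example, here is a toy sketch; the benchmark scores, years, and linear fit are made up for illustration and aren’t taken from the actual tournament questions:

```python
# Toy version of the "trend extrapolation" forecasts I have in mind:
# fit a trend to past benchmark scores, extrapolate to the resolution date,
# and turn the residual spread into a rough continuous distribution.
import numpy as np

# Hypothetical state-of-the-art scores on some benchmark by year (made up).
years = np.array([2017, 2018, 2019, 2020, 2021])
scores = np.array([62.0, 68.5, 73.0, 78.5, 82.0])

# Linear fit, then extrapolate to a (hypothetical) resolution year.
slope, intercept = np.polyfit(years, scores, 1)
resolution_year = 2023
central = slope * resolution_year + intercept

# Use the residual spread as a crude width for a normal-ish distribution
# around the central estimate (ignores uncertainty in the trend itself).
residuals = scores - (slope * years + intercept)
spread = residuals.std(ddof=2)

print(f"central estimate: {central:.1f}, rough spread: {spread:.1f}")
```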
So my argument is: given that AFAIK we haven’t had consistent success using crowd forecasts to help institutions make important decisions, the main bottleneck seems to be helping the interested institutions rather than getting more institutions interested.
[Caveat: I don’t feel too qualified to opine on this point since I’m not a stakeholder nor have I interviewed any, but I’ll give my best guess.]
I think for the CDC example:
Creating impactful questions seems relatively easier here than in e.g. the AI safety domain, though it still may be non-trivial to identify and operationalize cruxes for which predictions would actually lead to different decisions.
I’d expect the forecasts to be, on average, a bit better than CDC models / domain experts, and perhaps substantially better on tail risks. I don’t think we have a lot of evidence here; we have some from Metaculus tournaments, but with a small sample size.
I think with better incentives to allocate more forecaster effort to this project, it’s possible the forecasts could be much better.
Overall, I’d expect somewhat decent forecasts on good but not great questions, and I think that this isn’t really enough to move the needle, so to speak. I also think there would need to be reasoning given behind the forecasts for stakeholders to understand, and trust in crowd forecasts would need to be built up over time.
Part of the reason it seems tricky to have impactful forecasts is that often there are competing people/“camps” with different world models, and a person the crowd forecast disagrees with may be reluctant to change their mind unless (a) the question is well targeted at cruxes of the disagreement and (b) they have built up trust in the forecasters and their reasoning process. To the extent this is true within the CDC, it seems harder for forecasting questions to be impactful.
On your second point, about the range of decisions and decision-makers I’m considering: yeah, I think this is basically right, and I will edit the draft.
And on your side note: I hope so too, appreciate it!
Fwiw, I expect to very often see forecasts as an input into important decisions, but I also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is the following (a rough sketch is below the list):
try to gather up a bunch of disparate crappy inputs with different weaknesses
try to figure out how much weight to give each
see how much that converges on a single coherent picture and if so what picture
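To make that a bit more concrete, here is a toy sketch of the kind of aggregation I have in mind for a yes/no question; the input names, probabilities, and weights are all made up, and in practice my weighing is far more informal than this:

```python
# Rough sketch: combine several "crappy" probability estimates by weighting
# them in log-odds space, with weights reflecting how much I trust each input.
import math

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def inv_log_odds(l: float) -> float:
    return 1 / (1 + math.exp(-l))

# Hypothetical inputs for some yes/no question: (probability, trust weight).
inputs = {
    "crowd forecast": (0.30, 0.40),
    "domain expert": (0.55, 0.35),
    "my own model": (0.20, 0.25),
}

weighted = sum(w * log_odds(p) for p, w in inputs.values())
total_w = sum(w for _, w in inputs.values())
combined = inv_log_odds(weighted / total_w)

# A large spread across inputs is itself a signal that the picture hasn't
# converged and the combined number should be held loosely.
spread = max(p for p, _ in inputs.values()) - min(p for p, _ in inputs.values())
print(f"combined estimate: {combined:.2f}, spread across inputs: {spread:.2f}")
```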
(See also consilience.)
(I really appreciated your draft outline and left a bunch of comments there. Just jumping in here with one small point.)
I liked this document quite a bit, and I think it would be a reasonable Forum post even without further cleanup — you could basically copy over this Shortform, minus the bit about not cleaning it up. This lets the post be tagged, be visible to more people, etc. (Though I understand if you’d rather leave it in a less-trafficked area.)
Appreciate the compliment. I am interested in making it a Forum post, but might want to do some more editing/cleanup or writing over the next few weeks/months (it got more interest than I was expecting, so it seems more likely to be worth it now). Might also post as is, will think about it more soon.
The efforts by https://1daysooner.org/ to use human challenge trials to speed up vaccine development make me think about the potential of advocacy for “human challenge”-type experiments in other domains where consequentialists might conclude there hasn’t been enough “ethically questionable” randomized experimentation on humans. Two examples come to mind:
My impression of the nutrition field is that it’s very hard to get causal evidence because people won’t change their diet at random for an experiment.
Why We Sleep has been a very influential book, but the sleep science research it draws upon is usually observational and/or relies on short time-spans. Alexey Guzey’s critique and self-experiment have both cast doubt on its conclusions to some extent.
Getting 1,000 people to sign up and randomly contracting 500 of them to do X for a year, where X is something like being vegan or sleeping for 6.5 hours per day, could be valuable.
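As a rough sanity check on that sample size, here is a toy power calculation; it assumes a simple two-arm comparison on a continuous outcome at 80% power and alpha = 0.05, uses the statsmodels library, and ignores attrition and non-compliance (which would push the numbers up):

```python
# Toy power check for the proposed design: 1,000 volunteers, 500 per arm,
# comparing a continuous outcome (e.g. a health or productivity measure).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect size (Cohen's d) detectable with 500 per arm.
detectable_d = analysis.solve_power(nobs1=500, alpha=0.05, power=0.8)
print(f"detectable effect size with 500 per arm: d ~= {detectable_d:.2f}")

# Per-arm sample size needed to detect a small effect (d = 0.1), closer to
# what many diet or sleep effects might plausibly look like.
n_small = analysis.solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"per-arm n needed for d = 0.1: ~{n_small:.0f}")
```

If the true effects are small, the required samples grow quickly, so which interventions get tested (and how big their plausible effects are) matters a lot here.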
Challenge trials face resistance for very valid historical reasons—this podcast has a good summary. https://80000hours.org/podcast/episodes/marc-lipsitch-winning-or-losing-against-covid19-and-epidemiology/
EDIT (Jul 2022): I’m no longer nearly as confident in this idea, though if someone was excited about it it still might be cool.
Reflecting a little on my shortform from a few years ago, I think I wasn’t ambitious enough in trying to actually move this forward.
I want there to be an org that does “human challenge”-style RCTs across lots of important questions that are extremely hard to get at otherwise, e.g. (the top 2 are repeated from the previous shortform; edited to clarify: these are some quick examples off the top of my head, and there should be more consideration of which are best for this org):
Health effects of veganism
Health effects of restricting sleep
Productivity of remote vs. in-person work
Productivity effects of blocking out focused/deep work
Edited to add: I no longer think “human challenge” is really the best way to refer to this idea (see comment that convinced me); I mean to say something like “large scale RCTs of important things on volunteers who sign up on an app to randomly try or not try an intervention.” I’m open to suggestions on succinct ways to refer to this.
I’d be very excited about such an org existing. I think it could even grow to become an effective megaproject, pending further analysis on how much it could increase wisdom relative to power. But, I don’t think it’s a good personal fit for me to found given my current interests and skills.
However, I think I could plausibly provide some useful advice/help to anyone who is interested in founding a many-domain human-challenge org. If you are interested in founding such an org or know someone who might be and want my advice, let me know. (I will also be linking this shortform to some people who might be able to help set this up.)
--
Some further inspiration I’m drawing on to be excited about this org:
Freakonomics’ RCT on measuring the effects of big life changes like quitting your job or breaking up with your partner. This makes me optimistic about the feasibility of getting lots of people to sign up.
Holden’s note on doing these types of experiments with digital people. He mentions some difficulties with running these types of RCTs today, but I think an org specializing in them could help. (edited to add: in particular, a mobile/web app for matching experiments to volunteers and tracking effects seems like it should be created)
Yeah these are interesting questions Eli. I’ve worked on a few big RCTs and they’re really hard and expensive to do. It’s also really hard to adequately power experiments for small effect sizes in noisy environments (e.g., productivity of remote/in-person work). Your suggestions to massively scale up those interventions and to do things online would make things easier. As Ozzie mentioned, the health ones require such long and slow feedback loops that I think they might not be better than well (statistically) controlled alternatives. I used to think RCTs were the only way to get definitive causal data. The problem is, because of biases that can be almost impossible to eliminate (https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool), RCTs are seldom perfect causal data. Conversely, with good adjustment for confounding, observational data can provide very strong causal evidence (think smoking; I recommend my PhD students do this course for this reason: https://www.coursera.org/learn/crash-course-in-causality). For the ones with fast feedback loops, I think some combination of “priors + best available evidence + lightweight tests in my own life” works pretty well to see if I should adopt something.
At a meta-level, in an ideal world the NSF and NIH (and global equivalents) are probably designed to fund people to address the questions that are most important and have the highest potential. There are probably dietetics/sleep/organisational psychology experts who have dedicated their careers to questions #1-4 above, and you’d hope that those people are getting funded if those questions are indeed critical to answer. In reality, science funding probably does not get distributed based on criteria that maximise impartial welfare, so maybe that’s why #1-4 would get missed. As mentioned in a recent forum post, I think the mega-org could be better focused on nudging scientific incentives toward those questions rather than working on those questions ourselves: https://forum.effectivealtruism.org/posts/JbddnNZHgySgj8qxj/improving-science-influencing-the-direction-of-research-and
Really appreciate hearing your perspective!
On causal evidence of RCTs vs. observational data: I’m intuitively skeptical of this, but the sources you linked seem interesting and worthwhile to think about more before setting up an org for this. (Edited to add:) Hearing your view already substantially updates mine, but I’d be really curious to hear more perspectives from others with lots of experience working on this type of stuff, to see if they’d agree; then I’d update more. If you have impressions of how much consensus there is on this question, that would be valuable too.
On nudging scientific incentives to focus on important questions rather than working on them ourselves: this seems pretty reasonable to me. I think building an app to do this still seems plausibly very valuable, and I’m not sure how much I trust others to do it, but maybe we combine the ideas: build the app, then nudge other scientists to use it to run important studies.
I should clarify: RCTs are obviously generally >> even a very well-controlled, propensity-score-matched quasi-experiment, but I just don’t think the former is ‘bulletproof’ anymore. The former should update your priors more, but if you look at the variability among studies in meta-analyses, even among low-risk-of-bias RCTs, I’m now much less easily swayed by any single one.
I think the obvious answer is that doing controlled trials in these areas is a whole lot of work/expense for the benefit.
Some things like health effects can take a long time to play out; maybe 10-50 years. And I wouldn’t expect the difference to be particularly amazing. (I’d be surprised if the average person could increase their productivity by more than ~20% with any of those)
On “challenge trials”; I imagine the big question is how difficult it would be to convince people to accept a very different lifestyle for a long time. I’m not sure if it’s called “challenge trial” in this case.
It wouldn’t shock me if an average vegan diet decreased lifetime productivity by more than 20% via a malnutrition → mental health link.
I think our main disagreement is around the likely effect sizes; e.g. I think blocking out focused work could easily have an effect size of >50% (but I’m pretty uncertain, which is why I want the trial!). I agree about long-term effects being a concern, particularly depending on one’s TAI timelines.
Yeah, I’m most excited about challenges that last more like a few months to a year, though this isn’t ideal in all domains (e.g. veganism), so maybe this wasn’t best as the top example. I have no strong views on terminology.
The health interventions seem very different to me than the productivity interventions.
The health interventions have issues with long time-scales, which productivity interventions don’t have as much.
However, productivity interventions have major challenges with generality. When I’ve looked into studies around productivity interventions, often they’re done in highly constrained environments, or environments very different from mine, and I have very little clue what to really make of them. If the results are highly promising, I’m particularly skeptical, so it would take multiple strong studies to make the case.
I think it’s really telling that Google and Amazon don’t have internal testing teams to study productivity/management techniques in isolation. In practice, I just don’t think you learn that much, for the cost of it.
What these companies do do is allow different managers to try things out, survey them, and promote the seemingly best practices throughout. This happens very quickly. I’m sure we could make tools to make this process go much faster (better elicitation, better data collection of what already happens, lots of small estimates of impact to see what to focus more on, etc.).
In general, I think traditional scientific experimentation on humans is very inefficient, and we should be aiming for much more efficient setups. (But we should be working on these!)
This post is relevant: https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental
This all makes sense to me overall. I’m still excited about this idea (slightly less so than before), but I agree there should be careful consideration of which interventions make the most sense to test.
A few things come to mind here:
The point about how much evidence Google/Amazon not doing this provides feels related to the discussion around our corporate prediction market analysis. Note that I was the author who probably weighted the evidence that most corporations discontinued their prediction markets the least (see my conclusion), though I still think it’s fairly substantial.
I also agree with the point in your reply that setting up prediction markets and learning from them has positive externalities, and a similar thing should apply here.
I agree that more data collection tools for what already happens and other innovations in that vein seem good as well!
A variant I’d also be excited about (possibly even more so; it could go either way after more reflection), which could be contained within the same org or a separate one: the same thing but for companies (particularly startups). Edit to clarify: test policies/strategies across companies, not on people within companies.
Votes/considerations on why this is a good or bad idea are also appreciated!