I agree with the sentiment that is epitomized in the section that Michael quoted. That said:
There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of betting on distributing bed nets, even though they had yet to see it actually work.
In 2004 they already had a large body of evidence to draw on to make the educated guess that if it has worked before, it will probably work again. And I’m using AMF as an analogy here. It’s common practice to test an intervention through RCTs and other trials and, if it works, to roll it out at large scale without any more trials (apart from some cheap proxy measures without a control group). It’s this experience that allows the incarcerated EAs to make educated guesses without further feedback loops.
AI risk, however, is novel and unusual in many ways, so there is little experience like that to inform any guesses, little experience that extrapolates to the field. We’re at the stage where J-PAL would come up with interventions and run RCTs on them to see if any of them have any positive effect, but we can’t do that.
But “little experience” was not meant as a facetious overstatement. There are some interventions where many people have somewhat more solidly positive priors, like awareness-raising among AI researchers.
So while I agree with Jeff that the extreme dearth of feedback loops in the field is a great handicap for any proposed intervention, I also agree with you that we should tend to that dying person first and then fix the tire.
I agree with this. It’s the right way to take this further: getting rid of leaky generalizations like “evidence is good, no evidence is bad,” and also pointing out what you pointed out: is the evidence still virtuous if it’s from the past and you’re reasoning from it? Confused questions like that are a sign that things have been oversimplified. I’ve thought more about the general issues behind this since I wrote it; I actually posted this on LW over two weeks ago. (I’ve been waiting for karma.) In the interim, I found an essay on Facebook by Eliezer Yudkowsky that gets to the core of why these are bad heuristics, among other things.