Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post a while back. :-D
Much of the iceberg is research, which has the interesting property that negative results – if they come from a high-quality, sufficiently powered study – can often be useful. Suppose the 100 EAs from the introduction (under 1.a.) are researchers who know that one of the plausible ideas has to be right, and 99 of those ideas have already been shown not to be useful. Then the final researcher can eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it’s a particularly strong investment.
It’s also important to be transparent about one’s rigor and to make negative results findable for others. The second point is obvious. The first matters because the dead end may not actually be a dead end – it may only have looked that way given the particular way in which you resolved the optimal stopping problem of whether to investigate it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
I think it’s common for people not to publish explorations that seemed to “not reveal anything important” (except, of course, that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to publicly indicate your belief that a direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could lead people to rule that direction out too strongly or too early.
Again, there can be valid reasons for this (if you’re sufficiently confident that it’s worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.