Much of the bulk of the iceberg is research, which has the interesting property that negative results (if they are the result of a high-quality, sufficiently powered study) can often be useful. If the 100 EAs from the introduction (under 1.a.) are researchers who know that one of the plausible ideas has to be right, and 99 of those ideas have already been shown not to be useful, then the final EA researcher can already eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it's a particularly strong investment.
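To make that arithmetic concrete, here is a minimal sketch, assuming (purely for illustration) that each idea takes roughly the same amount of effort to investigate:

```python
# Toy illustration only: 100 equally plausible ideas, 99 of which already
# have published negative results, so only one remains to be investigated.

TOTAL_IDEAS = 100
IDEAS_ALREADY_SHOWN_NOT_USEFUL = 99  # published negative results

ideas_left_to_test = TOTAL_IDEAS - IDEAS_ALREADY_SHOWN_NOT_USEFUL
fraction_of_work_avoided = IDEAS_ALREADY_SHOWN_NOT_USEFUL / TOTAL_IDEAS

print(ideas_left_to_test)                 # 1
print(f"{fraction_of_work_avoided:.0%}")  # 99%
```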
It's also important to be transparent about one's rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end, but may only have looked that way given the particular way in which you resolved the optimal stopping problem of whether to investigate it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both inside and outside of EA).
To sort-of restate your points:
I think it's common for people not to publish explorations that turned out to seem to "not reveal anything important" (except, of course, the finding that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to provide some sort of public indication of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could lead people to rule the direction out too much or too early.
Again, there can be valid reasons for this (if you're sufficiently confident that it's worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
I initially wasn't confident about the importance of
Seemed like they should've been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn't important or (b) has in essence been discussed in some other form or place that I'm just not familiar with.
So when I had the initial forms of these ideas and wasn't sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they'd have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This, combined with my independent impression that these ideas might be somewhat important and novel, seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they'd probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared the results slightly more widely. That got more positive feedback, so I invested the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed that that one probably wasn't very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that the idea probably didn't matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful, either to:
Explain to others why they shouldn't bother exploring the same thing
Make it easy for others to see whether they disagree with my reasoning for why this probably didn't matter, since I might be wrong about that and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt, in hindsight, silly or like a mistake. But I think I probably shouldn't see that as having been a bad decision ex ante, given that:
It seems plausible that, if not for my write-up, someone else would've eventually "wasted" time on a similar idea
This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
So maybe it's very roughly like I gave 60% predictions for each of 10 things, decided that that'd mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things (see the rough sketch after this list)
(I didn't actually make quantitative predictions)
And some of the other ideas were in between (no strong reason to believe they were important or that they weren't), so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
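Here is the rough sketch referred to above of what such a check could look like. The 60% estimate, the 10 ideas, and the 6 "hits" are all purely illustrative numbers, since (as noted) I didn't actually make quantitative predictions:

```python
# Illustrative only: made-up numbers showing the kind of back-of-the-envelope
# calibration check gestured at above, not real predictions or results.

from math import comb

P_WORTHWHILE = 0.6   # assumed probability that any one idea pans out
N_IDEAS = 10         # assumed number of ideas bet on
OBSERVED_HITS = 6    # ideas that, in hindsight, seemed worth the time

# Expected number of ideas that pan out if the 60% estimates were accurate.
expected_hits = P_WORTHWHILE * N_IDEAS  # 6.0

# Probability of exactly this many hits under those estimates -- a crude
# plausibility check on the estimates, not a rigorous calibration test.
prob_exactly_observed = (
    comb(N_IDEAS, OBSERVED_HITS)
    * P_WORTHWHILE ** OBSERVED_HITS
    * (1 - P_WORTHWHILE) ** (N_IDEAS - OBSERVED_HITS)
)

print(expected_hits)                    # 6.0
print(round(prob_exactly_observed, 3))  # 0.251
```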
Yeah, I even mentioned this idea (about preventing someone from "wasting" time on a dead end you already explored) in a blog post a while back. :-D