2 and 3. "Self-consciousness" and "Is there something interesting here?"
These questions definitely resonate with me, and I imagine they'd resonate with most/all researchers.
I have a tendency to continually wonder if what I'm doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I'd make better decisions faster if I just actually pursued an idea more "confidently" for a bit, to get more info on whether it's worth pursuing, rather than just "wondering" about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more "confidently" for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into "just commit and focus mode" for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
Things that help me with this, and/or some scattered related thoughts, include:
Talking to others and getting feedback, including on early-stage ideas
I liked David and Jason's remarks on this in their comments
A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me—something like:
First getting verbal feedback from a couple people on a messy, verbal description of an idea
Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
Then posting publicly
(But only proceeding to the next step if evidence from the prior one—plus one's own intuitions—suggested this would be worthwhile)
Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
Reminding myself that I haven't really gathered any new info since the last time I thought "Should this really be what I spend my time on?", so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I'd endorse.
I might think to myself something like "If a friend was doing this, you'd think it's irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn't you do the same yourself?"
Remembering Algorithms to Live By drawing an analogy to a failure mode in which computers continually reprioritise tasks, and the reprioritisation takes up just enough processing power that no actual progress on any of the tasks occurs, and this can just cycle forever. The way to get out of this is to, at some point, just do tasks, even without having confidence that these "should" be top priority.
This is just my half-remembered version of that part of the book, and might be wrong somehow.
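A toy sketch of that thrashing failure mode (my own illustration, not code from the book, and the numbers are arbitrary): if re-sorting the priority list each tick costs as much as the tick itself, no task ever advances.

```python
# Toy model of scheduling thrash: each tick, the scheduler pays a
# replanning cost before doing any work on the top-priority task.

def run_scheduler(tasks, tick_budget, replan_cost, ticks):
    """Return total units of work completed across all ticks."""
    done = 0
    for _ in range(ticks):
        tasks = sorted(tasks, reverse=True)   # reprioritise every single tick
        work_budget = tick_budget - replan_cost
        if work_budget > 0 and tasks:
            done += work_budget               # progress on the top task
    return done

# Replanning eats the whole tick: zero progress, forever.
print(run_scheduler([3, 1, 2], tick_budget=10, replan_cost=10, ticks=100))  # 0
# Committing (cheap, rare replanning) leaves room for real work.
print(run_scheduler([3, 1, 2], tick_budget=10, replan_cost=1, ticks=100))   # 900
```

The point is only that reprioritisation overhead scales with how often you reprioritise, not with how much work gets done, so past some frequency it crowds out all progress.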
Remembering that I'd be deeply uncertain about the "actual" value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn't worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might've been important, there's a decent chance someone else would end up pursuing it if I don't. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
I initially wasn't confident about the importance of
Seemed like they should've been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn't important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn't sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they'd have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This—combined with my independent impression that these ideas might be somewhat important and novel—seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they'd probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn't matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
Explain to others why they shouldn't bother exploring the same thing
Make it easy for others to see if they disagreed with my reasoning for why this probably didn't matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn't see that as having been a bad decision ex ante, given that:
It seems plausible that, if not for my write-up, someone else would've eventually "wasted" time on a similar idea
This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
So maybe it's very roughly like I gave 60% predictions for each of 10 things, and decided that that'd mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
(I didn't actually make quantitative predictions)
And some of the other ideas were in between—no strong reason to believe they were important or that they weren't—so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
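The rough calibration arithmetic in that "60% predictions for 10 things" illustration can be made concrete (these are the comment's hypothetical numbers, since no quantitative predictions were actually made): six successes out of ten 60% bets is exactly the expected count, and is the single most likely outcome even under perfect calibration.

```python
from math import comb

p, n, hits = 0.6, 10, 6            # ten hypothetical 60% "bets", six came true

expected_hits = p * n              # expected number of successes if forecasts are right

# Binomial probability of exactly six hits under perfectly calibrated 60% forecasts:
prob_exact = comb(n, hits) * p**hits * (1 - p)**(n - hits)

print(expected_hits)               # 6.0
print(round(prob_exact, 3))        # 0.251
```

That said, ten bets is far too few to distinguish good calibration from luck, which fits the "very roughly" caveat in the comment.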
Yeah, I even mentioned this idea (about preventing someone from "wasting" time on a dead end you already explored) in a blog post a while back. :-D
Much of the bulk of the iceberg is research, which has the interesting property that negative results—if they are the result of a high-quality, sufficiently powered study—can often be useful. If the 100 EAs from the introduction (under 1.a.) are researchers who know that only one of the plausible ideas can be right, and 99 of the ideas have already been shown not to be useful, then the final EA researcher can eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it's a particularly strong investment.
It's also important to be transparent about one's rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end, but may only have looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
I think it's common for people to not publish explorations that turned out to seem to "not reveal anything important" (except of course that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
Again, there can be valid reasons for this (if you're sufficiently confident that it's worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Part of this reminds me a lot of CFAR's approach here (I can't quite tell whether Julia Galef is interviewer, interviewee, or both):
For example, when I've decided to take a calculated risk, knowing that I might well fail but that it's still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, "Don't worry! This is going to work!" so that I can be relaxed and motivated enough to push forward.
But instead, in those situations I like to use a framework CFAR sometimes calls "Worker-me versus CEO-me." I remind myself that CEO-me has thought carefully about this decision, and for now I'm in worker mode, with the goal of executing CEO-me's decision. Now is not the time to second-guess the CEO or worry about failure.
Your approach of gathering feedback and iterating on the output, refining it with every iteration while also deciding whether it's worth another iteration, sounds great!
I think a lot of people aim for such a process, or will want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They may worry the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it's too difficult or too irrelevant in the reviewer's opinion), or for a particular wording, or for failing to anticipate a negative effect of writing about the topic at all (e.g., some complex acausal trade or social dynamics thing that didn't occur to them), or they may just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
I like that "Worker-me versus CEO-me" framing, and hadn't heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it'll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it'd be hard (though not impossible) to generate advice on this that's quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
See also When to focus and when to re-evaluate.