There is a corporate motto: “10% of decisions need to be right. 90% of decisions just need to be taken!” It resonates perfectly with this post.
To put this in an EA context—if you’re unsure which of two initiatives to work on, that probably means that (to the best of your available knowledge) they are likely to have similar impacts. So, in the grand scheme of things, it probably doesn’t matter which you choose. But the time you spend deciding is time that you are NOT dedicating to either of the initiatives.
This is a good rule of thumb, but you need to be wary of exceptions. There are those 10% of cases where your decision matters a lot. In my case, as a chemical engineer, decisions about safety would typically be in that 10%. In an EA context, maybe it’s decisions where you really are not sure whether a particular initiative might be doing more harm than good that fall into this 10%.
How do you decide whether you are ready to take a decision?

- Does any of the options have potentially very bad consequences? Not just wasted time, but actual harm, major investments wasted, or the like.
- How much of a difference is there likely to be depending on which decision you take?
- What new information are you likely to get (and when) which could help you make a better decision?
- Put the pros and cons on a sheet of paper and discuss them with a friend or colleague. Often, this exercise alone, even before you discuss, will enable you to make a decision.
This is my first venture at a comment on this forum. I have recently joined, and this was one of the recommended articles. Sorry my comment is many years after the fact.
I was not aware that EA was so “fixated” on RCTs. It seems a very limiting position to take, and somewhat inconsistent with the idea of doing the most good. Surely deciding that you’ll only invest in things that can be validated by RCTs is no different from saying you’ll only invest in things which start with a consonant? The criterion seems almost irrelevant to the potential value of the intervention. I’m sure I’m vastly oversimplifying the position, but nonetheless I wanted to comment on how to handle testability as a parameter.
In many areas of life (e.g. financial markets) there is an acceptance that testability / provability comes with a cost. A low-volatility stock costs more than a stock with the same expected value but higher volatility. People who are willing to accept more uncertainty are, on average, rewarded.
Is there a similar situation here? Insisting on RCTability is a very conservative approach: laudable if the goal is to prove (for our own satisfaction) that we’re adding value, but not necessarily the option which does the most good, because some options have been excluded simply because they are not testable.
In my career in business, this is a constant challenge. Successful companies certainly do RCTs when they can, but they are very careful to distinguish between ideas which RCTs show to be inefficient and ideas which are simply not RCTable. For the latter, it is normal to look at an analysis like the one in this article and use judgment to sometimes take non-RCTed decisions.
I am sure this is something that could be quantified, much in the way that the cost of volatility is quantified; that one could reach a reasonable position on how much extra “expected value” would be necessary to compensate for the lack of RCTability of that expected value.
It’s kind of analogous to the way you compare high-risk / high-reward vs. low-risk / low-reward scenarios using expected value, except that here even the expected value is very uncertain, for reasons related not to risk but to testability.
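For concreteness, here is a toy sketch of what such a quantification might look like, borrowing the mean-variance idea from finance. The function name and the penalty coefficient are my own illustrative inventions, not an established EA method; the numbers are made up purely to show the mechanics.

```python
# Toy sketch: discount an intervention's expected value by a penalty that
# grows with how uncertain (how un-testable) that estimate is. The penalty
# coefficient plays the role of a risk-aversion parameter in mean-variance
# finance; its value here is an arbitrary illustrative choice.

def adjusted_value(expected_value, uncertainty_sd, penalty=0.5):
    """Certainty-equivalent-style score: expected value minus a
    penalty proportional to the variance of the estimate."""
    return expected_value - penalty * uncertainty_sd ** 2

# An RCT-validated intervention: modest value, tight estimate.
rct_backed = adjusted_value(expected_value=10.0, uncertainty_sd=1.0)

# A non-RCTable intervention: higher expected value, much wider estimate.
speculative = adjusted_value(expected_value=14.0, uncertainty_sd=3.0)

print(rct_backed)   # 9.5
print(speculative)  # 9.5 -> here the extra 4 units of EV exactly offset
                    #        the wider uncertainty, so the two score equally
```

On this toy model, the speculative option would need more than 4 extra units of expected value to beat the RCT-backed one; that gap is the “testability premium” the paragraph above gestures at.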
I will go and learn more about this, and probably realise that what I’m writing here is misguided. (feel free not to publish this comment if it doesn’t make sense :D—or otherwise, just delete this last part!)