This is my first venture at a comment on this forum. I have recently joined, and this was one of the recommended articles. Sorry my comment is many years after the fact.
I was not aware that EA was so “fixated” on RCTs. It seems a very limiting position to take, and somewhat inconsistent with the idea of doing the most good. Surely deciding that you’ll only invest in things that can be validated by RCTs is no different from saying you’ll only invest in things whose names start with a consonant? The criterion seems almost irrelevant to the potential value of the intervention. I’m sure I’m vastly oversimplifying the position, but I nonetheless wanted to comment on how to handle testability as a parameter.
In many areas of life (e.g. financial markets) there is an acceptance that testability / provability comes with a cost. A low-volatility stock costs more than a stock with the same expected value but higher volatility. People who are willing to accept more uncertainty are, on average, rewarded.
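To make that concrete, here is a minimal sketch (assuming log utility purely for illustration; the numbers are made up, not from any source): under a concave utility function, two gambles with the same expected value are not worth the same, and the gap is the premium demanded for bearing the volatility.

```python
import math

# Toy illustration of a risk premium: with concave (here, log) utility,
# a volatile payoff is worth less than a safe payoff with the same
# expected value. All numbers are invented for illustration.

def certainty_equivalent(outcomes, probs):
    """The guaranteed amount an agent would accept instead of the gamble."""
    expected_utility = sum(p * math.log(x) for x, p in zip(outcomes, probs))
    return math.exp(expected_utility)

safe = certainty_equivalent([100], [1.0])             # 100.0
risky = certainty_equivalent([50, 150], [0.5, 0.5])   # ~86.6

# Same expected value (100), but the risky gamble is worth ~13% less
# to this agent; that gap is the price of the extra volatility.
print(safe, risky)
```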
Is there a similar situation here? Insisting on RCTability is a very conservative approach: laudable if the goal is to prove (for our own satisfaction) that we’re adding value, but not necessarily the approach that does the most good, since it excludes some options simply because they are not testable.
In my career in business, this is a constant challenge. Successful companies certainly run RCTs when they can, but they are very careful to distinguish between ideas that RCTs show to be ineffective and ideas that are simply not RCTable. For the latter, it is normal to look at an analysis like the one in this article and use judgment, sometimes making decisions without RCT backing.
I am sure this could be quantified, much as the cost of volatility is quantified: one could reach a reasonable estimate of how much extra “expected value” would be needed to compensate for the lack of RCTability of that expected value.
It’s analogous to comparing high-risk / high-reward and low-risk / low-reward scenarios by their expected values, except that here even the expected value itself is highly uncertain, for reasons related not to risk but to testability.
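To sketch what quantifying this might look like (a toy model of my own, assuming simple Bayesian shrinkage; this is not an established EA methodology): treat a non-RCTable estimate as having much wider error bars, shrink it toward a sceptical prior, and see how much larger its headline expected value would have to be to compete.

```python
# Toy "RCTability discount": the less testable an estimate, the wider
# its error bars, and the more it gets shrunk toward a sceptical prior
# before interventions are compared. Parameters are illustrative only.

def shrunk_estimate(claimed_ev, estimate_sd, prior_mean=1.0, prior_sd=1.0):
    """Posterior mean for a normal prior and a normal estimate:
    a precision-weighted average of the prior and the claim."""
    w = (1 / estimate_sd**2) / (1 / estimate_sd**2 + 1 / prior_sd**2)
    return w * claimed_ev + (1 - w) * prior_mean

# RCT-backed intervention: tight error bars, little shrinkage.
print(shrunk_estimate(claimed_ev=5.0, estimate_sd=0.5))  # ~4.2

# Non-RCTable intervention, same headline figure: heavily shrunk.
print(shrunk_estimate(claimed_ev=5.0, estimate_sd=3.0))  # ~1.4
```

On this toy model, the untestable intervention would need a headline estimate several times larger to match the RCT-backed one after shrinkage; that multiple is one way of pricing the testability premium I have in mind.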
I will go and learn more about this, and probably realise that what I’m writing here is misguided. (feel free not to publish this comment if it doesn’t make sense :D—or otherwise, just delete this last part!)
You probably know this by now, but what the heck. I don’t think EA as a whole is RCT-only. GiveWell is, AFAIK, very randomista. But there are other EA-affiliated organizations that are not as randomista as GiveWell, notably Open Philanthropy and anything with a more x-risk or long-termist focus.
Thanks Michael!