I think we could probably invest a lot more time and resources in interventions that are plausibly good, in order to get more evidence about them. We should probably do more research, although I realise this point is somewhat self-serving. For larger donors, this probably means diversifying their giving more, provided the value of information diminishes steeply enough, which I think might be the case.
Psychologically, I think we should be a bit more resilient to failure and change. When people consider the idea that they might be giving to cause areas that could turn out to be completely fruitless, I think they find it psychologically difficult. In some ways, if you worry about these things, it can be quite comforting just to think: “Look, I’m just exploring this to get information about how good it is, and if it’s bad, I’ll change. Or if it doesn’t do as well as I thought, I’ll change.”
The extreme view you could take is “We should just start investing time and money in interventions with high expected value, but little or no evidential support.” A more modest proposal, which I tentatively endorse, is “We should start explicitly including the value of information in our assessments of causes and interventions, rather than treating it as an afterthought to concrete value.” In my experience, information value can swamp concrete value; and if that is the case, it really shouldn’t be an afterthought. Instead, it should be one of the primary drivers of value in your calculation, not something tacked on at the end.
Do you have a sense of how this argument relates to Amanda Askell’s argument for the importance of value of information?
Amanda is talking about the philosophical principle, whereas I’m talking about the algorithm that roughly satisfies it. The principle is that a non-myopic Bayesian will take into account not just the immediate payoff, but also the information value of an action. The algorithm, upper confidence bound (UCB), efficiently approximates this behaviour. The fact that UCB is optimistic (about its impact) suggests that we might want to behave similarly, in order to capture the information value. (“Information value of an action” and “exploration value” are synonymous here.)
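For anyone curious what that looks like concretely, here is a minimal sketch of the classic UCB1 rule in Python. The function names and the toy three-arm setup are my own illustration, not something from the conversation: each option’s score is its empirical mean plus an optimism bonus that shrinks as the option is sampled more, so exploration falls out of the scoring rule itself.

```python
import math
import random

def ucb1(pull, n_arms, horizon, c=2.0):
    """UCB1: pick the arm with the highest empirical mean plus an
    optimism bonus; under-sampled arms get a larger bonus, so the
    rule explores without a separate exploration step."""
    counts = [0] * n_arms   # times each arm has been pulled
    means = [0.0] * n_arms  # running mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # pull each arm once to initialise
        else:
            # score = empirical mean + sqrt(c * ln(t) / n_i)
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
    return means, counts

# Toy usage: three "interventions" with unknown success probabilities.
if __name__ == "__main__":
    true_p = [0.2, 0.5, 0.6]
    means, counts = ucb1(lambda i: 1.0 if random.random() < true_p[i] else 0.0,
                         n_arms=3, horizon=10_000)
    print(counts)  # the best arm should end up with most of the pulls
```

The point of the sketch is the optimism: the bonus term makes the algorithm act as if uncertain options might be better than their current evidence suggests, which is exactly the behaviour being recommended above.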
Thanks!