I think any estimate would have a confidence interval so wide that it would be useless. (I said “variance” before; maybe that’s a less well-known term.)
I am aware of what you mean by variance, but I don’t think this challenges my point: I dispute the idea that you can both say “we can’t make any useful estimate on the likelihood of success” and still claim “it’s worth funding (despite any opportunity costs and other potential drawbacks).”
As the rest of this comment gets into, even a really wide (initial/early-stage) confidence interval can be useful as long as the other variables involved are sufficiently large that you can credibly say “it seems very likely that the probability is at least X%, which is enough to make this very cost effective in expectation.”
(This line of reasoning is very pronounced in longtermism)
Curious where the crux of our disagreement is: Would you agree that some things that can’t be measured are still worth doing? And is your belief also that pushing the abundance agenda can’t possibly be more cost-effective than donations to AMF?
I think one crux/sticking point for me is: I believe that you could make a highly-simplistic but illustrative 3-variable plausibility model involving the following questions:
How much funding/resources should be devoted?
What is the probability of achieving outcome X if we devote that amount of resources?
How valuable is outcome X (e.g., in terms of QALYs)?
This is obviously oversimplified (the actual claims are better expressed as distributions rather than point estimates), but it requires you to explicate/stake claims like “even under conservative assumptions X, Y, and Z, the expected value of this intervention is still really large.” Relatedly, it allows you to establish breakeven points. Consider the following:
Let’s suppose you claim achieving some policy agenda outcome would produce somewhere between $1T and $10T of value.
Suppose you argue that spending $100M on some kind of movement/systemic change campaign would increase the likelihood of achieving that outcome by somewhere between 0.1% and 10%.
Those confidence intervals are rather large (the probability estimate spans two orders of magnitude), but even with such wide confidence intervals you can claim that a conservative estimate of the expected value is “at least $1B,” which is “at least a 10x return on investment.” And that’s a claim that I and others can at least dissect.
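To make the arithmetic above concrete, here is a minimal sketch using the hypothetical figures from this example (these are illustrative numbers, not real estimates of any actual campaign):

```python
# Hypothetical figures from the example above -- illustrative only.
cost = 100e6                          # $100M campaign cost
value_low, value_high = 1e12, 10e12   # $1T-$10T outcome value
p_low, p_high = 0.001, 0.10           # 0.1%-10% added probability of success

# Conservative expected value: take the low end of both intervals.
ev_conservative = value_low * p_low
roi_conservative = ev_conservative / cost

print(f"Conservative EV: ${ev_conservative / 1e9:.0f}B "
      f"({roi_conservative:.0f}x return on ${cost / 1e6:.0f}M)")
```

Even pairing the most pessimistic ends of both wide intervals, the sketch reproduces the “at least $1B, at least 10x” claim, which is exactly what makes the estimate dissectable.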
However, my concern/suspicion is that upon explicating these estimates, the “conservative estimate” of expected value will actually not look very large—and in fact I suspect that even my median estimate will probably be lower than that of global health and development charities.
Would you agree that some things that can’t be measured are still worth doing?
I would push back against the focus on the word “measured” here: “measured” is typically used to refer to estimates which are so objective, verifiable, and/or otherwise defensible that they get thought of as a special category of knowledge, like “we’ve empirically measured and verified that the average return on investment is X.”
I wholly agree that some things which can’t be “measured” are still worth doing, and measurements themselves are not infallible. The issue isn’t measurement; it’s estimation. Going back to the point I made at the beginning, the problem I see with your stance is that (based on my limited interaction here) you seem to be asserting both that no reliable estimates can be made and that your estimate finds the effort worthwhile. But I’m unclear on what your estimate is, and thus I can’t evaluate it.
Regarding “luck,” I will just redirect back to my claim about breakeven points and reference class estimations: does the reliance on “luck” (fortunate circumstances) set the overall likelihood of success at something like 1%? 0.1%?
What is the breakeven point? And does a quick review of the historical frequency of such “luck” produce an estimate which exceeds that conservative breakeven point?
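The breakeven logic is simple enough to sketch directly, reusing the hypothetical $100M/$1T figures from the earlier example (again, illustrative numbers only; the reference-class frequency below is made up purely to show the comparison):

```python
# Breakeven sketch, reusing the hypothetical figures from the example above.
cost = 100e6     # $100M campaign cost
value = 1e12     # $1T conservative outcome value

# The probability of success at which expected value merely equals cost.
breakeven_p = cost / value
print(f"Breakeven probability: {breakeven_p:.2%}")

# Reference-class check with made-up numbers: if comparable "lucky"
# policy wins occurred in, say, 3 of 200 historical campaigns, does
# that base rate exceed the breakeven point?
base_rate = 3 / 200
print(f"Base rate {base_rate:.2%} exceeds breakeven: {base_rate > breakeven_p}")
```

With these figures the campaign breaks even at a 0.01% chance of success, so even a heavily luck-dependent reference class could clear the bar — the point being that “it depends on luck” still cashes out as a number one can argue about.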