Sure, but I don’t think those are the only options.
Possible alternative option: come up with a granular theory of change; use that theory to inform decision-making.
I think this is basically what MIRI does. As far as I know, MIRI didn’t use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).
Instead, it used a chain of theoretical reasoning to arrive at the intervention it’s focusing on.
I’m not sure I understand the distinction you’re making. In what sense is this compatible with your contention that “Any model that includes far-future effects isn’t believable because these effects are very difficult to predict accurately”? Is this “chain of theoretical reasoning” a “model that includes far-future effects”?
We do have a fair amount of documentation regarding successful forecasters; see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (foxes rather than hedgehogs, to use Phil Tetlock’s terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to “the math of forecasting”.)
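For what it’s worth, the ensemble point can be made concrete with a toy sketch (every number below is invented for illustration): averaging several imperfect probability forecasts often scores better, by Brier score, than any single forecast in the ensemble.

```python
# Toy illustration (all numbers made up): averaging several imperfect
# probability forecasts can beat each individual forecast on Brier score.

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]  # what actually happened

# Three hypothetical "models" that err in different directions
model_a = [0.9, 0.4, 0.6, 0.5, 0.3]
model_b = [0.6, 0.1, 0.9, 0.8, 0.5]
model_c = [0.7, 0.5, 0.5, 0.9, 0.1]

# The ensemble is just the pointwise average of the three forecasts
ensemble = [sum(fs) / 3 for fs in zip(model_a, model_b, model_c)]

for name, fs in [("A", model_a), ("B", model_b), ("C", model_c), ("avg", ensemble)]:
    print(name, round(brier(fs, outcomes), 3))
```

Because the models’ errors partly cancel, the averaged forecast ends up with a lower Brier score than any of A, B, or C on these (made-up) data.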
I’m not sure I understand the distinction you’re making...
I’m trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome) and theoretical reasoning (often qualitative; it doesn’t arrive at a numerical cost-per-outcome, but instead at something like “...and so this thing is probably best”).
Perhaps all theoretical reasoning is just a kind of imprecise cost-effectiveness analysis, but I think the two actually use pretty different mental processes.
The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models...
Sure, but forecasters are working with pretty tight time horizons. I’ve never heard of a forecaster making predictions about what will happen 1000 years from now. (And even if one did, what could we make of such a prediction?)
My argument is that what we care about (the entire course of the future) extends far beyond what we can predict (the next few years, perhaps the next few decades).
Another way of saying it is “Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse.” That line is from http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/, which is relevant here.
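In the spirit of that post, here’s a minimal sketch of what “made-up statistics” decision-making can look like: two hypothetical interventions, each with invented cost and effect ranges, compared via a crude Monte Carlo. Every input is fabricated for illustration.

```python
# Sketch (all inputs invented): even wide, made-up uncertainty ranges can
# support a decision if one option dominates across most of the range.
import random

random.seed(0)

def median_cost_per_outcome(cost_low, cost_high, effect_low, effect_high, n=10_000):
    """Sample made-up uniform ranges for cost and effect; return median cost per outcome."""
    samples = sorted(
        random.uniform(cost_low, cost_high) / random.uniform(effect_low, effect_high)
        for _ in range(n)
    )
    return samples[n // 2]

# Hypothetical intervention A: $1k-$5k per campaign, helps 10-100 people
a = median_cost_per_outcome(1_000, 5_000, 10, 100)
# Hypothetical intervention B: $10k-$20k per campaign, helps 20-40 people
b = median_cost_per_outcome(10_000, 20_000, 20, 40)

print(a < b)  # despite huge uncertainty, A looks cheaper per person helped
```

The point isn’t that the numbers are right (they’re pulled out of thin air); it’s that making them explicit lets you see which comparisons are robust to the uncertainty and which aren’t.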