I basically agree with your critique, though I’d say my assumptions are more naïve than arbitrary (a mostly semantic distinction; the issues persist either way). On reflection, I don’t think I’ve arrived at any solid conclusions here, and this exercise’s main fruit is a renewed appreciation of how tangled these questions are.
I’m getting hung up on your last paragraph: “However, if it’s 10 or 20, then you’re probably going to be led astray by spurious results.”
This is pretty unsatisfying – thinking about the future is necessarily speculative, so people are going to have to use “arbitrary” inputs in their models for want of empirical data. If they only use a few arbitrary inputs, their models will likely be too simplistic to be meaningful. But if they use many arbitrary inputs, their models will give spurious results? It sort of feels like an impossible bind for the project of modeling the future.
Or maybe I’m misunderstanding your definition of “arbitrary” inputs, and there is another class of speculative input that we should be using for model building.
Sure. When I say “arbitrary”, I mean not based on evidence, or on any kind of robust reasoning. I think that’s the same as your conception of it.
The “conclusion” of your model is a recommendation between giving now and giving later, though I acknowledge that you don’t go as far as to actually make one.
To explain the problem with arbitrary inputs: when working with a model, I often think about how I would defend its conclusions against someone who wants to argue against me. If my model contains a number that I chose simply because it “felt” right, that person could quite reasonably suggest a different number be used. If they can pick some other reasonable number that produces different conclusions, they have shown that my conclusions are not reliable. So the key test for arbitrary assumptions is: do the conclusions change if I assume other values?
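To make that test concrete, here’s a minimal sketch of the kind of sensitivity check I mean. Everything in it is a hypothetical stand-in – the toy give-now-vs-give-later comparison, the return and cost-effectiveness numbers – not your actual model or inputs:

```python
# Hypothetical sensitivity check: does the give-now-vs-give-later conclusion
# survive if an "arbitrary" input is nudged to other reasonable values?
# The model and all numbers here are illustrative stand-ins.

def value_of_giving_later(investment_return, charity_cost_growth, years=10):
    """Relative value of investing for `years` then donating,
    vs. donating 1 unit now to a charity whose cost-effectiveness
    grows at `charity_cost_growth` per year."""
    donate_later = (1 + investment_return) ** years
    donate_now = (1 + charity_cost_growth) ** years
    return donate_later / donate_now

# Suppose 0.07 was the number that "felt right" for investment returns.
# A critic could reasonably propose 0.03 or 0.10 instead.
for r in (0.03, 0.05, 0.07, 0.10):
    ratio = value_of_giving_later(r, charity_cost_growth=0.05)
    verdict = "give later" if ratio > 1 else "give now"
    print(f"return={r:.2f}  ratio={ratio:.2f}  ->  {verdict}")
# If the verdict flips across reasonable values of r (as it does here),
# the conclusion isn't robust to that arbitrary assumption.
```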
Otherwise, arbitrary assumptions might be helpful if you want to conduct a hypothetical “if this, then that” analysis to help understand a particular dynamic at play, like Bayesian probability. But this is really hard if you’ve made lots of arbitrary assumptions (say 10–20); it’s difficult to get any helpful insights from “if this and this and this and this and …, then that”.
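As a rough illustration of why the count of arbitrary assumptions matters (purely a back-of-the-envelope tally, not anything drawn from your model): even if each assumption has only three defensible values, the number of “if this and this and …” scenarios explodes:

```python
# Back-of-the-envelope: if each arbitrary assumption has just 3
# "reasonable" candidate values, how many scenario combinations
# would an "if this, then that" analysis have to cover?
for n_assumptions in (2, 5, 10, 20):
    scenarios = 3 ** n_assumptions
    print(f"{n_assumptions:>2} assumptions -> {scenarios:,} scenarios")
# With 10 assumptions that's ~59,000 combinations; with 20 it's over
# 3 billion - far too many to reason about one conditional at a time.
```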
So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said “prediction is difficult, especially about the future”? ;-) But models that aren’t sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.