Sorry, this is going to be a “you’re doing it wrong” comment. I will try to criticize constructively!
There are too many arbitrary assumptions: your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can’t be some split between giving now and later, your failure to incorporate any interest rate into the calculations, and your assumption that the now/later decision can’t influence the scenarios’ probabilities. Any one of these could have a decisive influence on your conclusion.
But there’s also a problem with your calculation. Your conclusion is based on the fact that you expect higher utility to result from scenarios in which you believe giving now will be better. That’s not actually an argument for deciding to give now, as it doesn’t assess whether the world will be happier as a result of the giving decision. You would need to estimate the relative impact of giving now vs. giving later under each of those scenarios, and then weight the relative impacts by the probabilities of the scenarios.
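To make the distinction concrete, here’s a minimal sketch of the calculation I have in mind. Everything in it is invented for illustration: the scenario names, probabilities, and impact figures are placeholders, not values I’m endorsing.

```python
# Toy expected-impact comparison. All scenario names, probabilities,
# and impact figures below are invented purely for illustration.
scenarios = {
    # name: (probability, impact if give now, impact if give later)
    "fast progress": (0.3, 10.0, 4.0),
    "slow progress": (0.5, 5.0, 8.0),
    "stagnation":    (0.2, 2.0, 3.0),
}

# Weight each decision's impact by the scenario probabilities and
# compare the *decisions*, not the utilities of the scenarios.
ev_now = sum(p * now for p, now, _ in scenarios.values())
ev_later = sum(p * later for p, _, later in scenarios.values())

print(f"E[impact | give now]   = {ev_now:.2f}")    # 5.90
print(f"E[impact | give later] = {ev_later:.2f}")  # 5.80
```

Notice how close the two numbers come out here: even in a toy model, comparing the decisions can give a very different answer from comparing the utilities of the scenarios.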
Don’t stop trying to quantify things. But remember the pitfalls. In particular, simplicity is paramount. You want to have as few “weak links” in your model as possible; i.e. moving parts that are not supported by evidence and that have significant influence on your conclusion. If it’s just one or two numbers or assumptions that are arbitrary, then the model can help you understand the implications of your uncertainty about them, and you might also be able to draw some kind of conclusion after appropriate sensitivity testing. However, if it’s 10 or 20, then you’re probably going to be led astray by spurious results.
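For a rough sense of why 10 or 20 arbitrary inputs is so much worse than one or two (my own back-of-the-envelope illustration, not anything from your model): even if you test just three plausible values per input, the number of combinations a full sensitivity sweep would need to examine grows exponentially.

```python
# Back-of-the-envelope: a full sensitivity sweep testing 3 plausible
# values per arbitrary input needs 3**n combinations.
for n in (2, 5, 10, 20):
    print(f"{n:>2} arbitrary inputs -> {3 ** n:,} combinations")
#  2 arbitrary inputs -> 9 combinations
# 20 arbitrary inputs -> 3,486,784,401 combinations
```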
I basically agree with your critique, though I’d say my assumptions are more naïve than arbitrary (a mostly semantic distinction; the issues persist either way). On reflection, I don’t think I’ve arrived at any solid conclusions here, and this exercise’s main fruit is a renewed appreciation of how tangled these questions are.
I’m getting hung up on your last paragraph: “However, if it’s 10 or 20, then you’re probably going to be led astray by spurious results.”
This is pretty unsatisfying – thinking about the future is necessarily speculative, so people are going to have to use “arbitrary” inputs in their models for want of empirical data. If they only use a few arbitrary inputs, their models will likely be too simplistic to be meaningful. But if they use many arbitrary inputs, their models will give spurious results? It sort of feels like an impossible bind for the project of modeling the future.
Or maybe I’m misunderstanding your definition of “arbitrary” inputs, and there is another class of speculative input that we should be using for model building.
Sure. When I say “arbitrary”, I mean not based on evidence, or on any kind of robust reasoning. I think that’s the same as your conception of it.
The “conclusion” of your model is a recommendation between giving now and giving later, though I acknowledge that you don’t go so far as to actually make one.
To explain the problem with arbitrary inputs: when working with a model, I often think about how I would defend its conclusions against someone who wants to argue with me. If my model contains a number that I chose simply because it “felt” right, that person could quite reasonably suggest a different number be used. If they can pick some other reasonable number that produces different conclusions, then they have shown that my conclusions are not reliable. The key test for an arbitrary assumption is: will the conclusions change if I assume other values?
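Here’s a sketch of that flip test on a deliberately tiny toy model (the growth rate, horizon, and candidate discount rates are all invented; the point is the procedure, not the numbers): sweep the contested input across the range a critic might reasonably propose and see whether the recommendation changes.

```python
# Flip test for one arbitrary input. All parameter values are invented;
# the point is the procedure: does the conclusion survive other
# reasonable choices of the contested number?

def recommend(discount_rate: float) -> str:
    """Toy model: giving later wins if invested returns outpace discounting."""
    growth_rate = 0.05   # assumed annual growth of invested donations
    years = 10           # assumed waiting horizon
    impact_now = 1.0
    impact_later = ((1 + growth_rate) / (1 + discount_rate)) ** years
    return "later" if impact_later > impact_now else "now"

# Sweep discount rates a critic might plausibly defend.
print({rate: recommend(rate) for rate in (0.0, 0.02, 0.05, 0.08)})
# {0.0: 'later', 0.02: 'later', 0.05: 'now', 0.08: 'now'}
```

The recommendation flips within a perfectly defensible range of discount rates, which is exactly the signal that this input can’t be left arbitrary.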
Otherwise, arbitrary assumptions can be helpful if you want to conduct a hypothetical “if this, then that” analysis, to help understand a particular dynamic at play, like Bayesian probability. But this is really hard if you’ve made lots of arbitrary assumptions (say 10–20); it’s difficult to get any helpful insight from “if this and this and this and this and…, then that”.
So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said “prediction is difficult, especially about the future”? ;-) But models that aren’t sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.