Great, I think we’ve gotten to the crux. I agree we have a much worse understanding in the AGI case, but I think we easily have enough understanding to assign positive probabilities, and likely substantial ones. I agree more detailed models are ideal, but in some cases they’re impractical and you have to do the best you can with the evidence you have. Also, this is a matter of degree rather than a binary, and I think people often take explicit models too literally/seriously and don’t account enough for model uncertainty, e.g. putting too much faith in oversimplified economic models, or underestimating how much explicit climate models might be missing in terms of tail risks or unknown unknowns.
I’d be extremely curious to get your take on why AGI forecasting is so different from the long-term speculative forecasts in the piece Nuno linked above, many of which turned out to be true.
I don’t have a fully-formed opinion here, but for now I’ll just note that the task the examined futurists are implicitly given is very different from assigning a probability distribution to a variable based on parameters. Rather, the implicit task is to say some things you think will happen, and then we judge whether those things happened. But I’m not sure how to translate the output of that task into action. (E.g., how do we get from “Asimov says X will happen” to “so we should do Y”?)
Agree that these are different; I think they aren’t different enough to come anywhere close to meaning that longtermism can’t be action-guiding, though!
Would love to hear more from you when you’ve had a chance to form more of an opinion :)
Edit: also, it seems like one could mostly refute this objection by just finding times when someone did something with the intention of affecting the future in 10–20 years (a timeframe many people give some weight to for AGI timelines), and the action had the intended effect? Finding such examples seems trivially easy.