I don’t have a fully-formed opinion here, but for now I’ll just note that the task the examined futurists are implicitly given is very different from assigning a probability distribution to a variable based on parameters. Rather, the implicit task is to say some things you think will happen, and then we judge whether those things happened. But I’m not sure how to translate the output of that task into action. (E.g., Asimov says X will happen, and so we should do Y.)
Agree that these are different; I think they aren’t different enough to come anywhere close to meaning that longtermism can’t be action-guiding though!
Would love to hear more from you when you’ve had a chance to form more of an opinion :)
Edit: also, it seems like one could mostly refute this objection just by finding cases where someone acted with the intention of affecting the future 10-20 years out (a horizon many people give some weight to for AGI timelines), and the action had the intended effect? Finding such cases seems trivial.