Key point of this comment: It seems to me a mistake to think forecasting questions are usually useful only if it's feasible to influence whether the asked-about event happens. I think there are just many ways in which our actions can be improved by knowing more about the world's past, present, and likely future states.
As an analogy, I'd find a map more useful if it notes where booby traps are, even if I can't disable the traps (since I can sidestep them). Likewise, I'm better able to act in the world if I'm aware that China exists and that its GDP is larger than that of most countries (even though I can't really influence that), and a huge amount of learning is about things that happened previously and yet is still useful.
Your footnote nods to that idea, but treats it as if it were a special case.
For example, the first of those questions could be relevant to decisions like how much to invest in reducing Israel-Palestine tensions or the chance of a major nuclear weapons buildup by Israel or Iran.
I also think people influenced directly or indirectly by Metaculus could take actions with substantial leverage over major events, so I'd focus less on "large" and more on "neglectedness / crowdedness". E.g., EAs seem to be some of the biggest players on extreme AI risk, extreme biorisk, and possibly nuclear risk, which are all actually very large in terms of complexity and impact, but are sufficiently uncrowded that a big impact can still be made.
(Though I do of course agree that questions can differ hugely in decision-relevance, that considering who will be directly or indirectly influenced by the questions matters, and that those questions you highlighted are probably less impactful than, e.g., many AI risk or nuclear risk questions on Metaculus.)
In particular, I'd tend to think that changing decisions and influencing the forecasted event might be the main pathways to impact, but I could be wrong.
Maybe "Changing decisions" should be "Changing other decisions"? Since I think influencing the forecasted event occurs via influencing decisions about that event.