I do alignment research at the Alignment Research Center. Learn more about me at markxu.com/about
Mark Xu
How many EA forum posts will there be with greater than or equal to 10 karma submitted in August of 2020?
The Metaculus link is broken.
In what meaningful ways can forecasting questions be categorized?
This is really broad, but one possible categorization is questions that are best answered with inside-view predictions versus questions that are best answered with outside-view predictions.
How optimistic are you about "amplification" forecasting schemes, where forecasters answer questions like "will a panel of experts say <answer> when considering <question> in <n> years?"
When I look at most forecasting questions, they seem Goodharty in a very strong sense. For example, the Goodhart tower for COVID might look something like:
1. How hard should I quarantine?
2. How hard I should quarantine is affected by how “bad” COVID will be.
3. How "bad" COVID will be cashes out into something like "how many people", "when vaccine coming", "what is death rate", etc.
By the time something I care about becomes specific enough to be predictable/forecastable, it seems like most of the thing I actually cared about has been lost.
Do you have a sense of how questions can be better constructed to lose less of the thing that might have inspired the question?
This is an interesting strategic consideration! Thanks for writing it up.
Note that the probability of AsianTAI/AsianAwarenessNeeded depends on whether there is an AI risk hub in Asia. In the extreme, if you expect making aligned AI to take much longer than making unaligned AI, then making Asia concerned about AI risk might drive the probability of AsianTAI close to 0. Given how rough the model is, I don't think this matters much.