Forum readers who are not frequently on Metaculus may be interested to know that there are a number of biases and internal-validity issues affecting long-term predictions on Metaculus, potentially more so than short-term questions there. For example, arguably the most important long-term question on Metaculus has comments like:
The optimal strategy on this question should be to assign the lowest possible probability. In this way, if humanity is not extinguished by 2100, as many points as possible will be awarded, while if it is extinguished, no one will be interested in the outcome. Note: this is a pseudo-humorous comment.
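To make the quoted incentive concrete, here is a toy sketch in Python. It uses a plain binary log score rather than Metaculus’s actual point formula (an assumption for illustration only, as is the 5% “honest belief”): if scores only ever matter in worlds where the predictor survives to collect them, the “no extinction” branch is all that counts, and reporting the lowest allowed probability dominates honest reporting.

```python
import math

# Toy illustration, NOT Metaculus's real scoring rule: a binary log score
# for a question asking "will humanity go extinct by 2100?".
def log_score(p_extinction: float, extinct: bool) -> float:
    return math.log(p_extinction if extinct else 1 - p_extinction)

TRUE_P = 0.05  # the predictor's honest belief (hypothetical number)

for p in (0.01, 0.05, 0.20):
    # Proper expected score: maximized by reporting the honest belief.
    honest_ev = TRUE_P * log_score(p, True) + (1 - TRUE_P) * log_score(p, False)
    # Survivorship-conditioned score: the "extinct" branch drops out,
    # so lower reported probabilities always look better.
    survivor_score = log_score(p, False)
    print(f"report p={p:.2f}  proper EV={honest_ev:.4f}  survivor-only={survivor_score:.4f}")
```

Under the proper expected score, reporting p = 0.05 (the honest belief) does best of the three; under the survivor-conditioned score, p = 0.01 does, which is exactly the “optimal strategy” the comment jokes about.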
I think a nonzero number of predictors take these comments quite seriously, or are otherwise fairly flippant about arriving at accurate answers to these long-term questions. Forum readers should therefore be extra careful before deferring blindly to Metaculus on such questions, and should lean more on other sources instead.
The strongest counterargument to my reasoning above might be something like “Metaculus is unusually public and quantitative as a platform. To the extent that Metaculus has visible errors, we may expect that other epistemic sources have other, potentially larger, invisible errors.” (Analogy: the concept of “not even wrong” in science.) I take this reasoning quite seriously but do not consider it overwhelming.
The reasoning in the comment you quoted is actually not very persuasive, because it’s virtually certain that the user will be dead by 2100, Metaculus won’t exist by then, or MIPs will have ceased to be valuable to them. Even the slightest concern for accuracy should trump the minuscule expected benefit from pursuing this alleged “optimal strategy”. (Though I guess some would derive great pleasure from being able to truly say “I predicted that humanity had a 99% chance of surviving the century 80 years ago and, lo and behold, here we are, alive and kicking!”).
Unfortunately, for questions with a shorter time horizon, that kind of argument may have some force. I feel ambivalent about discussing these issues: I’m not sure how to balance the benefit of alerting others to the potential biases in Metaculus against the cost of exacerbating those biases, either by drawing attention to this strategy among predictors who hadn’t considered it, or by creating the impression that other predictors are using it and thereby eroding the social norm of predicting honestly. I guess one can try to emphasize that, at least for questions whose answers have social value, adopting the MIP-maximizing strategy when it conflicts with accuracy should be seen as a form of defection, and that those who do it should feel bad about it.
it’s virtually certain that the user will be dead by 2100, Metaculus won’t exist by then, or MIPs will have ceased to be valuable to them
This is a good point that in retrospect seems obvious, and I’m a bit disappointed I hadn’t thought of it when I previously considered this issue or saw the comment Linch quoted. (That said, “virtually certain” maybe seems a bit strong to me.)
“virtually certain” maybe seems a bit strong to me
6% chance of Metaculus existing in 2100, from anthropic reasoning
1% chance of the user being alive in 2100, from eyeballing actuarial life tables
Given independence, that’s 6% × 1% ≈ 0.06%, and I’d say conditional on that combination of events obtaining, maybe a 15% chance the user cares (not caring includes not just a change in preferences but also a failure to fulfil the preconditions for caring, such as not remembering the prediction, being too senile to understand things, etc.). So something on the order of one in 10k.
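As a sanity check on the arithmetic, here is the same Fermi estimate spelled out (the three input probabilities are the commenter’s own guesses above, not established figures):

```python
p_metaculus_exists = 0.06  # Metaculus still around in 2100 (guess above)
p_user_alive       = 0.01  # user alive in 2100 (guess above)
p_user_cares       = 0.15  # user still cares, conditional on both (guess above)

p_joint = p_metaculus_exists * p_user_alive  # independence assumption
p_points_matter = p_joint * p_user_cares

print(f"{p_joint:.4%}")          # 0.0600%
print(f"{p_points_matter:.4%}")  # 0.0090%
print(f"1 in {1 / p_points_matter:,.0f}")  # 1 in 11,111
```

which matches the “order of one in 10k” conclusion.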
I think there’s a somewhat higher chance of the user being alive than that, because of the big correlated stuff that EAs care about.