The reasoning in the comment you quoted is actually not very persuasive, because it’s virtually certain that the user will be dead by 2100, Metaculus won’t exist by then, or MIPs will have ceased to be valuable to them. Even the slightest concern for accuracy should trump the minuscule expected benefit from pursuing this alleged “optimal strategy”. (Though I guess some would derive great pleasure from being able to truly say “I predicted that humanity had a 99% chance of surviving the century 80 years ago and, lo and behold, here we are, alive and kicking!”).
Unfortunately, for questions with a shorter time horizon, that kind of argument may have some force. I feel ambivalent about discussing these issues, since I’m not sure how to balance the benefit of alerting others to the potential biases in Metaculus against the cost of exacerbating those biases, either by drawing attention to this strategy among predictors who hadn’t considered it, or by creating the impression that other predictors are using it and thereby eroding the social norm to predict honestly. I guess one can try to emphasize that, at least with questions whose answers have social value, adopting the MIP-maximizing strategy when it is in conflict with accuracy should be seen as a form of defection and those who do it should feel bad about it.
it’s virtually certain that the user will be dead by 2100, Metaculus won’t exist by then, or MIPs will have ceased to be valuable to them
This is a good point that in retrospect seems obvious, and I’m a bit disappointed I hadn’t thought of it when I previously considered this issue or saw the comment Linch quoted. (That said, “virtually certain” maybe seems a bit strong to me.)
“virtually certain” maybe seems a bit strong to me
6% chance of Metaculus existing in 2100, from anthropic reasoning
1% chance of user alive in 2100, from eyeballing actuarial life tables
Given independence, that’s ~0.06%, and I’d say conditional on that combination of events obtaining, maybe 15% chance the user cares (not caring includes not just a change in preferences but also a failure to fulfil the preconditions for caring, such as not remembering the prediction, being too senile to understand things, etc). So something on the order of one in 10k.
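To make the arithmetic explicit, here is a minimal sketch of the expected-payoff calculation, using the estimates above and assuming the first two events are independent (as the comment does):

```python
# Estimates from the comment above (all are rough, eyeballed figures).
p_metaculus_2100 = 0.06   # chance Metaculus still exists in 2100
p_alive_2100 = 0.01       # chance the user is alive in 2100
p_cares = 0.15            # chance the user still cares, given both hold

# Assuming independence of the first two events:
p_both = p_metaculus_2100 * p_alive_2100   # 0.0006, i.e. ~0.06%
p_payoff = p_both * p_cares                # ~0.00009, roughly 1 in 11,000

print(f"P(both events) = {p_both:.4%}")
print(f"P(payoff matters) = {p_payoff:.5%} (~1 in {round(1 / p_payoff):,})")
```

So the combined probability lands around one in eleven thousand, consistent with the “order of one in 10k” figure.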
I think there’s a somewhat higher chance of the user being alive than that, because of the big correlated stuff that EAs care about.