Do you have any plans to follow this up with versions of the model that relax/change some assumptions, or that try to capture things like attractor states or the future likely being far "larger" (i.e., containing more sentient beings, at least in the absence of an extinction event)?
Of course, Tarsney's paper already provides nice models for interventions focused on reaching/avoiding attractor states. But I wonder if it could be fruitful to adapt the model structure here to account for such interventions, and see whether this generates different implications or intuitions than Tarsney's model does.
Somewhat relatedly, do you know what the results would be if we model noise as increasing sublinearly (rather than linearly or superlinearly), in a situation where the signal doesn't suggest the longtermist intervention has benefits for an infinite length of time?
Obviously this would increase the relative value of the longtermist intervention compared to the neartermist one, but I wonder whether there are other implications as well, whether the difference would be huge, moderate, or small, what that depends on, etc.
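To gesture at what I have in mind, here's a minimal toy sketch (my own construction, not the post's actual model): suppose our forecast of each year's effect carries additive noise whose standard deviation grows as c · t^alpha, and suppose the value we can capture by acting on the forecast each year is discounted by the Bayesian shrinkage factor 1/(1 + sigma(t)^2), i.e. the fraction of a unit-variance prior signal that survives the noise. The parameter choices (c = 0.05, a 1,000-year horizon) are arbitrary illustrations.

```python
import numpy as np

def captured_value(alpha, c=0.05, horizon=1000):
    """Total value captured over `horizon` years when forecast noise
    has standard deviation sigma(t) = c * t**alpha."""
    t = np.arange(1, horizon + 1)
    # Bayesian shrinkage: with a unit-variance prior on each year's
    # effect and additive Gaussian noise of variance sigma^2, the
    # fraction of the signal that survives is 1 / (1 + sigma^2).
    shrinkage = 1.0 / (1.0 + (c * t ** alpha) ** 2)
    return shrinkage.sum()

for alpha in (0.5, 1.0, 1.5):  # sublinear, linear, superlinear noise growth
    print(f"alpha = {alpha}: total captured value ~ {captured_value(alpha):.0f}")
```

With these (arbitrary) numbers the totals come out at roughly 500, 30, and 9 respectively, and in the sublinear case the total keeps growing roughly logarithmically as the horizon lengthens, whereas the linear and superlinear cases converge. So at least in this toy setup the difference looks more like an order of magnitude than a rounding error, though that obviously depends heavily on c and alpha.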
One reason this seems interesting to me is that I'd currently guess that noise does indeed increase sublinearly. This is based on two things:
I think a small amount of data from Tetlock gives weak evidence of this, if I'm interpreting it and Muehlhauser's commentary on it (here) correctly. (I try to make the relevant extrapolation concrete with a small calculation below.)
See in particular footnote 17 from that link. Here's the key quote (but without the useful graph, methodological info, etc.):
"For our purposes here, the key results shown above are, roughly speaking, that (1) regular forecasters did approximately no better than chance on this metric at ~375 days before each question closed, (2) superforecasters did substantially better than chance on this metric at ~375 days before each question closed, (3) both regular forecasters and superforecasters were almost always 'on the right side of maybe' immediately before each question closed, and (4) superforecasters were roughly as accurate on this metric at ~125 days before each question closed as they were at ~375 days before each question closed.
If GJP had involved questions with substantially longer time horizons, how quickly would superforecaster accuracy have declined with longer time horizons? We can't know, but an extrapolation of the results above is at least compatible with an answer of 'fairly slowly.'"
I think that, a priori, I'd predict that noise would increase sublinearly. But I haven't tried to work out what's driving that intuition, and it's possible that it just comes from having already seen that data from Tetlock and Muehlhauser's commentary.
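To make the extrapolation in the Muehlhauser quote slightly more concrete (again, just my own back-of-the-envelope, not anything from that report): if noise scaled with lead time raised to some exponent, tripling the lead time from ~125 to ~375 days would multiply the noise by:

```python
# Hypothetical illustration: noise growth from a ~125-day to a
# ~375-day lead time under different growth exponents
# (alpha = 0.5 sublinear, 1.0 linear, 1.5 superlinear).
for alpha in (0.5, 1.0, 1.5):
    print(f"alpha = {alpha}: noise multiplier ~ {(375 / 125) ** alpha:.2f}x")
# -> 1.73x, 3.00x, 5.20x
```

Roughly flat superforecaster accuracy across that tripling seems somewhat easier to square with the ~1.7x multiplier than with the 3x or 5.2x ones, though with this little data that's obviously very weak evidence.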