That question’s definition of AGI is probably too weak; it will likely resolve positively a good deal before we have a dangerously powerful AI.
Maybe, though e.g. combined with
it would still result in a high likelihood of very short timelines to superintelligence (there can be inconsistencies between Metaculus forecasts, e.g. with
as others have pointed out before). I’m not claiming we should rely only on these Metaculus forecasts, or that we should plan only for [very] short timelines, but I get the impression that the community as a whole, and OpenPhil in particular, haven’t really updated their spending plans in light of these considerations (or at least haven’t made this public, to the best of my awareness), even after updating to shorter timelines.