For example, is there any level of consensus against ~AGI by 2070 (or some other date) that would be strong enough to move your forecast by 10 percentage points?

Fair question. I'd say little weight, but if it were far enough from my view I would update a little. My view also may not be representative of other forecasters, as is evident from Misha's comment.
From the original Grace et al. survey (and I think the more recent ones as well, though I haven't read those as closely), the ML researchers clearly had very incoherent views depending on the question being asked and the elicitation technique. I think this provides some evidence that they haven't thought about it that deeply and that we shouldn't take the results too seriously. (Some incoherence is expected, but I think they gave wildly different answers for HLMI (human-level machine intelligence) and for full automation of labor.)
So I think I'd split up the thresholds by whether the experts appear somewhat coherent vs. still very incoherent.
My current forecast for ~AGI by 2100, barring pre-AGI catastrophe, is 80%. To move it to 70% based just on a survey of ML experts, I think I'd have to see something like one of:
- ML experts still appear to be very incoherent, but are giving a ~10% chance of ~AGI by 2100 on average across framings.
- ML experts appear to be somewhat coherent, and are giving a ~25% chance of ~AGI by 2100.
(But I haven't thought about this a lot; these numbers could change substantially on reflection or discussion/debate.)
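For concreteness, here's a minimal sketch of what thresholds like these imply, assuming you combine your own forecast with the survey average via a simple linear opinion pool (my own framing for illustration, not anything stated above; other pooling rules would give different numbers). Moving from 80% to 70% against a coherent survey average of ~25% corresponds to putting roughly 18% weight on the survey, and ~14% weight if an incoherent survey averages ~10%.

```python
def pool(p_own: float, p_experts: float, w: float) -> float:
    """Linear opinion pool: weight w on the expert average, 1 - w on yourself."""
    return (1 - w) * p_own + w * p_experts

def implied_weight(p_own: float, p_experts: float, p_target: float) -> float:
    """Weight w on the experts needed to move p_own to p_target."""
    return (p_own - p_target) / (p_own - p_experts)

p_own = 0.80     # current forecast for ~AGI by 2100
p_target = 0.70  # forecast after updating on the survey

# "Somewhat coherent" experts at ~25%:
w_coherent = implied_weight(p_own, 0.25, p_target)    # ~0.18
# "Very incoherent" experts at ~10%:
w_incoherent = implied_weight(p_own, 0.10, p_target)  # ~0.14

print(f"weight on coherent experts:   {w_coherent:.2f}")
print(f"weight on incoherent experts: {w_incoherent:.2f}")
print(f"check: pooled forecast = {pool(p_own, 0.25, w_coherent):.2f}")  # 0.70
```

So under this (admittedly crude) model, the thresholds above amount to giving the survey something like 15-20% of the weight, with incoherent experts needing a more extreme average to earn the same update.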