I’m not sold on how well calibrated their predictions of catastrophe are, but I think they have contributed a large number of novel & important ideas to the field.
I don’t think they would claim to have significantly better predictive models in a positive sense; rather, they have far stronger models of what isn’t possible and cannot work for ASI, which constrains their expectations about the long term far more. (I’m not sure I agree with, say, Eliezer’s view that governance is useless—but he has a very clear model, which is unusual.) I also don’t think their view of timelines or takeoff speeds is really a crux—they have claimed that even if ASI is decades away, we still can’t rely on current approaches to scale.
“Believe” being the operative word here. I really don’t think they do.