I agree that it’s not too concerning, which is why I consider it only weak evidence. Nevertheless, there are some changes that don’t fit the patterns you described. For example, it seems to me that newer AI safety researchers tend to consider intelligence explosions less likely, despite their being a key component of argument 1. For more details along these lines, see the exchange between Wei Dai and me in the comments on the Alignment Forum version of this post.