Some off-topic comments, not specific to you or Yudkowsky:
the belief was so analogous to his current belief about AI… since he had thought a lot about the subject and was already highly engaged in the relevant intellectual community
It seems to me (though I could be mistaken) that the phrase “has thought a lot about X” comes up fairly often in EA contexts, where it is taken to imply being very well-informed about X. I don’t think this is good reasoning. Thinking about something is probably necessary for understanding it well, but it is certainly not sufficient.
When an idea or theory is very fringe, there is a strong selection effect on who joins the relevant intellectual community. This means that even the community’s average views are sometimes not good evidence about the underlying question. For example, to estimate the probability of doom from AI in this century, are alignment researchers a good reference class? They all, almost by definition, believe AI is an existential risk to begin with. I’m not sure I have a solution, since “AI researchers in general” isn’t a good reference class either; many of them may never have given any thought to whether AI is dangerous.
Strong +1 on this. In fact, it seems that the more someone thinks about something and takes a public position on it with strong confidence, the more incentive they have to stick to that position. That is why making explicit forecasts and building a forecasting track record is so important in countering this tendency. If arguments cannot be resolved by events in the real world, there is not much incentive to change one’s mind, especially about something speculative and abstract for which one can generate arguments ad infinitum through further speculation.
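To make the “track record” point a bit more concrete, here is a minimal sketch (in Python, with made-up forecasts and outcomes purely for illustration) of how explicit probabilistic forecasts can be scored once events resolve, e.g. with a Brier score:

```python
# Minimal sketch: scoring explicit forecasts once the events resolve.
# The forecasts and outcomes below are made up purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and realized outcomes (0 or 1).
    Lower is better; someone who always says 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities stated before the fact...
forecasts = [0.9, 0.2, 0.7, 0.5]
# ...and what actually happened (1 = event occurred, 0 = it did not).
outcomes = [1, 0, 0, 1]

print(brier_score(forecasts, outcomes))  # 0.1975
```

The point is simply that a number like this accumulates over many forecasts, so sticking to a confident position regardless of events eventually shows up in the score.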
On your example: the question of AI existential risk this century seems downstream of the question of the probability of AGI this century, and one can find some potential reference classes for that: AI safety research, general AI research, computer science research, scientific research, technological innovation, etc. None of these are perfect reference classes, but they are at least something to work with. Conditional on AGI being possible this century, one can then form an opinion on how high the conditional probability of doom would need to be to warrant concern.
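As a rough sketch of that decomposition (the numbers below are placeholders chosen purely for illustration, not estimates I am defending), the doom question factors into the probability of AGI this century times the probability of doom conditional on AGI:

```python
# Rough sketch of the decomposition described above.
# Both probabilities are placeholders purely for illustration, not estimates.

p_agi_this_century = 0.5   # prior drawn from whatever reference class one trusts
p_doom_given_agi = 0.1     # conditional judgment, formed separately

p_doom_this_century = p_agi_this_century * p_doom_given_agi
print(p_doom_this_century)  # 0.05
```

Splitting the question this way at least lets the reference-class evidence bear on the first factor, while the second factor is argued on its own terms.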