I think this might not be irrationality, but a genuine difference in values.
In particular, I think something like a disagreement over discount rates is at the core of a lot of disagreements on AI safety, and to be blunt, you shouldn't expect convergence unless you successfully persuade people to change that underlying value.
I don't think it's a discount rate disagreement (especially given short timelines); I think it's more that people haven't really thought about why their p(doom|ASI) is low. But people seem remarkably resistant to actually tackling the cruxes of the object-level arguments, or to fully extrapolating the implications of what they do agree on. When they do engage, their arguments invariably come up short.