I have no opinion on what Bostrom did or didn’t say, to be clear. I’ve never even spoken to him. Which is why I said ‘Bostrom-style’. But I have heard this argument, in person, from many of the AI risk advocates I’ve spoken to.
Look, any group in any area can present a primary argument X, be met by (narrow) counterargument Y, and then say ‘but Y doesn’t answer our other arguments A, B, C!’. I can understand why that sequence might be frustrating if you believe A, B, C and don’t personally put much weight on X, but I just feel like that’s not an interesting interaction.
It seems like Rob is arguing against people using Y (the Pascal’s Mugging analogy) as a general argument against working on AI safety, rather than as a narrow response to X.
Presumably we can all agree with him on that. But I’m just not sure I’ve seen people do this. Rob, I guess you have?