It might not be a strong response to the whole cause area, but isn’t it the only response to the Bostrom-style arguments linked below? In my experience, those cover the majority of the arguments I hear in favour of x-risk.
Very few one-line arguments are strong responses to entire worldviews that smart people actually believe, so I sort of feel like there’s nothing to see here.
I asked Bostrom about this and he said he never even made this argument in this way to the journalist. Given my experience with the media misrepresenting everything you say and wanting to put sexy ideas into their pieces, I believe him.
The New Yorker writer got it straight out of this paper of Bostrom’s (paragraph starting “Even if we use the most conservative of these estimates”). I’ve seen a couple of people report that Bostrom made a similar argument at EA Global.
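For anyone who hasn’t read it, the reasoning in that paragraph is plain expected-value arithmetic. A rough sketch, assuming I’m recalling Bostrom’s most conservative figure correctly (more than $10^{16}$ future human lives at stake, ignoring space colonization and software minds):

$$
\underbrace{10^{16}\ \text{lives}}_{\text{conservative stake}} \times \underbrace{10^{-8}}_{\text{one millionth of one percentage point}} = 10^{8}\ \text{lives} = 100 \times 10^{6}\ \text{lives}
$$

That is, reducing existential risk by a mere one millionth of one percentage point would be worth a hundred times a million lives, which is the paper’s claim; the “tiny probability of gigantic benefit” framing, and the Pascal’s Mugging worry about it, both start from arithmetic like this.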
Look, no doubt the argument has been made by people in the past, including Bostrom, who wrote it up for consideration as a counterargument. I do think the ‘astronomical waste’ argument should be considered, and it’s far from obvious that ‘this is a Pascal’s Mugging’ is enough to overcome its strength.
But it’s also not the main, only, or best reason on which many people who work on these problems could ground their choice to do so.
So if you dismiss this argument, then before you dismiss the work, move on to what you think is the strongest argument, not the weakest.
I actually think there’s an appropriate sense in which it is the strongest argument—not in that it’s the most robust, but in that it has the strongest implications. I think this is why it gets brought up (and that it’s appropriate to do so).
Agreed—despite being counterintuitive, it’s not obviously a flawed argument.
If I were debating you on the topic, it would be wrong of me to say that you think it’s a Pascal’s mugging. But I read your post as being a commentary on the broader public debate over AI risk research, trying to shift it away from “tiny probability of gigantic benefit” in the way that you (and others) have tried to shift perceptions of EA as a whole or the focus of 80k. And in that broader debate, Bostrom gets cited repeatedly as the respectable, mainstream academic who puts the subject on a solid intellectual footing.
(This is in contrast to MIRI, which as SIAI was utterly woeful and which in its current incarnation still didn’t look like a research institute worthy of the name when I last checked in during the great Tumblr debate of 2014; maybe they’re better now, I don’t know.)
In that context, you’ll have to keep politely telling people that you think the case is stronger than the position your most prominent academic supporter argues from, because the “Pascal’s mugging” thing isn’t going to disappear from the public debate.
To be clear, I have no opinion on what Bostrom did or didn’t say; I’ve never even spoken to him, which is why I said ‘Bostrom-style’. But I have heard this argument, in person, from many of the AI risk advocates I’ve spoken to.
Look, any group in any area can present a primary argument X, be met by (narrow) counterargument Y, and then say ‘but Y doesn’t answer our other arguments A, B, C!’. I can understand why that sequence might be frustrating if you believe A, B, C and don’t personally put much weight on X, but I just feel like that’s not an interesting interaction.
It seems like Rob is arguing against people using Y (the Pascal’s Mugging analogy) as a general argument against working on AI safety, rather than as a narrow response to X.
Presumably we can all agree with him on that. But I’m just not sure I’ve seen people do this. Rob, I guess you have?