A thought about AI x-risk discourse and the debate over how “Pascal’s Mugging”-like AIXR concerns are, and where this causes confusion between the concerned and the sceptical.
I recognise a pattern where a sceptic will say “AI x-risk concerns are like Pascal’s wager/are Pascalian and not valid” and then an x-risk advocate will say “But the probabilities aren’t Pascalian. They’re actually fairly large”[1], which usually devolves into a “These percentages come from nowhere!” “But Hinton/Bengio/Russell...” “Just useful idiots for regulatory capture...” discourse doom spiral.
I think a fundamental miscommunication here is that, while the sceptic is using/implying the term “Pascalian”, they aren’t concerned[2] with the probability of the risk being incredibly small while the impact is high; they’re instead concerned about trying to take actions in the world—especially ones involving politics and power—on the basis of subjective beliefs alone.
In the original wager, we don’t need to know anything about the evidential record for a certain God existing or not; if we simply accept Pascal’s framing and premisses, then we end up with the belief that we ought to believe in God. Similarly, when this term comes up, AIXR sceptics are concerned about changing beliefs/behaviour/enacting laws based on arguments from reason alone that aren’t clearly connected to an empirical track record. Focusing on which subjective credences are proportionate to act upon is not likely to be persuasive compared to providing the empirical goods, as it were.
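For reference, here is the standard decision matrix behind the wager (a textbook reconstruction, not something from this post). Under these premisses, believing dominates in expectation for any nonzero credence p, whatever the evidential record says, which is exactly the move the sceptic objects to:

$$
\begin{array}{l|cc}
 & \text{God exists } (p > 0) & \text{God does not exist } (1 - p) \\
\hline
\text{Believe} & +\infty & -c \\
\text{Don't believe} & 0 & 0
\end{array}
$$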
[1] Let’s say x > 5% over the rest of the 21st century, for the sake of argument.
[2] Or at least it’s not the only concern; perhaps the use of EV in this way is a crux, but I think it’s a different one.
I also think almost any individual person working on AI safety is very unlikely to avert existential catastrophe, i.e. they only reduce x-risk probability by, say, ~50 in a million (here’s a model; LW post) or less. I wouldn’t devote my life to a religion, or to converting others for infinite bliss or to avoid infinite hell, on such a low probability that the religion is correct and my actions make the infinite difference, and those stakes are infinitely larger than AI risk’s.[1] That seems pretty Pascalian to me. So spending my career on AI safety also seems pretty Pascalian to me.
[1] Maybe not actually infinitely larger; they could both be infinite.
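To make the numbers concrete, here is a minimal sketch of the expected-value arithmetic at issue. The ~50-in-a-million figure is from the comment above; the stakes value is purely an assumption for illustration:

```python
# Illustrative expected-value arithmetic for the "is this Pascalian?" question.
# p_avert comes from the comment above; the stakes figure is an assumption
# chosen only to make the sketch concrete.

p_avert = 50 / 1_000_000  # ~50-in-a-million chance one career averts catastrophe
stakes = 8_000_000_000    # illustrative stakes: roughly the current world population

expected_lives = p_avert * stakes
print(f"Expected lives saved per career: {expected_lives:,.0f}")  # 400,000

# 50 per million is the same probability as the 1/20,000 lottery mentioned below.
print(50 / 1_000_000 == 1 / 20_000)  # True
```

The disagreement is not over this multiplication, but over whether a subjective p_avert of this kind licenses the action at all.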
I think people wouldn’t normally consider it Pascalian to enter a positive-total-returns lottery with a 1/20,000 (50 per million) chance of winning?
And people don’t consider it to be Pascalian to vote, to fight in a war, or to advocate for difficult-to-pass policy that might reduce the chance of nuclear war?
Maybe you have a different-than-typical perspective on what it means for something to be Pascalian?
I might have higher probability thresholds for what I consider Pascalian, but it’s also a matter of how much of my time and resources I have to give. This feels directly intuitive to me, and it can be cashed out in terms of normative uncertainty about decision theory/my risk appetite. I limit my budget for views that are more risk neutral.
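A toy sketch of that budgeting idea, read through an expected-value lens (the capped share and the probability cutoff below are assumptions for illustration, not figures from this comment):

```python
# Toy model of capping exposure to risk-neutral reasoning: a fixed share of
# one's resources follows risk-neutral expected value, and the remainder
# follows a risk-averse rule that ignores sufficiently-low-probability bets.
# Both parameters are assumptions for illustration.

RISK_NEUTRAL_SHARE = 0.2  # assumed cap on the risk-neutral "view"
P_THRESHOLD = 1e-3        # assumed cutoff below which the risk-averse view opts out

def backing(p_success: float) -> float:
    """Fraction of total resources backing a long-shot intervention."""
    risk_averse_share = 1.0 - RISK_NEUTRAL_SHARE
    return RISK_NEUTRAL_SHARE + (risk_averse_share if p_success >= P_THRESHOLD else 0.0)

print(backing(50 / 1_000_000))  # 0.2 -> a long shot gets only the capped share
print(backing(0.05))            # 1.0 -> a likely-enough bet can get everything
```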
Voting is low commitment, so it isn’t Pascalian in this way. Devoting your career to nuclear policy seems Pascalian to me; working on nuclear policy among many other things you work on doesn’t. Fighting in a war only with the aim of deciding the main outcome of who wins/loses seems Pascalian.
Some of these things may have other benefits regardless of whether you change the main binary-ish outcome you might have in mind. That can make them not Pascalian.
Also, people do these things without thinking much or at all about the probability that they’d affect the main outcome. Sometimes they’re “doing their part”, or it’s a matter of identity or signaling. Those aren’t necessarily bad reasons. But they’re not even bothering to check whether it would be Pascalian.
EDIT: I’d also guess the people self-selecting into doing this work, especially without thinking about the probabilities, would have high implied probabilities of affecting the main binary-ish outcome, if we interpreted them as primarily concerned with that.
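A back-of-the-envelope version of that “implied probability” reading (a sketch under an expected-value interpretation; both numbers are assumptions for illustration):

```python
# If we (perhaps wrongly) model someone as acting purely to flip the main
# binary outcome, their choice implies a minimum success probability:
# they act iff p * value_of_outcome >= cost, so p_implied = cost / value.
# Both numbers are assumptions for illustration.

career_cost = 1.0           # opportunity cost of the career, normalized
value_of_outcome = 1_000.0  # value placed on flipping the outcome, in the same units

p_implied = career_cost / value_of_outcome
print(f"Implied minimum probability of affecting the outcome: {p_implied}")  # 0.001
```

The less someone values the outcome relative to the career they spend on it, the higher the success probability their choice implies.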