I also think almost any individual person working on AI safety is very unlikely to avert existential catastrophe, i.e. they only reduce x-risk probability by say ~50 in a million (here’s a model; LW post) or less. I wouldn’t devote my life to religion or converting others for infinite bliss or to avoid infinite hell for such a low probability that the religion is correct and my actions make the infinite difference, and the stakes here are infinitely larger than AI risk’s.[1] That seems pretty Pascalian to me. So spending my career on AI safety also seems pretty Pascalian to me.
I think people wouldn’t normally consider it Pascalian to enter a positive-total-returns lottery with a 1/20,000 (i.e. 50 in a million) chance of winning?
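To make the arithmetic concrete, here’s a quick sketch of why a 1/20,000 chance can still be worth taking; the stake and payoff numbers are my own hypotheticals, not from the comment above:

```python
# Hypothetical numbers for illustration only.
p_win = 50 / 1_000_000          # "50 in a million"
assert p_win == 1 / 20_000      # the same probability, written two ways

ticket_cost = 1.0               # hypothetical stake
payoff = 30_000.0               # hypothetical prize, chosen so EV is positive
expected_value = p_win * payoff - ticket_cost
print(expected_value)           # roughly 0.5 gained per ticket, in expectation
```

The point is just that a low probability alone doesn’t make a gamble Pascalian if the expected value is clearly positive.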
And people don’t consider it Pascalian to vote, to fight in a war, or to advocate for difficult-to-pass policy that might reduce the chance of nuclear war?
Maybe you have a different-than-typical perspective on what it means for something to be Pascalian?
I might have higher probability thresholds than most for what I consider Pascalian, but it’s also a matter of how much of my time and resources I have to give. This feels directly intuitive to me, and it can be cashed out in terms of normative uncertainty about decision theory and my risk appetite: I limit my budget for views that are more risk-neutral.
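One crude way to picture the "budget" idea, with weights and hours that are entirely made up for illustration:

```python
# Made-up numbers: a toy split of work time across decision-theoretic views.
total_hours = 2_000                    # hypothetical hours worked per year
risk_neutral_weight = 0.2              # budget granted to risk-neutral EV reasoning
risk_averse_weight = 1 - risk_neutral_weight

# Long-shot, Pascalian-flavored work draws only on the risk-neutral budget;
# the rest goes to work that also looks good under more risk-averse views.
longshot_hours = total_hours * risk_neutral_weight
robust_hours = total_hours * risk_averse_weight
print(longshot_hours, robust_hours)    # e.g. 400 vs 1600 hours
```

On this picture, whether a cause is "Pascalian for me" depends not just on the probability but on how much of my limited budget it would consume.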
Voting is low-commitment, so it isn’t Pascalian in this way. Devoting your whole career to nuclear policy seems Pascalian to me; working on nuclear policy among many other things doesn’t. Fighting in a war solely with the aim of deciding the main outcome, who wins or loses, seems Pascalian.
Some of these things may have other benefits regardless of whether you change the main binary-ish outcome you might have in mind. That can make them not Pascalian.
Also, people do these things without thinking much or at all about the probability that they’d affect the main outcome. Sometimes they’re “doing their part”, or it’s a matter of identity or signaling. Those aren’t necessarily bad reasons. But they’re not even bothering to check whether it would be Pascalian.
EDIT: I’d also guess the people self-selecting into doing this work, especially without thinking about the probabilities, would have high implied probabilities of affecting the main binary-ish outcome, if we interpreted them as primarily concerned with that.
[1] Maybe not actually infinitely larger. They could both be infinite.