The demand of the original Pascal’s wager is (depending on the religion) devoting a significant share of your life to religion and forcing yourself to believe something you think is false. This may be a big sacrifice even if you thought the probability of the religion in question being correct were 1% (and there were no competing religions or infinities to consider). I would feel aversion to giving in to the wager at such probabilities, and it still feels Pascalian to me. Those devoting their careers (and donations, if any) primarily to extinction risk reduction, and perhaps existential risk reduction generally*, are committing most of their altruistic efforts to making a difference, with only very small probability**, to the outcome they’re primarily targeting. Part of me would like my career, over my life, to probably make things much better (and probably not make them much worse).
Someone can have normative uncertainty (decision-theoretic uncertainty) about how much probability they can ignore over the course of their entire life, and this could span a wide range, even approaching or exceeding 50%. Under some approaches to decision-making under normative uncertainty, this might recommend devoting some resources to small risks, but not devoting all or most resources to them. Someone could have a “longtermist” EA bucket, but it need not be their largest bucket. Of course, it could still very well be their largest EA bucket, depending on their beliefs. It’s currently not my largest bucket.
* There’s a decent probability that the far future is predictably and continuously sensitive to the state of affairs today and in the next few decades, e.g. to the distributions of values and tendencies in the population. Extinction, on the other hand, doesn’t really come in degrees.
** However, acausal influence in a multiverse may significantly increase the probability of making a difference through semi-independent trials (conditional on some baseline factors like the local risk of extinction and the difficulty of AI safety), possibly even making it more likely than not that they have a large impact. I’m mostly thinking about correlated decisions across spatially and acausally separated agents in a universe that’s spatially unbounded/infinitely large. There’s also the many-worlds interpretation of quantum mechanics.