I think roughly the most reasonable approach for an agent who wishes to ignore small probabilities is to ignore probability differences adding up to at most some specified threshold over the sequence of all of their own future actions and the entire sequence of future outcomes.* We can make some finer-grained distinctions on such an account, and commonsense personal prudence seems to have a much higher probability of making a difference than x-risk work, so defining a threshold based on the lifetime probabilities of commonsense personal prudence could still exclude x-risk work as “Pascalian”.
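To make the rule concrete, here’s a minimal sketch in Python of how such a lifetime threshold could be applied. The function names, the independence assumption, and all numbers are mine for illustration, not anything from the original discussion:

```python
# Minimal illustrative sketch (assumed, not from the original): treat each
# candidate action as contributing some probability of making a difference,
# and only ignore the whole set if their combined contribution over the
# agent's remaining lifetime stays below the chosen threshold.

def combined_probability(per_action_probs):
    """Probability that at least one of the (assumed independent) actions
    makes a difference: 1 minus the product of the failure probabilities."""
    p_none = 1.0
    for p in per_action_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def should_ignore(per_action_probs, lifetime_threshold=1e-3):
    """Ignore the probability differences only if, combined over all of
    one's own future actions, they stay below the threshold."""
    return combined_probability(per_action_probs) < lifetime_threshold

# Hypothetical example: 10,000 road crossings, each with a 1-in-a-million
# chance that looking both ways averts an injury.
crossings = [1e-6] * 10_000
print(combined_probability(crossings))  # ~0.01, well above a 0.1% threshold
print(should_ignore(crossings))         # False: don't ignore these
```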
While we take some precautions in our daily lives to avoid low-probability risks, these probabilities are often not tiny over the course of our lives.
The lifetime risk of dying in a car crash is about 1%, and consistently wearing a seatbelt seems to have made a decent dent in this on average. Someone may occasionally forgo a seatbelt, but they should wear one almost all of the time it’s available, unless they ignore probabilities below at least ~0.1% (and even then, there are other personal risks to include besides car crashes, so the threshold may need to be even higher) or they prefer the comfort of not wearing a seatbelt over the reduction in the risk of injury or death.
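As a rough worked example (the 50% effectiveness figure below is an assumption for illustration, not a claim from the original):

```python
# Rough illustrative arithmetic; the effectiveness figure is assumed.
lifetime_crash_death_risk = 0.01      # ~1% lifetime risk of dying in a car crash
assumed_seatbelt_effectiveness = 0.5  # hypothetical relative risk reduction from always belting up
absolute_risk_reduction = lifetime_crash_death_risk * assumed_seatbelt_effectiveness
print(absolute_risk_reduction)        # 0.005, i.e. 0.5%, comfortably above a ~0.1% threshold
```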
Based on what you wrote, I think looking both ways before crossing the road would reduce the lifetime risk of injury by at least around 1 in 1,000, i.e. 0.1%, similar to seatbelts.
A good share of people will have some kind of health condition or medical emergency at some point in their lives, and I’d guess most people would benefit from care near the end of their lives, so health insurance might be justified even if we ignore probability differences of up to 10%. That being said, for routine health expenses like regular checkups and dental visits, simply saving what you would have spent on insurance and paying out of pocket could be more efficient.
Someone allocating funding or deciding policy (or with significant influence over these decisions) for aviation safety and other safety-critical activities may be in a position similar to 1 relative to others’ risks, because of (semi-)independent trials over multiple separate possible events across many people, although replaceability could sometimes cut against this significantly. Boeing and Airbus have each had multiple fatal accidents since 2000, although these may have been due to human error that manufacturers couldn’t reasonably have been expected to address. I’d guess increased security after the September 11 attacks probably did save lives, and some individuals may have been in a special position to influence this. I don’t see multiple crashes from the same commercial airline since 2000, though, so someone working for an airline probably wouldn’t make a counterfactual difference through higher standards than the next person who would have been in their position, and it’s hard to estimate the baseline risk in such circumstances.
That being said, people are probably irrationally afraid of plane crashes, lawsuits over avoidable crashes can be very expensive and tarnish reputations, and people may feel comforted by higher standards, so aviation safety may pay for itself for commercial flights and be in the financial interest of shareholders and executives.
Extinction is (basically) a single event that eliminates future extinction risks, so (semi-)independent trials don’t count in its favour the same way as in 2,** although someone might believe extinction risk is high enough to make up for it.
I think those working more directly on small-probability or unrepeatable risks (like extinction**) would usually be much less likely to make a difference than those allocating funding or deciding policy, but financial incentives are often enough to get them to work on these problems without altruistic motivations (and/or they’re badly mistaken about their probability of making a difference), so other people will work on them even if you don’t think you should yourself. A first-order approximate upper bound on an individual’s probability of impact is the combined probability of impact from all those working on the problem divided by the number of individuals working on it (or better, multiplied by the individual’s share of the total future work-hours), and possibly much lower if they’re highly replaceable or there are quickly diminishing marginal returns. I expect this probability to almost always fall below 1 in 1,000 and often below 1 in a million, but it will depend on the particular problem. It’s plausible to me that the average individual working directly on AI safety has a better than 1 in a million chance of averting extinction because of how few people have been working on these risks, but I’m not sure, and the growth of resources and people working on the problem may mean a much lower probability. Biosecurity may be similar, but I’m much less informed. Both AI safety and biosecurity could have much better chances of averting human deaths than averting extinction, but then other considerations could dominate, e.g. farmed and wild animal welfare.
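Here’s a minimal sketch of that first-order upper bound with made-up numbers (the field-wide probability and the work-hour figures are assumptions, not estimates from the original):

```python
# Illustrative only: all inputs below are hypothetical.

def individual_upper_bound(combined_prob, individual_hours, total_hours):
    """First-order upper bound on one person's probability of impact:
    the field-wide probability of impact scaled by that person's share
    of the total future work-hours (ignoring replaceability and
    diminishing returns, which would push the bound lower)."""
    return combined_prob * (individual_hours / total_hours)

# Hypothetical example: the whole field has a 10% chance of averting the
# outcome, and you contribute 40,000 of 400 million total future work-hours.
p = individual_upper_bound(0.10, 40_000, 400_000_000)
print(p)  # 1e-05, i.e. 1 in 100,000, between 1 in 1,000 and 1 in a million
```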
Voting in federal elections and diet change (unless you eat locally produced products?) are probably not supported by their “direct” impacts** for the average individual over the course of their life at a threshold of around 1 in a million, although there may be other reasons to engage in these behaviours.
* Even if you’re skeptical of the persistence of personal identity or its moral relevance, it’s still worth considering how your commitment to ignore some low enough probability differences will affect how much your future selves will ignore.
** However, acausal influence in a multiverse may increase the probability of making a difference significantly through semi-independent trials (conditional on some baseline factors like the local risk of extinction and the difficulty of AI safety), possibly even making them more likely than not to have a large impact. I’m mostly thinking about correlated decisions across spatially and acausally separated agents in a universe that’s spatially unbounded/infinitely large. There’s also the many-worlds interpretation of quantum mechanics.