People not worried about AI risk often have much lower risk estimates than 50 %. I guess the risk of human extinction over the next year is 10^-8. I would say a 10^-100 chance of creating 10^100 years of fully healthy human life is as good as a 100 % chance of creating 1 year of fully healthy life. However, even if I thought the risk of human extinction over the next year was 1 %, I would not conclude decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities.

Longtermists typically come up with huge amounts of benefits (e.g. helping 10^50 human simulations), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. helping 10^40 human simulations). Yet, the amount of benefits is not independent from their probability. For reasonable distributions describing the benefits, the expected benefits coming from very large benefits will be negligible. For example, if the benefits are described by a power law distribution with tail index alpha > 0, their probability density will be proportional to benefits^-(1 + alpha), so the expected benefits linked to a given amount of benefits will be proportional to benefits*benefits^-(1 + alpha) = benefits^-alpha. This decreases with the benefits, so the expected benefits coming from astronomical benefits will be negligible.
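For concreteness, here is a minimal sketch of the power law example, assuming a Pareto distribution with minimum 1 and a hypothetical tail index alpha = 0.5:

```python
# Minimal sketch of the power-law argument, assuming benefits follow a Pareto
# distribution with minimum 1 and a hypothetical tail index alpha = 0.5, so the
# density is f(x) = alpha * x**-(1 + alpha) for x >= 1.
alpha = 0.5  # hypothetical tail index

def expected_benefits_density(x, alpha):
    """Contribution to the expected benefits per unit of benefits, x * f(x)."""
    return x * alpha * x ** -(1 + alpha)  # proportional to x**-alpha

for exponent in [1, 10, 50]:
    benefits = 10.0 ** exponent
    print(f"benefits = 10^{exponent}: contribution = {expected_benefits_density(benefits, alpha):.1e}")

# The contribution falls as benefits**-alpha, so astronomical benefits
# (e.g. 10^50) add next to nothing to the expected benefits.
```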
I do not think the reasoning above applies to the best ways of helping invertebrates. With longtermist arguments, the probability of the benefits decreases with the benefits. In contrast, the probability of the Shrimp Welfare Project (SWP) being beneficial, which is roughly proportional to the welfare range of shrimp, does not depend on the number of shrimp they help per $. SWP finding ways to improve their operations such that they can stun 2 times as many shrimp per $ would not change one's best guess for the welfare range of shrimp, so SWP's cost-effectiveness would become 2 times as large. In contrast, I think longtermists finding that the universe can after all support 10^60 human simulations instead of 10^50 would not change the value of e.g. research on digital minds, because the expected value coming from large benefits is negligible.
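Here is a toy comparison of the two cases, assuming (for illustration only) that SWP's cost-effectiveness factors into an independent probability of benefit and a scale of shrimp helped per $, and using made-up numbers:

```python
# Toy comparison of the two cases, using made-up numbers. The factorisation of
# SWP's cost-effectiveness into an independent probability of benefit
# (proportional to the welfare range of shrimp) and a scale (shrimp helped
# per $) is an assumption for illustration.

def swp_style_value(probability_of_benefit, shrimp_per_dollar):
    # The probability does not depend on the scale, so the expected value
    # doubles if SWP stuns 2 times as many shrimp per $.
    return probability_of_benefit * shrimp_per_dollar

def longtermist_style_value(claimed_benefits, alpha=0.5):
    # The probability shrinks as claimed_benefits**-(1 + alpha), so the
    # expected value shrinks as claimed_benefits**-alpha.
    probability = claimed_benefits ** -(1 + alpha)
    return claimed_benefits * probability

print(swp_style_value(0.5, 1_000), swp_style_value(0.5, 2_000))       # 500.0 1000.0 (doubles with scale)
print(longtermist_style_value(1e50), longtermist_style_value(1e60))   # about 1e-25 and 1e-30 (shrinks with scale)
```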
I think you are one of the few people who disregard x-risk and have a well-considered probability estimate for which disregarding it makes sense. (Modulo some debate around how to handle tiny probabilities of enormous outcomes.)
I was more intending to critique the sort of people who say "AI risk isn't a concern" without having any particular P(doom) in mind, which in my experience is almost all such people.
Thanks, Michael. I agree AI risk should not be dismissed without looking into how large it is. On the other hand, there is not an obvious relationship between existential risk and the cost-effectiveness of decreasing it. All else equal, the cost-effectiveness of decreasing the risk falls as the risk increases, because a higher risk decreases the expected value of the future, unless the risk is concentrated in a time of perils. In addition, a higher risk of human extinction does not necessarily imply a higher existential risk, because some AI systems may well be sentient.
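As a minimal sketch of this point, assume a constant annual extinction risk and a constant value per surviving year; then the expected value of the future is roughly inversely proportional to the risk, so the same absolute risk reduction is worth less when the risk is higher:

```python
# Toy model of the relationship between the risk level and the value of
# reducing it, assuming a constant annual extinction risk r and a constant
# value v per surviving year. The expected value of the future is then
# roughly v / r (a geometric series), and removing an absolute amount delta
# of this year's risk adds roughly delta * v / r.

def value_of_risk_reduction(annual_risk, delta, value_per_year=1.0):
    expected_future_value = value_per_year / annual_risk
    return delta * expected_future_value

# The same absolute reduction is worth less when the background risk is
# higher, because a riskier future is worth less in expectation.
print(value_of_risk_reduction(annual_risk=0.001, delta=1e-4))  # about 0.1
print(value_of_risk_reduction(annual_risk=0.01, delta=1e-4))   # about 0.01
```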