I share Kit's concerns about assuming binary outcomes, but I'd also like to add: Even if we assume that the outcome of a donation is binary (you have 0 impact with probability P, and X impact with probability 1-P), how can we tell what a good upper bound might be for the P we'd be willing to support?
Almost everyone would agree that Pascal's Mugging stretches the limits of credulity, but something like a 1-in-10-million chance of accomplishing something big isn't ridiculous on its face. That's the same order of magnitude as, say, swinging a U.S. Presidential election, or winning the lottery. Plenty of people would agree to take those odds. And it doesn't seem unreasonable to think that giving $7,000 to fund a promising line of research could have a 1-in-10-million chance of averting human extinction.
--
Intuitively, I think my upper bound on P is linked in some sense to the scale of humanity's resources. If humanity is going to end up using, say, $70 billion over the next 20 years to reduce X-risk, that's enough to fund ten million $7,000 grants. If each of those grants has something like a 1-in-10-million chance of letting humanity survive, that feels like a good use of money to me. (By comparison, $70 billion is a little more than one year of U.S. foreign aid funding.) The same would still go for 1-in-a-billion chances, if that were really the best chances we had.
On the other hand, if we only had access to 1-in-a-trillion chances, I'd be much more skeptical of X-risk funding, since under those odds we could throw all of our money at the problem and still be extraordinarily unlikely to accomplish anything at all.
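To make the arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The grant count and per-grant odds are just the illustrative figures from above, not real estimates of anyone's actual chances:

```python
# Rough sketch: if humanity funds N independent long-shot grants, each with
# success probability p, how likely is it that at least one of them works?
# (Illustrative numbers from the comment above, not real estimates.)

def chance_of_at_least_one_success(n_grants: int, p_success: float) -> float:
    """P(at least one success) = 1 - P(every single grant fails)."""
    return 1 - (1 - p_success) ** n_grants

n_grants = 70_000_000_000 // 7_000  # $70 billion buys ten million $7,000 grants

for p in (1e-7, 1e-9, 1e-12):  # 1-in-10-million, 1-in-a-billion, 1-in-a-trillion
    print(f"per-grant odds {p:.0e}: chance at least one grant succeeds ~ "
          f"{chance_of_at_least_one_success(n_grants, p):.3g}")
```

Under those assumptions, 1-in-10-million per-grant odds give the portfolio as a whole roughly a 63% chance of at least one success, 1-in-a-billion gives about 1%, and 1-in-a-trillion gives around 0.001%, which is the regime where we could spend everything and still almost certainly accomplish nothing.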
Of course, we can't really tell whether current X-risk opportunities have odds in the range of 1-in-a-million, 1-in-a-billion, or even worse. But I think we're probably closer to "million" than "trillion", so I don't feel like I'm being mugged at all, especially if early funding can give us more information about our "true" odds and make those odds better instead of worse.
(That said, I respect that others' intuitions differ sharply from mine, and I recognize that the "scale of humanity's resources" idea is pretty arbitrary.)
It's good to know lots of people have this intuition; I think I do too, though it's not super strong in me.
Arguably, when p is below the threshold you mention, we can make some sort of pseudo-law-of-large-numbers argument for expected utility maximization, like "If we all follow this policy, probably at least one of us will succeed." But when p is above the threshold, we can't make that argument.
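One rough way to cash that out (just a sketch): if N of us each back a long shot whose failure probability is p, the chance that at least one of us succeeds is 1 - p^N, which is substantial roughly when the per-attempt success chance 1 - p is at least on the order of 1/N. So the threshold on p would be something like 1 - 1/N, with N set by how many such attempts humanity can realistically make.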
So the idea is: Reject expected utility maximization in general (perhaps for reasons which will be discussed in subsequent posts!), but accept some sort of "If following a policy seems like it will probably work, then do it" principle, and use that to derive expected utility maximization in ordinary cases.
All of this needs to be made more precise and explored in more detail. I'd love to see someone do that.
(BTW, upcoming posts remove the binary-outcomes assumption. Perhaps it was a mistake to post them in sequence instead of all at once...)