Somehow, the 1-life-vs-1000 thought experiment made part of my brain feel like this was decision-making in an emergency, where you have to make a snap judgment in seconds. And in a real emergency, I think saving 1 life might very well be the right choice, just because: how sure are you that the chance is really 1%? Are you sure you didn't get it wrong? Whereas if you are 99.9% certain you can save one life, you must have some really clear, robust reason to think that.
If I imagine a much slower scenario, so that I can convince myself I really am sure the probability of saving 1000 lives is actually known to be 1%, it seems like a much clearer choice to take the gamble and save 10 lives in expectation.
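(As a quick sanity check of the arithmetic I'm assuming here, in the standard setup of one life saved for certain versus a 1% chance of saving 1000: the gamble is worth $0.01 \times 1000 = 10$ expected lives, against $1$ for the certain option.)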
My brain still comes up with a lot of intuitive objections to utility maximization, but I hadn't noticed this one before.