“50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death.” I think, perhaps, this line is infelicitous.
The point is that all 100 million people have an ex-post complaint, as there is a possible outcome in which all 100 million people die (if we don’t intervene). However, these complaints need to be discounted by the improbability of their occurrence.
To see why we discount, imagine we could save someone from a horrid migraine, but doing so creates a 1⁄100 billion chance some random bystander would die. If we don’t discount ex-post, then ex-post we are comparing a migraine to death—and we’d be counterintuitively advised not to alleviate the migraine.
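To make the discounting move concrete, here is a minimal Python sketch. The badness values are assumptions I've made up purely for illustration; only the 1/100 billion probability comes from the case itself:

```python
# Illustrative complaint weights -- these numbers are assumptions for the
# sketch, not anything from the case above.
MIGRAINE_BADNESS = 1.0                    # assumed badness of the migraine
DEATH_BADNESS = 1_000_000.0               # assumed badness of a death
P_BYSTANDER_DEATH = 1 / 100_000_000_000   # the 1/100 billion chance

# Without discounting, the ex-post comparison is migraine vs. death,
# so death wins and we are told not to relieve the migraine.
undiscounted = DEATH_BADNESS
print(undiscounted > MIGRAINE_BADNESS)    # True: the counterintuitive verdict

# With discounting, the bystander's complaint is weighted by its
# probability, and the migraine sufferer's complaint dominates.
discounted = P_BYSTANDER_DEATH * DEATH_BADNESS   # = 1e-05
print(discounted > MIGRAINE_BADNESS)      # False: relieve the migraine
```

The verdict survives any choice of badness values, so long as a death is not 100 billion times worse than a migraine.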
Once you discount, you end up with 100 million complaints of death, each weighted at 0.00005% of its full strength (that is, discounted by 99.99995%).
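As a quick sanity check on that arithmetic (a sketch, assuming each complaint is simply weighted by its probability of being realised):

```python
# The 0.00005% chance, expressed as a probability, is 5e-7.
population = 100_000_000
p_death = 0.00005 / 100          # 0.00005% -> 5e-7

# 100 million death complaints, each discounted by 99.99995%,
# i.e. each weighted at 0.00005% of full strength.
aggregate = population * p_death
print(aggregate)                 # 50.0 -- the strength of 50 certain deaths
```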
I hope this clears up the confusion, and maybe helps with your concerns about instability?
Thanks! But to clarify, what I’m wondering is: why take unrealized probabilities to create ex post complaints at all? On an alternative conception, you have an ex post complaint if something bad actually happens to you, and not otherwise.
(I’m guessing it’s because it would mean that we cannot know what ex post complaints people have until literally after the fact, whereas you’re wanting a form of “ex post” contractualism that is still capable of being action-guiding—is that right?)
Your guess is precisely right. Ex-post evaluations developed as an alternative to ex-ante approaches to decision-making under risk. Waiting until the outcome is realised does not help us make decisions; thinking about how we can justify ourselves under each of the outcomes we know could be realised does.
The name can definitely be misleading; I see how it can pull people into debates about retrospective claims and objective/subjective permissibility.
Sorry, I edited this as I had another thought.
I apologize for the confusion. I've updated the section containing the inaccurate statement @Richard Y Chappell quoted.