"50 people wouldn't actually die if we don't choose the AI research; instead, 100 million people would face a 0.00005% chance of death."
I'm a bit puzzled by talk of probabilities ex post. Either 100 million people die or zero do. Shouldn't the ex post verdict instead just depend on which outcome actually results?
(I guess the "ex post" view here is really about antecedently predictable ex post outcomes, or something along those lines, but there seems something a bit unstable about this intermediate perspective.)
"50 people wouldn't actually die if we don't choose the AI research; instead, 100 million people would face a 0.00005% chance of death." I think this line is perhaps infelicitous.
The point is that all 100 million people have an ex-post complaint, as there is a possible outcome in which all 100 million people die (if we don't intervene). However, these complaints need to be discounted by the improbability of their occurrence.
To see why we discount, imagine we could save someone from a horrid migraine, but doing so creates a 1-in-100-billion chance that some random bystander would die. If we don't discount ex post, then ex post we are comparing a migraine to a death, and we'd be counterintuitively advised not to alleviate the migraine.
Once you discount, you end up with 100 million complaints of death, each discounted by 99.99995% (i.e., each weighted at 0.00005%), which together are equivalent in weight to the 50 certain deaths.
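The arithmetic behind this can be made explicit. Here is a minimal sketch, assuming (as the discounting view suggests) that an ex-post complaint's weight is simply the harm's badness scaled by its probability; the unit of "badness" is arbitrary and chosen purely for illustration:

```python
# Sketch of ex-post complaint discounting: a complaint's weight is the
# harm's badness multiplied by the probability of the harm occurring.

def discounted_complaint(badness: float, probability: float) -> float:
    """Weight of an ex-post complaint, discounted by its improbability."""
    return badness * probability

# AI-research case: 100 million people each face a 0.00005% chance of death.
n_people = 100_000_000
p_death = 0.00005 / 100  # 0.00005% expressed as a fraction (5e-7)

# Treat the badness of a death as 1 unit; sum the discounted complaints.
total_weight = n_people * discounted_complaint(1.0, p_death)
print(total_weight)  # 50.0 -- matches the weight of 50 certain deaths

# Migraine case: curing the migraine imposes a 1-in-100-billion death risk.
p_bystander = 1 / 100_000_000_000
bystander_weight = discounted_complaint(1.0, p_bystander)
print(bystander_weight)  # ~1e-11, vastly smaller than a migraine's badness
```

So the discounted view delivers both verdicts at once: the 100 million tiny risks aggregate to exactly the 50-death comparison, while the bystander's discounted complaint is negligible next to the migraine.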
I hope this clears up the confusion, and maybe helps with your concerns about instability?
Thanks! But to clarify, what I'm wondering is: why take unrealized probabilities to create ex post complaints at all? On an alternative conception, you have an ex post complaint if something bad actually happens to you, and not otherwise.
(I'm guessing it's because it would mean that we cannot know what ex post complaints people have until literally after the fact, whereas you're wanting a form of "ex post" contractualism that is still capable of being action-guiding; is that right?)
Your guess is precisely right. Ex-post evaluations developed largely as an alternative to ex-ante approaches to decision-making under risk. Waiting until the outcome is realised does not help us make decisions; thinking about how we can justify ourselves under the various outcomes we know could be realised does.
The name can definitely be misleading; I see how it can pull people into debates about retrospective claims and objective/subjective permissibility.
Sorry I edited this as I had another thought.
I apologize for this confusion. I've updated the section containing the inaccurate statement @Richard Y Chappell quoted.