Facebook discussion of this post:
___________________________
Duncan Sabien: This is … not a clean argument. Haven’t read the full post, but I feel the feeling of someone trying to do sleight-of-hand on me.
[Added by Duncan: “my apologies for not being able to devote more time to clarity and constructivity. Mark Xu is good people in my experience.”]
Rob Bensinger: Isn’t ‘my prior odds were x, my posterior odds were y, therefore my evidence strength must be z’ already good enough?
Are you worried that the person might not actually have a posterior that extreme? Like, if they actually took 21 bets like that they’d get more than 1 of them wrong?
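[Rob's "21 bets" framing is just the odds-to-probability conversion; a minimal sketch in Python, using the post's 20:1 figure:]

```python
# Posterior odds of 20:1 correspond to a probability of 20/21 ≈ 0.952,
# so a calibrated reasoner taking 21 such bets expects to lose about one.
posterior_odds = 20
p = posterior_odds / (posterior_odds + 1)
print(round(p, 3))           # 0.952
print(round(21 * (1 - p)))   # 1 expected loss out of 21 bets
```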
Guy Srinivasan: I feel like “fight! fight!” except with the word “unpack!”
Duncan Sabien: > The prior odds that someone’s name is ‘Mark Xu’ are generously 1:1,000,000. Posterior odds of 20:1 implies that the odds ratio of me saying ‘Mark Xu’ is 20,000,000:1, or roughly 24 bits of evidence. That’s a lot of evidence.
This is beyond “spherical frictionless cows” and into disingenuous adversarial levels of oversimplification. I’m having a hard time clarifying what’s sending up red flags here, except to say “the claim that his mere assertion provided 24 bits of evidence is false, and saying it in this oddly specific and confident way will cow less literate reasoners into just believing him, and I feel gross.”
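[For reference, the figure in the quoted passage is plain odds arithmetic; the dispute below is over whether such an update is warranted, not over the calculation itself. A minimal sketch with Mark Xu's numbers:]

```python
import math

# Reconstructing the quoted calculation: likelihood ratio = posterior odds / prior odds.
prior_odds = 1 / 1_000_000    # prior odds that a given person's name is 'Mark Xu'
posterior_odds = 20           # 20:1 posterior odds after hearing the introduction
likelihood_ratio = posterior_odds / prior_odds
print(f"{likelihood_ratio:,.0f}")             # 20,000,000
print(round(math.log2(likelihood_ratio), 2))  # 24.25 bits of evidence
```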
Guy Srinivasan: Could it be that there’s a smuggled intuition here that we’re trying to distinguish between names in a good faith world, and that the bad faith hypothesis is important in ways that “the name might be John” isn’t, and that just rounding it off to bits of evidence makes it seem like the extra 0.1 bits “maybe this exchange is bad faith” are small in comparison when actually they are the most important bits to gain?
(the above is not math)
Marcello Herreshoff: I share Duncan’s intuition that there’s a sleight of hand happening here. Here’s my candidate for where the sleight of hand might live:
Vast odds ratios do lurk behind many encounters, but specifically, they show up much more often in situations that raise an improbable hypothesis to consideration-worthiness (as in Mark Xu’s first set of examples) than in situations where they raise consideration-worthy hypotheses to very high levels of certainty (as in Mark Xu’s second set of examples).
Put another way, how correlated your available observations are to some variable puts a ceiling on how certain you’re ever allowed to get about that variable. So we should often expect the last mile of updates in favor of a hypothesis to be much harder to obtain than the first mile.
Ronny Fernandez: @Duncan Sabien So is the prior higher or is the posterior lower?
Chana Messinger: I wonder if this is similar to my confusion at whether expected conservation of evidence is violated if you have a really good experiment that would give you strong evidence for A if it comes out one way and strong evidence for B if it comes out the other way.
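[No violation occurs in that case, because the two strong updates cancel in expectation; a minimal numeric sketch, with hypothetical numbers:]

```python
# Conservation of expected evidence: even a maximally informative experiment
# has an expected posterior equal to the prior. Numbers below are hypothetical.
prior = 0.5
p_outcome_A = 0.5        # chance the experiment comes out favoring A
post_if_A = 0.99         # posterior on A given that outcome
post_if_B = 0.01         # posterior on A given the other outcome
expected = p_outcome_A * post_if_A + (1 - p_outcome_A) * post_if_B
print(expected)          # 0.5 — equals the prior, so nothing is violated
```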
Ronny Fernandez: @Marcello Mathias Herreshoff I don’t think I actually understand the last paragraph in your explanation. Feel like elaborating?
Marcello Herreshoff: Consider the driver’s license example. If we suppose 1/1000 of people are identity thieves carrying perfect driver’s license forgeries (of randomly selected victims), then there is absolutely nothing you can do (using driver’s licenses alone) to get your level of certainty that the person you’re talking to is Mark Xu above 99.9%, because the evidence you can access can’t separate the real Mark Xu from a potential impersonator. That’s the flavor of effect the first sentence of the last paragraph was trying to point at.
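[Marcello’s ceiling can be made numeric. A minimal sketch of the forgery scenario, collapsing the “randomly selected victims” detail into the 1/1000 base rate:]

```python
# If 1/1000 of license-presenters carry perfect forgeries, the license
# evidence cannot separate the two hypotheses, so the posterior is capped
# at 0.999 no matter how convincing the license looks.
p_thief = 1e-3                 # base rate of perfect-forgery identity thieves
p_license_real = 1.0           # a real Mark Xu shows a valid-looking license
p_license_thief = 1.0          # a perfect forgery looks identical
posterior = (p_license_real * (1 - p_thief)) / (
    p_license_real * (1 - p_thief) + p_license_thief * p_thief
)
print(posterior)               # 0.999 — the ceiling Marcello describes
```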