Epistemic status: I read pretty quickly, so some of my points or numbers could be way off.
Weakly upvoted; I love these posts, but I thought this one was less successful than the first. I’m not sure what was happening internally, but some of your statistical claims look shakier than I think they would if you were reading + responding to a story you felt neutrally about.
> LOL, Manhattan Institute, checks out.

> [...]

> I think I’ve just slipped into soldier mindset at this point but I have to say, I feel like I’m being very open and rational! The disparity between the claims and the evidence seems so obvious!
One useful rule for these posts (which I really like!) would be that if you feel yourself getting annoyed by someone’s citing a source, the proper reaction seems like either “I’m going to assume this source is saying something true” or “I’m going to look at the source and figure out whether something is false”. In this case, it felt like your mind did a sneaky “aha, this organization probably said something untrue” without checking the data.
Of course, some organizations really do say a lot of untrue things, and this may be more prevalent the further out you go on the political spectrum (in both directions). But these data checks are still good practice — and if an org lies a lot, the checks may not take very long.
Some of the biggest shifts to my own worldview have been when I did “epistemic spot-checking” on claims I was sure were exaggerated or fake, only to find that they really did seem fair. You never know when one of those might hit you, and once you’re in “LOL, checks out” mode, you’re probably no longer open to them.
(Massive credit for typing “LOL, checks out” and acknowledging that was going through your head — that’s a big part of the battle! Er, the scouting mission.)
> Also, “Mental health-related calls accounted for 22% of cases in which on-duty police used lethal force and killed someone, according to data from 2009 to 2012 from 17 states where data was available.” Funny how the author doesn’t mention this when claiming that police respond well to MH calls!
In order to claim this statistic as evidence that police handle these calls badly, it feels like you need (a) some sense of the absolute numbers (22% of what?), and (b) a sense of how “mental health-related calls” are defined. What fraction of these calls involve an in-progress assault, or someone brandishing a weapon, vs. an unarmed person who isn’t seriously threatening anyone? The NPR article hints that few such calls involve this kind of danger, but I could imagine a world where:
- There are 500,000 such calls per year in the US
- 90% of them are nonthreatening, 10% are threatening (50,000)
- There are 1,000 police killings per year, 22% of which happen in mental health incidents (220)
In this set of imaginary calculations, the chance of a person being killed when the police deal with a threatening mental health event is 220/50,000, or roughly 1 in 230. This doesn’t seem unreasonable if these incidents almost always involve weapons/assault/some other form of threatening behavior (though again, I don’t know what the actual numbers are).
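To make the toy arithmetic fully explicit, here’s a minimal sketch in Python; every input is an invented illustration number (as above), not a real statistic:

```python
# Toy numbers only -- invented for illustration, not real statistics.
calls_per_year = 500_000          # hypothetical MH-related calls per year in the US
threatening_share = 0.10          # hypothetical fraction involving real danger
police_killings_per_year = 1_000  # rough order-of-magnitude figure
mh_share_of_killings = 0.22       # the 22% figure from the NPR article

threatening_calls = calls_per_year * threatening_share         # 50,000
mh_killings = police_killings_per_year * mh_share_of_killings  # 220

print(f"~1 killing per {threatening_calls / mh_killings:.0f} threatening MH calls")
# -> ~1 killing per 227 threatening MH calls
```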
That said, I’m also leaving out cases of serious injury, etc., which could make police numbers look much worse.
I say this as someone who’s really excited about actually trying new things like B-HEARD, and who likes the idea of “experts dealing with the things they’re good at” all over society. But I’d want to know more before I concluded that overall, the police are really bad at this particular thing.
> Funny that the author doesn’t mention that “One of the major concerns going into the program was the safety of the first responders, but so far the program has only called for NYPD backup seven times. *On the other hand*, the city said, the NYPD has called in B-HEARD teams 14 times after finding police services weren’t needed.” (emphasis mine)
Maybe I’m not reading correctly, but it seems like B-HEARD called the police for 7/107 instances, while the NYPD called in B-HEARD in 14/X instances. What is X? If it’s 393 (500 mental health calls − 107 taken by B-HEARD), then the police call in B-HEARD less often, per case, than B-HEARD calls in the cops.
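Spelling out that comparison (a sketch; the 7, 14, and 107 are from the article, while 393 is just my guess at X, per the subtraction above):

```python
# Referral rates in each direction. 107 and 7 come from the article;
# 393 is an assumed value for X (500 total MH calls - 107 taken by B-HEARD).
bheard_cases = 107
bheard_called_police = 7
police_cases_assumed = 500 - 107   # = 393
police_called_bheard = 14

print(f"B-HEARD -> NYPD: {bheard_called_police / bheard_cases:.1%}")         # 6.5%
print(f"NYPD -> B-HEARD: {police_called_bheard / police_cases_assumed:.1%}")  # 3.6%
```

If those assumptions hold, B-HEARD asks for police backup roughly twice as often, per case, as the police hand cases to B-HEARD.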
This doesn’t mean B-HEARD is a bad idea or anything — as you point out, people might be poorly calibrated on what they should be handling — but it doesn’t seem to warrant an “on the other hand” that implies B-HEARD helps the police more often than they receive help.