Epistemic status: I read pretty quickly, so some of my points or numbers could be way off.
Weakly upvoted; I love these posts, but I thought this one was less successful than the first. I'm not sure what was happening internally, but some of your statistical claims look shakier than I think they would if you were reading and responding to a story you felt neutral about.
LOL, Manhattan Institute, checks out.
[...]
I think I've just slipped into soldier mindset at this point but I have to say, I feel like I'm being very open and rational! The disparity between the claims and the evidence seems so obvious!
One useful rule for these posts (which I really like!) would be that if you feel yourself getting annoyed by someone's citing a source, the proper reaction seems like either "I'm going to assume this source is saying something true" or "I'm going to look at the source and figure out whether something is false". In this case, it felt like your mind did a sneaky "aha, this organization probably said something untrue" without checking the data.
Of course, some organizations really do say a lot of untrue things, and this may be more prevalent the further out you go on the political spectrum (in both directions). But these data checks are still good practice, and if an org lies a lot, the checks may not take very long.
Some of the biggest shifts to my own worldview have been when I did "epistemic spot-checking" on claims I was sure were exaggerated or fake, only to find that they really did seem fair. You never know when one of those might hit you, and once you're in "LOL, checks out" mode, you're probably no longer open to them.
(Massive credit for typing "LOL, checks out" and acknowledging that was going through your head; that's a big part of the battle! Er, the scouting mission.)
Also, "Mental health-related calls accounted for 22% of cases in which on-duty police used lethal force and killed someone, according to data from 2009 to 2012 from 17 states where data was available." Funny how the author doesn't mention this when claiming that police respond well to MH calls!
In order to claim this statistic as evidence that police handle these calls badly, it feels like you need (a) some sense of the absolute numbers (22% of what?), and (b) a sense of how "mental health-related calls" are defined. What fraction of these calls involve an in-progress assault, or someone brandishing a weapon, vs. an unarmed person who isn't seriously threatening anyone? The NPR article hints that few such calls involve this kind of danger, but I could imagine a world where:
There are 500,000 such calls per year in the US
90% of them are nonthreatening, 10% are threatening (50,000)
There are 1,000 police killings per year, 22% of which happen in mental health incidents (220)
In this set of imaginary calculations, the chance of a person being killed when the police deal with a threatening mental health event is ~1/200. This doesn't seem unreasonable if these incidents almost always involve weapons, assault, or some other form of threatening behavior (though again, I don't know what the actual numbers are).
That said, I'm also leaving out cases of serious injury, etc., which could make police numbers look much worse.
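To keep my own arithmetic honest, here's the toy calculation above spelled out. Every number is made up, as stated; none are real statistics:

```python
# Toy numbers from the hypothetical above -- none of these are real statistics.
calls_per_year = 500_000                        # imaginary annual US mental-health calls
threatening_calls = int(calls_per_year * 0.10)  # assume 10% are threatening -> 50,000
police_killings = 1_000                         # imaginary annual police killings
mh_killings = int(police_killings * 0.22)       # 22% in mental-health incidents -> 220

# Chance of a killing per *threatening* mental-health call, under these assumptions
print(f"{mh_killings} / {threatening_calls} ~= 1 in {round(threatening_calls / mh_killings)}")
```

With these inputs the rate comes out to about 1 in 227, which is where the rough "~1/200" figure comes from.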
I say this as someone who's really excited about actually trying new things like B-HEARD, and who likes the idea of "experts dealing with the things they're good at" all over society. But I'd want to know more before I concluded that overall, the police are really bad at this particular thing.
Funny that the author doesn't mention that "One of the major concerns going into the program was the safety of the first responders, but so far the program has only called for NYPD backup seven times. On the other hand, the city said, the NYPD has called in B-HEARD teams 14 times after finding police services weren't needed." (emphasis mine)
Maybe I'm not reading correctly, but it seems like B-HEARD called the police in 7/107 instances, while the NYPD called in B-HEARD in 14/X instances. What is X? If it's 393 (500 mental health calls minus the 107 taken by B-HEARD), then the police call in B-HEARD less often, per case, than B-HEARD calls in the cops.
This doesn't mean B-HEARD is a bad idea or anything; as you point out, people might be poorly calibrated on what they should be handling. But it doesn't seem to warrant an "on the other hand" that implies B-HEARD helps the police more often than they receive help.
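To make the rate comparison concrete, here's the arithmetic with my guessed X = 393 (again, a guess; the article doesn't report the police-side denominator):

```python
# B-HEARD -> NYPD referral rate (reported): 7 backup calls out of 107 cases
b_heard_rate = 7 / 107
# NYPD -> B-HEARD referral rate; X = 393 is my guess (500 calls - 107 taken by B-HEARD)
nypd_rate = 14 / 393

print(f"B-HEARD called police in {b_heard_rate:.1%} of its cases")        # ~6.5%
print(f"Police called B-HEARD in {nypd_rate:.1%} of theirs (if X=393)")   # ~3.6%
```

Under that assumption, B-HEARD asks for police help roughly twice as often, per case, as the police ask for B-HEARD.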