I doubt porn-related child abuse is growing.

NCMEC says that reports of child porn are growing, but that growth could easily come from more reports per posting, more postings per image, or more images per activity, rather than from more underlying activity. NCMEC just *counts* reports, which are either a member of the public clicking a “report” button or an algorithm finding suspicious content. They acknowledge that a significant part of the rise is from broader deployment of such algorithms.
Similarly, the fraction of porn-producing activities which involve traumatic abuse is unclear, and likely declining, judging by common anecdotes of sexual teenage selfies. I realize anecdotes are weak evidence at best, but producing such images is becoming easier, and puberty ages are dropping, so I’ll stand by my weak claim.
NCMEC cites IWF as saying that “28% of CSAI images involve rape and sexual torture”, but I cannot find a matching statement in IWF’s report. The closest I find is “28% of these reports [from members of the public] correctly identified child sexual abuse images,” but IWF seems to regard any sexualized imagery of an under-18-year-old as “abuse”, even if no other person is involved.
In any case, the IWF report is from 2016 and clearly states that “self-produced content” is increasing, and the share of content which involves children under 10 is decreasing (10 is an awkward age to draw a line at, but it’s the one they reported on). Likely these trends continued into 2018.
On the meta level, I note that NCMEC and IWF are both organizations whose existence depends on the perceived severity of internet child porn problems, and NYT’s business model depends on general dislike of the internet. I don’t suspect any of these organizations of outright fraud, but I doubt they’ve been entirely honest either.
> NCMEC says that reports of child porn are growing, but that growth could easily come from more reports per posting, more postings per image, or more images per activity, rather than from more underlying activity. NCMEC just *counts* reports, which are either a member of the public clicking a “report” button or an algorithm finding suspicious content. They acknowledge that a significant part of the rise is from broader deployment of such algorithms.
Good point. I wonder:
- Did algorithm deployment expand a lot from 2014 to 2018? (I’m particularly boggled by the 18x increase in reports between 2014 and 2018.)
- What fraction of the increase seems reasonable to explain away by changes in reporting methods? (A quick arithmetic sketch follows this list.)
  - About half? (i.e. the remaining 2014-to-2018 increase to be explained is 9x)
  - 75%? (i.e. the remaining 2014-to-2018 increase to be explained is ~4.5x)
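For concreteness, a minimal sketch of that arithmetic (assuming the 18x figure, and treating “X% explained away” as scaling down the whole 2014-to-2018 ratio, which is how the 9x and ~4.5x figures above are derived):

```python
def remaining_multiplier(total_multiplier: float, fraction_explained: float) -> float:
    """Growth left to explain after attributing a fraction of it to changes
    in reporting methods (assumed reading: the fraction scales down the
    whole 2014-to-2018 ratio)."""
    return total_multiplier * (1.0 - fraction_explained)

for fraction in (0.5, 0.75):
    left = remaining_multiplier(18.0, fraction)
    print(f"{fraction:.0%} explained away -> {left:.1f}x left to explain")
# 50% explained away -> 9.0x left to explain
# 75% explained away -> 4.5x left to explain
```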
> A major contributor to the observed exponential growth is the rise of proactive, automated detection efforts by ESPs [electronic service providers], as shown in Figure 3. Since then, reporting by ESPs increased an average of 101% year-over-year, likely due to increasing user bases and an influx of user-generated content. While automated detection solutions help ESPs scale their protections, law enforcement and NCMEC analysts currently contend with the deluge of reports in a non-automated fashion, as they are required to manually review the reports.
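A rough cross-check of that figure against the question above (assuming the 101% year-over-year rate applied uniformly across 2014–2018): compounding it over four year-steps already lands in the same ballpark as the 18x increase.

```python
# Compound the quoted 101% year-over-year growth over the four
# year-steps from 2014 to 2018 (assumes a uniform growth rate).
yoy_growth = 1.01                  # 101% growth per year, i.e. a 2.01x annual multiplier
steps = 2018 - 2014                # four year-over-year steps
multiplier = (1.0 + yoy_growth) ** steps
print(f"Implied 2014-to-2018 multiplier: {multiplier:.1f}x")
# Implied 2014-to-2018 multiplier: 16.3x (vs the ~18x observed)
```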