The National Center for Missing and Exploited Children received 18.4 million reports of child sexual abuse material in 2018. Because many of the reports contained more than one image or video, the total amounted to more than 45 million instances of child sexual abuse.
Content from the dark web, surprisingly, accounted for only a fraction of these reports, according to the New York Times. The majority of reports were filed by tech companies based in the United States, with nearly 12 million of them, roughly two-thirds, taking place over the unencrypted Facebook Messenger. While reports of child sexual abuse imagery occur on a global scale, the dispersal and detection of this material is largely grounded in Silicon Valley.
With so many reports originating on Facebook Messenger, the company’s recent decision to move to encryption is unnerving. Encryption protects digital communication, ensuring a private platform free from surveillance, which, at face value, is positive. We can look to Hong Kong as an example of why digital government surveillance poses a threat to freedom and democracy. Especially given Facebook’s history of sketchy privacy policies, encryption is an improvement to the platform’s security. However, considering that the company bans 250,000 WhatsApp accounts per month due to CSAI, strengthening its privacy protections could yield inadvertent, heinous consequences. Encryption would serve as a sort of cyber-sanctuary, making policing the platform, and consequently flagging CSAI, much more difficult.
With the rise of technology came the rise of CSAI circulation. In response, Congress introduced and passed the PROTECT Our Children Act of 2008, which instituted reporting requirements for tech companies, established strategic planning for the prevention and interdiction of CSAI, and allocated a $60 million budget to the Cyber Division. The law requires tech companies to report CSAI when they detect it; they are neither required to search for it nor held liable for abusive material that goes undiscovered. Companies can’t regulate what they don’t see, which might give them an incentive not to see it.
Much of content moderation relies on tech companies’ cooperation with law enforcement. The exponential growth of CSAI, even after the child protection law passed in 2008, can be blamed on the failures of both parties. About 1 in 10 Homeland Security agents is assigned to child exploitation cases, yet only about half of the designated $60 million annual budget for the cybercrimes unit has been allocated to the task force’s efforts; this year, the Department of Homeland Security diverted $6 million of the already underfunded cybercrimes budget to immigration enforcement.
According to the New York Times, the Justice Department has completed just two of the six required reports devised to compile data about CSAI and lay out strategies to eradicate it.
The Department of Justice has demonstrated that the growing issue of child abuse and exploitation is not a priority, shifting a large portion of prevention and interdiction responsibilities onto tech companies and the NCMEC. As Silicon Valley builds more impenetrable electronic service providers, detection of CSAI and online perpetrators will likely diminish. This is a breaking point. Relying on electronic service providers to report CSAI did contribute to cybercrime efforts, but we exist in a reality where that strategy alone is wildly inadequate.
Right now, encryption looks like a Gordian knot. Given the nuances of the PROTECT Our Children Act’s reporting requirements, it’s very plausible that tech companies will close their eyes to CSAI rather than determine methods to eliminate it. Encryption protects everyone except exploited children. I don’t believe the decision to enhance privacy online is a simple matter of weighing pros and cons, but Silicon Valley’s scope over online interaction means it should be thinking three steps ahead of digital predators.