The Impact Of ‘Deepfake’ On Child Safety
By a Biometrica staffer
There’s no doubt that combating child sexual abuse material (CSAM) is a serious challenge for law enforcement officers, parents, caregivers, and others involved in or concerned about child safety and protection. Sample this: Of the 21.7+ million reports that the CyberTipline of the National Center for Missing and Exploited Children (NCMEC) received in 2020, apparent CSAM accounted for a significant portion. Those reports included 65.4 million files:
• 33,690,561 images
• 31,654,163 videos
• 120,590 other files
At Biometrica, we’ve written before about various aspects of CSAM and its impact on the well-being of children. Recently, for instance, we wrote about how the PROTECT Act aims to safeguard children from sexual exploitation. We also wrote about the growth in sentencing in the U.S. for those producing CSAM: between 2005 and 2019, the number of CSAM production offenders sentenced rose 422%, from 98 to 512.
Child protection is what drives us at Biometrica. It is at the heart of why we do what we do, every single day: We want to use technology, including our data, software, and systems, to help build partnerships that make our communities stronger, our world safer, and a little more secure for the most vulnerable among us.
In this piece, we give you a brief introduction to how deepfake technology impacts the production of CSAM and, therefore, the lives of the young. What exactly is deepfake technology? Although the term has its origins in the production of sexual images and videos, the technology can also be used for other purposes. It has, however, “come to mean the use of AI to create synthetic media (images, audio, video) in which someone appears to be doing or saying what in reality they haven’t done or said,” according to a column in the Wall Street Journal by Kartik Hosanagar, a professor of technology and digital business at the Wharton School of the University of Pennsylvania.
Deepfake technology, much like any other technology, is not harmful by itself. But technology can only take on the role that the user assigns to it. To that effect, there have been instances where deepfake AI has been used in legal, positive ways, too. However, the technology’s potential as a tool for criminals of all stripes will likely overshadow those positive use cases. It could be used by unscrupulous individuals to pull off everything from financial fraud to identity theft to much worse crimes, including endangering the lives of children.
For example, in his column, Prof. Hosanagar describes how MIT Prof. Sinan Aral once discovered a video, which he had never recorded, in which he appeared to endorse an investment fund’s stock-trading algorithm. It wasn’t Prof. Aral in the video, but a deepfake created in his likeness using artificial intelligence (AI). With an increasing number of deepfake apps available for download, anyone can become a victim of such scams, Prof. Hosanagar adds. Compounding the problem, much like with other internet crimes, tracking down the criminals behind deepfake scams and illegal content will be challenging for law enforcement organizations across the world.
Needless to say, deepfake technology can easily be used by child sex predators to create CSAM images or videos based on the likeness of a real child, without anyone being any the wiser. As early as 2018, one in five police officers surveyed reported finding deepfakes in their investigations, and three-quarters of those surveyed believed the prevalence of deepfakes would increase in the future, according to Netherlands-based INHOPE, an organization whose vision is a world free of CSAM online.
“Deepfakes present unique challenges for victim identification. Technology can be used to obscure the face of the child in material depicting genuine abuse making identification much harder. In other cases, the face of a different child might be superimposed onto the original image meaning law enforcement waste time trying to identify the wrong child,” INHOPE adds on its website. The problem is only going to worsen as deepfake technology improves, a process that, according to some, is already well underway.
Just last week, a Florida senator who survived a traumatizing experience with CSAM filed a bill to tackle this issue. State Senator Lauren Book was sexually abused by her nanny for six years when she was a child, according to a news report. On Feb. 12, she started receiving messages and images on her phone from an unknown number, sent in an effort to intimidate, terrorize, and extort her and her family. Book had become a victim of deepfake image-based sexual abuse, digital hacking, and cyberstalking.
The bill she’s filed aims to transform the way Florida would prosecute what she terms ‘cyber-trafficking,’ in which the likenesses of predominantly women and girls, both stolen real images and created deepfakes alike, are uploaded to the darkest corners of the internet for people to buy, sell, trade, and use however else they see fit. It would strengthen Florida’s Revenge Porn Law by making it a felony to steal sexually explicit images from someone’s phone or other digital devices, and would also make disseminating altered or created sexually explicit images, known as deepfakes, a felony.
While a child may not be physically harmed by the creation of CSAM deepfakes, it is still a form of sexualization of the child. “In a recent research report by Red Barnet, the operators of the Danish hotline, they argue that placing everyday images of children in sexualizing context constitutes a violation of a child’s right to be protected from being made a sexual object – appearing in article 34 of the United Nations Convention on the Rights of the Child. This is regardless of whether the child finds out or not,” INHOPE adds.
Though the CSAM problem has undoubtedly worsened since the start of the Covid-19 pandemic, it long predates the virus. As one source put it, “[A]dults have been sexually exploiting children since the dawn of time under the guise of art in paintings, sketches, and other forms of portrayal.” With the invention of the camera in the 19th century, this category of material exploded, though it tended to remain in individual possession and was rarely published.
It was only with the advent of digital technology and the internet that the proliferation of CSAM grew out of control. Today, “there is virtually no commercial child pornography sold or produced” in the U.S.; that is, there is no longer a commercial market for it. Rather, the production, trade, and exchange of such material is criminal and usually happens between private individuals via the internet.