by Elton Gomes
Strengthening its fight against child abuse images around the web, Google plans to launch an AI toolkit to help organisations identify and report child sexual abuse material (CSAM). Google announced that the AI technology will act as an add-on to existing tools, supporting service providers, NGOs, and other companies that monitor disturbing content.
Using the company’s expertise in machine vision, Google’s AI toolkit will assist human moderators by sorting flagged images and videos and “prioritizing the most likely CSAM content for review.” This should shorten the review process. Google said that, in one trial, the AI tool helped a moderator “take action on 700 percent more CSAM content over the same time period,” The Verge reported.
How does the toolkit function?
Google’s toolkit will use deep neural networks to process images in such a way that fewer people should be exposed to them, thereby lowering the need for human moderators. This technique can help reviewers spot more content in less time. “Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” engineering lead Nikola Todorovic and product manager Abhi Chaudhuri said in Google’s blog post, CNET reported.
Deep neural networks scan through a multitude of images for abusive content and they prioritize the most likely candidates for review. The method makes it possible to dramatically increase the number of responses and reduce the number of people who have to look at child abuse images.
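The triage idea described above can be sketched in a few lines: a classifier assigns each flagged item a score, and a priority queue surfaces the highest-scoring items for human review first. This is an illustrative sketch only, using made-up item IDs and a stand-in scoring function; Google's actual models and Content Safety API are not public, so nothing here reflects their real interface.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedItem:
    priority: float
    item_id: str = field(compare=False)

def classify(item_id: str) -> float:
    """Stand-in for a deep-network score in [0, 1]; higher = more likely a match.

    Real systems would run an image classifier here; these fixed scores
    exist purely to make the example runnable.
    """
    return {"img_001": 0.12, "img_002": 0.97, "img_003": 0.55}.get(item_id, 0.0)

def build_review_queue(item_ids):
    """Rank flagged items so the highest-scoring one is reviewed first."""
    # heapq is a min-heap, so negate scores to pop the largest score first.
    heap = [FlaggedItem(-classify(i), i) for i in item_ids]
    heapq.heapify(heap)
    return heap

def next_for_review(heap):
    """Pop the most likely candidate and return (item_id, score)."""
    item = heapq.heappop(heap)
    return item.item_id, -item.priority

queue = build_review_queue(["img_001", "img_002", "img_003"])
print(next_for_review(queue))  # the 0.97-scored item comes up first
```

The point of the ordering is the one the article makes: reviewers spend their limited time on the items most likely to matter, so more content is actioned in the same period and fewer people need to see low-likelihood material.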
How can institutions acquire the kit?
The tool will be free for corporate partners and NGOs via Google’s Content Safety API. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it,” Google said in its blog post, CNET reported.
Tech firms told to keep child abuse in check
UK Home Secretary Sajid Javid said that the British government was planning a major crackdown on child abuse online. Javid urged tech companies to come up with stringent steps to combat child sexual abuse. In a speech at the London headquarters of the National Society for the Prevention of Cruelty to Children (NSPCC), Javid said that it was his “personal mission” to tackle crimes against children, such as online grooming and live-streaming of sexual acts involving minors.
Javid said that technology firms are not taking online child sexual abuse seriously. The home secretary announced an additional £21.5 million to help investigators, who said that they are facing a “constant uphill struggle” to track down offenders. Javid’s “call to action” comes after the National Crime Agency (NCA) revealed that up to 80,000 people in the UK present some kind of sexual threat to children online.
The NCA said on Sunday that more than 130 suspects, including a former police officer and five teachers, were arrested in a recent crackdown on online child sexual abuse offenders over the course of one week in July. Of the 130 arrested, 13 were registered sex offenders, while 19 others held positions of trust. The NCA said it had received 82,109 referrals for child sexual abuse images from social media companies in 2017, a 700 percent rise since 2012.
According to a 2017 IANS report, one in every two children in India is a victim of child sexual abuse. It was also revealed that one in every five does not feel safe due to the fear of being sexually abused.
How can AI reduce child abuse?
AI is being heavily relied on to counter child sexual abuse online. Joelle Casteix, a renowned child rights advocate, harnessed AI to create Project G, a tool that spots risk factors of predatory behaviour, not only in people who victimize children but also in those linked with the cover-up of sexual exploitation. Casteix said, “Today, with the tool we are aware of various behaviours and also use AI to find patterns that we have never thought possible to protect more and more children from abuse. It’s an amazing and fascinating tool, because it helps show us what a predator looks like,” Vanguard reported.
Elton Gomes is a staff writer at Qrius