In August, Apple announced a new tool called neutralMatch. It’s an algorithm that will be installed on iPhones to scan images uploaded from an iPhone to Apple’s cloud storage, iCloud.
It’s designed to detect people who are sharing child sexual abuse material, or CSAM. If an uploaded image matches an image in an existing database of known child abuse material, it will be flagged by the system and manually reviewed by Apple staff to confirm whether it is, in fact, CSAM.
If there’s a positive match, Apple will disable the account and report it to the National Center for Missing and Exploited Children (NCMEC) in the US, where the new tool is being trialled.
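Detection systems of this kind follow a simple pattern: compute a fingerprint (a perceptual hash) of each uploaded image, compare it against a database of fingerprints of known abuse images, and escalate an account to human review only after a threshold number of matches. The sketch below is an illustrative simplification, not Apple’s actual implementation (which reportedly uses a neural perceptual hash and cryptographic private set intersection); the function names, sample bytes, and threshold value are all hypothetical.

```python
import hashlib

# Matches required before an account is escalated to human review
# (illustrative value only).
MATCH_THRESHOLD = 30


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. A real perceptual hash is robust
    to resizing and re-encoding; SHA-256 is used here only for brevity."""
    return hashlib.sha256(image_bytes).hexdigest()


# Database of fingerprints of known abuse images (illustrative bytes).
KNOWN_HASHES = {fingerprint(b"known-illegal-image")}


def scan_upload(match_count: int, image_bytes: bytes) -> tuple[int, bool]:
    """Compare one upload against the database. Returns the updated
    per-account match count and whether to escalate to manual review."""
    if fingerprint(image_bytes) in KNOWN_HASHES:
        match_count += 1
    return match_count, match_count >= MATCH_THRESHOLD
```

The threshold is what separates this design from blanket surveillance in principle: no single match triggers a report, and human reviewers confirm matches before any account is disabled.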
It’s not designed to target iPhone users who might take photos of their baby or kids in a bathtub and then save them to iCloud – although we should all be wary of doing this.
Rather, the tech giant says it wants to “protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM)”, and that neutralMatch is designed to “provide valuable information to law enforcement on collections of CSAM in iCloud Photos”.
While these aspirations are legitimate and positive, concerns relating to censorship and privacy invasion have been raised, and Apple this month decided to delay the introduction of neutralMatch.
The question here is the extent to which we should agree to Apple, or other technology companies, taking on the role of online police. Could this power to review personal content be used to impede privacy and freedom of speech, or even to suppress certain political views?
Co-production is essential
In the digital world, governments alone are no longer capable of combating cybercrime, whether cyber-enabled crime (such as online child exploitation and online fraud) or cyber-dependent crime (such as hacking).
The private sector is playing a significant role in cybercrime prevention and investigation, not least because it’s private-sector companies that hold most of the relevant data and information.
This means technology companies such as Apple and Facebook are facing growing public pressure to take more responsibility for preventing cybercrime and to assist in criminal investigations – covering not only the spread of CSAM, but also hate speech and terrorism. In 2020, Apple was criticised for providing criminals with a safe haven after it rejected an FBI request to unlock two iPhones.
From supporting genocide to contributing to surveillance
As of June, Facebook has roughly 2.9 billion users. It’s no longer just a private social media platform, but one that can significantly shape developments around the world, for better or worse.
Facebook suspended Donald Trump’s account following the storming of the United States Capitol Complex in Washington DC on 6 January this year.
But during the Rakhine/Rohingya crisis in Myanmar in 2017, Facebook was criticised for supporting genocide, as it didn’t put enough effort into removing hate speech.
The UN Report of the independent international fact-finding mission on Myanmar (A/HRC/39/64) criticised it as being “slow and ineffective” in responding to hate speech. Since then, Facebook has strengthened its response by hiring more Myanmar language experts, and banning Myanmar hate figures and organisations from the platform.
While there are growing calls for social media and tech companies to use their platforms to help reduce cybercrime, there’s also a concern we’re granting them too much power in the policing of cyberspace, which might amount to censorship and surveillance. Apple’s neutralMatch is a good example of this tension.
Although its design today focuses on CSAM, the algorithm could easily be adapted to review other content. This is the type of tool many governments, especially irresponsible ones, would like to have.
It’s a potential threat to privacy, freedom of speech, and civil society.
A salutary lesson for Apple and neutralMatch is Pegasus, spyware used to “collect data from the mobile devices of specific individuals, suspected to be involved in serious crime and terror”. NSO, the Israeli firm that developed Pegasus, claimed it sold the tool only to “responsible” governments, but it’s been found to have been used to collect information and data on politicians, journalists, and activists.
Once spyware has been developed, it’s hard to control how it will be used. And in some countries, social media and technology companies will need to comply with government requirements to collect data and/or censor specific content, including political content.
Facebook’s operations against fake news and hate speech have already raised censorship concerns. In Taiwan, it’s recently been reported that posts on sensitive political topics such as Taiwan independence, Hong Kong, and China–India relations are likely to have been censored and taken down.
Balancing safety and security
As technology advances, cybercrime investigation is becoming increasingly difficult, and for states and the private sector, the use of spyware in crime investigation is becoming almost unavoidable.
This is of heightened concern where trust between the users of such spyware and the general public is weak. One way to build public trust might be for a state, or a tech company such as Facebook, to adopt a more transparent content-moderation process – for example, regularly releasing information and statistics on the data it collects.
Ideally, universal guidelines on the conduct of social media companies are needed to build trust and avoid the misuse of power.