As technology rapidly progresses, doomsday stories emerge just as quickly. Not a day passes without some prophecy about how technology will drive us to the depths of darkly-anonymized-crypto-fake-and-artificially-intelligent-blockchain-based mayhem.
Dissident voices, however, are trying to spread alternative messages. Over the past century, increasingly efficient technology and the advancement of knowledge have helped tackle global challenges at a scale never seen before: reducing extreme poverty, deaths from violent crime, childhood mortality and preventable diseases, while extending human life expectancy.
For obvious reasons, such breakthroughs can be considered among humanity’s greatest achievements; yet we also see how technology facilitates online disinformation, global cyberattacks and terrorist media campaigns of unprecedented scale that inspire thousands of would-be recruits.
“Technology is a force that takes what was once scarce and makes it abundant,” write Peter Diamandis and Steven Kotler in Abundance: The Future Is Better Than You Think. Truly, this is the cornerstone of the Fourth Industrial Revolution. But in a world of finite resources, when something becomes “abundant”, it is often at the expense of something else that becomes scarce.
The global criminal community vastly benefits from the exponential rise of technology and the promises of this revolution. Does the abundance of technology, by fuelling new criminal models, create scarcities in law enforcement capabilities? Elsewhere, does the abundance of data make information scarce? Does the multiplication of attention-mining interfaces make privacy scarce? Does the wide-scale adoption of end-to-end encryption and zero-knowledge models make attribution scarce? Does the proliferation of online fakes and massive disinformation make evidence scarce? Does the globalization of real-time communication make jurisdiction scarce?
All in all, does the abundance of technology create scarcities or opportunities for law enforcement?
Scarcity of veracity, abundance of fakes
As reported by Europol, the past decade has seen an explosion of data available to law enforcement, affecting the capacity of security services to comprehend and analyse this data at speed and scale. But the issue is no longer about finding the needle in the haystack; now, the challenge is to find the needle disguised as hay.
Trust in data is an essential condition of police and legal procedures. If we cannot evaluate the veracity of the data collected or the e-evidence seized, this shakes the very foundation of the judicial system. How can a court trust what is presented in a courtroom when both defendant and prosecution could produce digitally convincing yet different surveillance-camera records, extracts of social media activities or differing logs of GPS coordinates about the very same occurrence? The convergence of exponential technologies is expediting the emergence of a major threat: a scarcity of data veracity, and thereby, scarcity in trust.
In late 2017, a community of developers posted an experiment on Reddit under the codename DeepFake. They were proud of their latest success: tampering with porn videos and replacing the faces of sex performers with those of Hollywood stars. Since then, the deepfake community has grown massively, and new technological breakthroughs, such as Samsung’s artificial intelligence technology, have emerged to make deepfakes ever easier to create.
This is but one example demonstrating the power of the abundance of converging technologies, particularly when combined with a decentralized network of talent and cheap access to algorithmic capacity and computational power. The public release of an AI algorithm (TensorFlow), the massive availability of personal data, the sudden connectivity of an organic community of innovators and the extensive production of free knowledge all converged to give rise to a business model with the potential to threaten data veracity on a global scale.
Tampering with data veracity goes beyond AI-supported media creation; generating fakes would be nothing without the ability to introduce them to the public. This happens not only via the generation and distribution of fake narratives (via text, media and targeted dissemination tactics) but also via citizen profiling. It’s precision disinformation influencing citizen beliefs or behaviour through targeted fakes.
What can law enforcement do?
As extensive availability of data, convergence of AI and large-scale profiling help to generate fakes, law enforcement should look carefully into the data they hold. Investing in AI to analyse historical data harvested in the pre-deepfake and pre-fake-news era would generate patterns and signatures against which potential manipulation could be assessed. Various industries are currently trying to make the best use of dark data, in particular getting valuable information from historical data. In a world in which newly generated data is prone to manipulation, historical datasets, for the most part manually collected, are more valuable in terms of veracity.
Another way to use AI would be to mimic the existing best practices for generating fakes. It is often said that the porn and entertainment industries are the forerunners of media technologies. While porn fans were the first to invest in deepfakes, Hollywood is already generating fake actors convincing enough to fool the human eye, notably through the use of generative adversarial networks (GANs). The concept pits two AIs against each other: one generates the fake, and the other evaluates its quality. Law enforcement could invest in a similar process of generation and detection.
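The adversarial idea can be reduced to a toy numeric game. This is a minimal, hypothetical sketch, not a real GAN: actual GANs jointly train two neural networks, whereas here only the “forger” adapts against a fixed statistical detector, to show how a generator drifts toward whatever the detector accepts.

```python
import random
import statistics

random.seed(0)

# "Real" data the detector has learned from.
real = [random.gauss(5.0, 1.0) for _ in range(1000)]

detector_mean = statistics.mean(real)  # detector's model of "real" data
forger_mean = 0.0                      # forger starts far from real

def detection_rate(samples):
    # Detector flags samples more than 2 units from its learned mean.
    return sum(abs(x - detector_mean) > 2.0 for x in samples) / len(samples)

fakes = [random.gauss(forger_mean, 1.0) for _ in range(100)]
print(f"initial detection rate: {detection_rate(fakes):.2f}")  # nearly all flagged

for _ in range(200):
    # Forger nudges its output toward the region the detector accepts.
    forger_mean += 0.1 * (detector_mean - forger_mean)

fakes = [random.gauss(forger_mean, 1.0) for _ in range(100)]
print(f"final detection rate: {detection_rate(fakes):.2f}")  # fakes now pass
```

The same dynamic cuts both ways: a detector that keeps retraining on the forger’s latest output is the “detection” half of the process the article suggests law enforcement invest in.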
Further investment in blockchain technology, which was created to ensure transparency and traceability of data, could also be beneficial. One of the objectives of blockchain-based services is to create a collaborative ledger of truth with which it is virtually impossible to tamper (at least not until quantum computing goes mainstream). This is at the core of self-sovereign identity, a potential global response to identity theft or the generation of fake identities. The same technology could help law enforcement restore potentially altered data to a verifiable original state.
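The tamper-evidence property rests on hash chaining: each entry commits to the hash of the one before it, so altering any record breaks every later link. A minimal sketch, assuming a single-party evidence log (real blockchain systems add signatures, consensus and distribution across many parties):

```python
import hashlib
import json

def block_hash(record, prev_hash):
    # Deterministic hash over the record and the previous block's hash.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(record, prev_hash):
    return {"record": record, "prev": prev_hash,
            "hash": block_hash(record, prev_hash)}

def verify(chain):
    # Recompute each block's hash and check the backward links.
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for record in ["camera frame A", "gps log B", "chat export C"]:
    chain.append(make_block(record, prev))
    prev = chain[-1]["hash"]

print(verify(chain))           # True: ledger is intact
chain[1]["record"] = "edited"  # tamper with one record...
print(verify(chain))           # False: the recomputed hash no longer matches
```

An investigator cannot silently “revert” data this way, but can prove whether seized data still matches the state originally committed to the ledger.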
Knowledge is critical
Law enforcement must develop a profound understanding of how citizens are vulnerable to fakes. Just as fighting burglaries, theft or illicit drugs requires an understanding of how criminals target their market, countering fakes requires a good comprehension of the cognitive vulnerabilities of the public. Law enforcement needs to develop knowledge about what type of data is harvested by smart-city sensors, the Internet of Things, social media and self-driving cars, among others, and how this data could be weaponized against the public by criminal groups.
The threat to data veracity stems from a convergence of disruptive technologies and the massive digitalization of services. Law enforcement can address this challenge through another convergence: the fusion of state-of-the-art technological literacy and deep understanding of data bias. This will allow law enforcement not only to understand the abundance of technology but also how to transform data into novel policing solutions and actions to benefit the community it serves.
Stéphane Duguin, Head of the European Union Internet Referral Unit, Europol
This article was originally published by the World Economic Forum.