By Hilary Sutcliffe and Conrad Kameke
Hilary Sutcliffe is the director of SocietyInside. Conrad Kameke is head of strategy at BioInnovators Europe.
On Sunday 18 March, in Tempe, Arizona, the first pedestrian was killed by a self-driving car.
The circumstances of the accident have prompted a rapid reassessment of the safety of the cars in most countries where they are being trialled, with many tests suspended. The event has shaken public trust in self-driving cars and their safety, and triggered increased scrutiny of the technology and its claims.
Unusually, in this particular instance it is the process and design of the technology's governance that is receiving the most attention. In particular, the differing approaches of US states to balancing the promotion of technological innovation with ensuring public safety have brought the role of governance, and its priorities and trade-offs, into the spotlight.
The worldwide reverberations from governance decisions in a single US city illustrate the potential for significant negative impact, not just on individuals but on whole sectors and even entire technologies, when governance is suddenly no longer seen as trustworthy.
In response to the accident, Arizona suspended testing of the cars by Uber, the company involved, while other manufacturers, such as Toyota, have voluntarily suspended their road trials of self-driving cars.
Carmaker BMW’s chief engineer suggested that no automaker or technology company currently has sensor and computer systems advanced enough to ensure adequate safety in such urban settings, and questioned the pace of the technology's development in the US compared with Europe, which he argues takes a safer, less reckless approach.
The event has also galvanised public opinion, with polarisation quickly forming. A group of institutions focused on highway and auto safety is calling for a moratorium to stop companies conducting ‘beta-testing on public roads with families as unwitting crash-test dummies’; others cite the many lives lost to driver error, which autonomous vehicles could in theory save, as a rationale for a lighter regulatory touch, particularly in the US federal regulation currently under consideration.
As information about governance trade-offs and design flaws becomes clearer and the positions of the different groups harden, it seems likely that trust in the governance regimes will become the decisive factor in this debate. The outcomes, in Arizona and elsewhere, seem likely to have important repercussions for the effective deployment of self-driving cars across the world.
Trust is not normally a concept put at the heart of governance design. We propose that it should be. If notions of earning trust and a focus on trustworthiness were part of governance design efforts and procedures, would that make any difference to its design and effectiveness?
Would governance be more deserving of our trust, and would it actually earn greater trust from policymakers and society at large?
We will explore these important questions in a new project, which aims to develop a series of Principles for Trust in Technology Governance. These principles would help support those crafting governance mechanisms as well as stakeholders participating in their design and implementation. The project would enable better understanding of how such a focus could inform governance design in the future by taking into account the underlying mechanisms of earning or losing trust.
This is fundamentally not about finding ways of facilitating or inhibiting the introduction of technologies into the market; it is about designing governance that increases the chance of earning political and societal trust. This is regardless of whether, for example, risk assessment and risk-management decisions under this governance result in unrestricted market access, market restrictions, or even prohibitions of a technology or its respective products.
While self-driving cars are a focus of media interest at the moment, the history of technology introductions, from GMOs to food irradiation, nanotechnologies to artificial intelligence shows us that trust in the governance of technology is of paramount importance. Greater understanding of the mechanisms of the earning and losing of that trust is of critical value to all stakeholders involved in governance design – from regulators to politicians, NGOs, industry, and the scientific community – and both directly and indirectly to society at large.