Driverless cars, Facebook and control: How fear breeds overreaction

By Tyler Cowen

The world has now seen its first death by driverless car, in Tempe, Arizona, as a pedestrian was struck by an Uber vehicle. This is a big news story, even though about 40,000 Americans die in motor vehicle accidents each year. We’re learning that this technological age, which seemed to start with the pretty innocuous act of emailing, has evolved to play into our more primal fears about feeling out of control.

Stories about how individual people die, or nearly die, are captivating. If a child is trapped down a well, the world will watch for days and spend whatever is needed to pull off a rescue. This is because we elevate highly visible deaths over harder-to-see deaths: Notice the news coverage of ISIS beheadings of Western journalists, victims of mass school shootings, and the recent attempted assassination of a former Russian double agent in the U.K. We should not, however, see our hurried responses as overreaction. Too often the natural human tendency is not to respond decisively to a tragedy at all. If the drama of these deaths mobilizes us, that may be needed to help fix the underlying problem.

The rush to respond is more problematic, however, when the visible deaths spring from a new technology. The first American citizen to be assassinated by a mini-drone on U.S. soil will be a big, big story. There is already a viral fictional video about this possibility. We will indeed be mobilized to action, but we may respond by shunning or overregulating drones, even if such action is unlikely to limit terrorist attacks.

Part of the problem with drones is that we don’t like feeling out of control, and the newness of the technology gives the story extra legs. We think we can protect our lives against many kinds of risks, perhaps irrationally to some extent, but how do you protect against being assassinated by a small, poison-equipped drone? Even if there are useful steps you can take, they are not obvious today. This seems like a scary possibility attached to a dystopian future, even if the number of deaths by drone assassination never approaches the number of handgun suicides.

Like drones, driverless cars possess some features of an especially potent scare story. They are a new and exciting technology, and so stories about them get a lot of clicks. We don’t actually know how safe they are, and that uncertainty will spook people above and beyond whatever the particular level of risk turns out to be. Most of all, driverless cars by definition involve humans not feeling in direct control. It resembles how many people feel they are in greater danger when flying than when driving a car, even though flying is usually safer. Driverless cars raise a lot of questions about driver control: Should you be allowed to sleep in the back seat? Or must you stay at the wheel? Such questions focus our minds and feelings on the issue of control all the more.

The authorities have issued a preliminary judgment that the accident in Tempe was the result of an errant pedestrian and that it had nothing to do with the car in question being driverless. Still, if you think that Americans process information about risk rationally, I would advise you to consider attitudes toward vaccines. Or consider how many people develop diabetes, partially through bad lifestyle decisions, and then fail to treat the condition properly, even with their lives on the line.

As for the accident, the result so far is that Uber has suspended all tests of its autonomous vehicles on American roads. We all know that another driverless car accident next week, no matter what the context, could create a media frenzy and a regulatory backlash, for better or for worse.

The recent brouhaha over Facebook and Cambridge Analytica reflects some similar issues. Could most Americans clearly and correctly articulate exactly what went wrong in this episode? Probably not, but people do know that when it comes to social networks, their personal data and algorithms, they don’t exactly feel in control. The murkiness of the events and legal obligations is, in fact, part of the problem.

When I see a new story or criticism about the tech world, I no longer ask whether the tech companies poll as being popular (they do). I instead wonder whether voters feel in control in a world with North Korean nuclear weapons, an erratic American president, and algorithms everywhere. They don’t. Haven’t you wondered why articles about robots putting us all out of work are so popular during a time of full employment?

We are about to enter a new meta-narrative for American society, which I call “re-establishing the feeling of control.” Unfortunately, when you pursue the feeling rather than the actual control, you often end up with neither.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Tyler Cowen is a Bloomberg View columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. His books include “The Complacent Class: The Self-Defeating Quest for the American Dream.”

This article was originally published on Bloomberg Quint.