Public apprehension about data privacy and security has skyrocketed in the wake of user data being leveraged for fake news, political meddling and data breaches.
It is by now an open secret of data surveillance that entities are gathering user data for purposes we rarely understand. Companies and political organisations alike mine our data in increasingly innovative ways to determine what we do, whom we know, and where we go. Data mining, and the subsequent customisation of our experience on the internet, has become so ubiquitous that we don’t even realise how our opinions, preferences and worldview are shaped by a handful of companies. For instance, remember that advertisement that piqued your interest while browsing your Facebook newsfeed yesterday? That’s because Facebook drew on your browsing and search activity across sites such as Google and Amazon, making the social media network seem like a mind reader.
While Facebook’s appetite for your personal data has come under scrutiny in the wake of the Cambridge Analytica scandal, it would be naïve of us to let Google and Amazon go scot-free.
How much do Google and Amazon know?
When asked what user data Google collects, Electronic Frontier Foundation Senior Staff Attorney Nate Cardozo replied, “Everything they can.” Google collects and usually monetises your location history, your search queries, your activity, your email content, your contacts, your documents on Google Drive, your YouTube watch history, your friends, and your photos. Its wide variety of services — Search, Gmail, Maps, Drive, YouTube, AdWords, Android, etc. — are all aimed at advertisement personalisation.
Google insists that it doesn’t sell your data, but keep in mind that Facebook didn’t technically sell personal data to Cambridge Analytica either; it merely allowed a researcher to use an app to gather the data, which was subsequently given to the company to influence the American presidential election and the Brexit vote.
In all fairness to Google, it has chosen to adopt a far more transparent approach towards informing its consumers about how their data is used. Since its 2009 Data Privacy Summit, it has doubled down on privacy, adopting Application Programming Interface (API) rules that prohibit the integration and resale of data to third-party companies and software.
However, as sociologist of technology Zeynep Tufekci tweeted, “If your business is building a massive surveillance machinery, the data will eventually be used & misused.”
In October 2018, Google made headlines when it pulled the plug on its social network Google+ following revelations of a bug that had exposed the data of up to 500,000 Google+ users since 2015. Moreover, as reported by the Associated Press, Google continues to store user location data on both Android and iOS devices, even when users explicitly pause its collection.
The primary thing to understand about Google is that it is neither a consumer software company, nor a search company; it is an advertisement company. Consequently, any information that Google collects about you is used to broker ad sales around the internet.
Amazon also collects a large amount of data from its users to provide “interest-based advertisements”. According to the Amazon Privacy Notice, the information collected by the company includes users’ addresses, phone numbers, credit and debit card information, IP addresses, and miscellaneous information about users’ location and mobile device, including a unique identifier for the device. This is in addition to the data that Alexa, Amazon’s digital assistant, stores in the form of transcripts of user interactions.
While Amazon has repeatedly claimed that it isn’t in the business of selling user data to others, that does not mean that user data is safe with the private company. Last month, the company admitted to a “technical error”, since fixed, that exposed the data of several of its customers. Despite being asked by numerous media organisations, the company refused to provide an actual number of people whose information was compromised.
In a more concerning development, Amazon quietly applied for a patent in October that would allow Alexa to decipher a user’s physical characteristics and emotional state based on their voice (including language accent, ethnic origin, emotion, gender and age). If this patent is approved, it would raise several ethical concerns related to racial profiling and discriminatory advertising.
These scenarios, despite their seeming implausibility, have already been realised by other companies. Netflix was allegedly targeting users on the basis of their race.
What can you do?
User data is thus not abstract or inconsequential metadata that is collected and used with users’ informed consent. In fact, as Jathan Sadowski, a researcher of smart cities at the University of Sydney, Australia, points out, “many common practices of data collection should actually be treated as a form of theft … [or] data appropriation – which means capturing data from people without consent and compensation”.
In such a bleak scenario, deleting your social media accounts may seem like the best route. However, that may not always be feasible. Here are a few steps that you can take to protect your data:
1. Prioritise privacy in your web browser: You may want to switch from Chrome to the Tor Browser, which prevents organisations from tracking you.
2. Try an alternative search engine: While this might seem to contradict the common adage of “Googling it”, it is necessary to recognise the algorithmic biases — rooted in socioeconomic metrics, politics and market dominance — that govern Google’s search results. As Christian Stewart chronicles from his experience, there are engines better suited to specific queries than Google. Some alternatives to try out include DuckDuckGo, Start Page and Search Encrypt.
3. Be mindful of, and regularly assess, your digital well-being: Exercise caution while sharing your data. Quartz lists a number of ways to assess the safety and privacy of your data.
4. Devote time to understanding your privacy settings: Spend time getting to know your privacy settings, even if it means spending a few extra minutes reading the user agreements and other related documentation. The burden of protecting your data from abuse shouldn’t be on you, but unfortunately, that’s where we are today.
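For the more technically inclined, switching search engines can even be scripted. As a minimal sketch (the function name is my own, and only the public DuckDuckGo search URL is assumed), here is how one might build a search URL that, unlike a Google query tied to your account, carries no user profile:

```python
from urllib.parse import urlencode

def duckduckgo_url(query: str) -> str:
    """Build a DuckDuckGo search URL for the given query.

    DuckDuckGo advertises that it does not tie queries to a
    user profile, so the URL itself is the whole request —
    no account, cookie or history is implied by it.
    """
    return "https://duckduckgo.com/?" + urlencode({"q": query})

# Example: open this URL in any browser to search privately.
print(duckduckgo_url("data privacy tips"))
```

Setting such a URL pattern as your browser’s default search engine takes only a minute in most browsers’ settings.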
Fortunately, global awareness about data privacy has been steadily rising. As a result, at least in the European Union, since May 25, 2018, the data of European citizens has been covered by the General Data Protection Regulation, which is explicitly designed to protect and empower all EU citizens on data privacy and to reshape the way organisations across the region approach the matter.
In India too, owing to the lacunae within the IT Act and an acute lack of legislation focusing on data privacy and protection, a committee of experts was tasked with identifying lapses and shortcomings of the current laws. This resulted in the Personal Data Protection Bill, 2018. The Bill lists provisions for special protection of personal data and an overall expansion of its definition to include Aadhaar number, transgender status, and political beliefs and affiliations. As a whole, the Bill takes cognisance of the many debates and issues surrounding data protection. Similarly, the issue of data localisation is expected to be resolved when the government opens the issue for public feedback.
A 2016 report by Oracle and MIT Technology Review argues that most technological companies across the world recognise “data as an asset”. In the wake of the technology start-up boom in India, it is only a matter of time before Indian companies follow suit. Treating data as a form of capital means that firms and organisations hoard, commodify, and monetise as much data as they possibly can. And these data banks can never be too big.
This argument is, to some extent, contested by the concept of “Data as Labour (DaL)”, first introduced by virtual reality pioneer Jaron Lanier in his 2013 book Who Owns the Future?. Lanier argued that in the current digital landscape, we should actually be getting paid for the data that we provide to big data companies, since it constitutes the real source of their profits; our online interactions should technically fall under the category of “work”. In theory, this ideology would lead to a more economically fair society, especially with the advent of Web 2.0, the future of work, and the proliferation of AI algorithms.
In a similar vein, cybersecurity firm Kaspersky Lab piloted the idea of a Data Dollar Store in 2017. The Data Dollar Store is a regular store that sells exclusive t-shirts, mugs, poster prints, etc. However, you can buy the items only by giving up some of your personal data. Want the mug? Hand over three screenshots of your WhatsApp, email or SMS chats. How about a t-shirt? That’ll cost the last three pictures on your camera roll. To buy a poster print, a member of staff would get access to your phone and choose any five photos to keep. While the store was clearly a marketing stunt, it compels people to think critically about the data they unwittingly give away for free.
As big data continues to have both quantifiable and unquantifiable implications for our daily lives, it becomes tremendously important to educate ourselves about the value associated with our data and to exercise caution in an increasingly digitalised present and future. It is befitting to end with a quote from Yuval Noah Harari’s book 21 Lessons for the 21st Century that summarises the value of our data in the context of time:
“At present, people are happy to give away their most valuable asset — their personal data — in exchange for free email services and funny cat videos. It’s a bit like African and Native American tribes who unwittingly sold entire countries to European imperialists in exchange for colourful beads and cheap trinkets.”
Just something to think about!
Siddharth Srivastava is a writing analyst at Qrius.