Fact-check service & new privacy setting: All WhatsApp’s doing to fight fake news before polls

WhatsApp is serious about fighting fake news on its platform; to that end, on Wednesday, April 3, it announced it will add another layer of protection for users to intercept false information ahead of the Lok Sabha elections next week.

The messaging app plans to introduce a fact-checking service and an invite system; the latter will allow users to vet any incoming group invite before joining. Once the additional protections are live, users will have control over who has permission to add them to groups.

This will likely “help to limit abuse” and keep people’s phone numbers private.

How to activate this feature?

To enable the new protection, users must first update to the most recent version of WhatsApp. Then go to Settings, tap Account > Privacy > Groups, and choose one of three options for who can add you to a group: “Nobody”, “My Contacts”, or “Everybody”.

“Nobody” means you’ll have to approve every incoming group invite, WhatsApp says, while “My Contacts” means only users you already know can add you to groups.

The latest update comes just a day after parent company Facebook launched a fact-checking tipline, and nearly a month after the Chief Election Commissioner laid down the model code of conduct and its pre-certified political advertisement rules for the upcoming polls.

To use the fact-checking service, users can submit messages they want verified as true or false to WhatsApp’s Checkpoint Tipline at +91-9643-000-888. The tipline, operated by media-skilling startup PROTO, reviews submissions at a verification centre and builds a database of rumours circulating during the elections.

What brought this on?

WhatsApp, like other social media platforms, has contributed massively to the disruption and manipulation of key elections around the world. Most misinformation spreads through group chats, where, studies have found, people were routinely added against their will; these groups can hold up to 256 members.

The Wall Street Journal recently reported that India’s political parties often use the app to blast messages to groups by caste, income level, and religion.

In an attempt to distinguish between factual reports and fake news, the EC has asked social media platforms Facebook and Twitter, messaging platforms WhatsApp and ShareChat, and search engine Google to appoint officers and take action against those who violate the code of conduct.

Several players, including WhatsApp, have appointed “grievance officers” to take “necessary and prompt actions against the content published on their platforms”, according to the EC.

Most of these tech giants, with increasing stake and expanding businesses in India, have reportedly “committed in writing” to ensuring a special monitoring mechanism for a clean campaign.

“We have seen a number of parties attempt to use WhatsApp in ways that it was not intended; our firm message to them is that using it in that way will result in bans of our service,” Carl Woog, head of communications for WhatsApp, told reporters at a briefing last month.

The EC has also warned social media companies to take down offending content much faster than the standard operating procedure prescribes.

Why is this difficult on WhatsApp?

Doctored videos and reports have caused a spate of mob lynchings and even riots, while experts struggle to find a way to monitor WhatsApp, which uses end-to-end encryption.

Unlike disinformation campaigns on social media platforms, the messages on WhatsApp are private, which makes it difficult for the company to trace from where the fake inflammatory messages originate, or moderate what’s happening and intervene.

WhatsApp’s role in curbing fake news

Largely unprepared for the role social media would come to play in politics, the world spent the last decade watching digital political campaigning push the envelope, allowing Cambridge Analytica to harvest Facebook data and WhatsApp to help skew Brazil’s presidential election toward the far right.

The world is now waking up to the threats of micro-targeting and deepfakes on social media, introducing reforms for greater transparency rather than trying to impose outright bans.

WhatsApp, which has 200 million users in India, recently released a white paper listing several of the app’s measures to reduce the “abuse” of the platform at the hands of politicians and political parties. Last year, an alarming trend of conspiracy-mongering went viral over the app, with deeply concerning results.

Lynchings brought WhatsApp under the global spotlight

Unconfirmed rumours about kidnapping rackets, thieves, and sexual predators formed the gist of these messages, leading to violent vigilantism and over 30 deaths last summer.

In Tamil Nadu, a mentally unstable man was beaten up on the basis of false WhatsApp rumours. A mob killed a transgender person who was begging in Telangana on similar grounds. In June, a 45-year-old woman from a nomadic tribe in Rajasthan was mistaken for a child abductor and brutally lynched in Gujarat. Two young men from Assam were also attacked on a similar pretext.

The spate of lynchings ultimately roused the government (and international media) to take stock of the situation. Law enforcement authorities later discovered that miscreants had pieced the forwarded messages together with photographs from news reports on Syria and Rohingya refugee camps; the authorities then tried to dissuade people from acting on the basis of such messages.

What were the consequences?

Unable to come up with a comprehensive solution to the fake news menace, the government then warned WhatsApp that it would treat the messaging platform as an abettor of fake news propagation, and that legal consequences would follow if adequate checks were not in place.

The Centre issued an ultimatum to WhatsApp to get to the bottom of the problem, while authorities organised digital armies to infiltrate group chats and locate potential troublemakers. A few months later, a BBC report pegged the fake news menace to surging nationalism.

Meanwhile, the tech company tried labelling forwards as “suspicious”, flagging fake news, organising awareness campaigns, and even appointing a grievance officer to tackle the issue that, despite efforts, continued to snowball. 

WhatsApp also limited the number of chats a user can forward a message to at one time to five, in an attempt to slow the viral dissemination of misinformation.

The limit on forwards came in the wake of another recent report, which claimed that the Facebook-owned app had played host to fake news campaigns and conspiracy theories in several other countries, including Brazil, Pakistan, and Mexico.

Why this matters

Allowing users to control who adds them to groups will curb the spread of misinformation only to an extent, and only if users go the extra mile to explore and change the settings themselves.

Of the latest group privacy feature, tech and privacy analysts say that the app should have enabled this level of protection by default, not as an opt-in.

With problems like convincing AI impersonations posing a grave challenge for lawmakers today, it’s impossible to say whether WhatsApp’s eleventh-hour measures will preserve the sanctity of the Indian elections.

What we do know is that messaging apps and social media platforms are likely to remain the dominant venues for political exchange, and that misinformation spread for political gain is the “new normal” tech firms and users have to fight collectively.

Prarthana Mitra is a Staff Writer at Qrius
