by Elton Gomes
As the first step in its battle against fake news, Facebook-owned WhatsApp has launched its “forwards label” feature in India. The new feature will help users identify whether a text or video has been originally composed or is simply being forwarded from another user. The move comes after the government urged the company to use technology to stop the spread of fake news and misinformation through the platform. WhatsApp also took to print media to educate people about the harms of fake news and how they can be prevented.
What is the “forwards” label?
The messaging platform said in a post, “This extra context will help make one-on-one and group chats easier to follow. It also helps you determine if your friend or relative wrote the message they sent or if it originally came from someone else,” the Indian Express reported. The company further said, “WhatsApp cares deeply about your safety. We encourage you to think before sharing messages that were forwarded. As a reminder, you can report spam or block a contact in one tap and can always reach out to WhatsApp directly for help,” the Hindu reported.
Why was it introduced?
Rumours that spread via the messaging app have led to lynchings across Assam, Maharashtra, Karnataka, Tripura, and West Bengal. In response, the Indian government said that the company cannot evade accountability and should carry out remedial measures. Union IT Minister Ravi Shankar Prasad demanded greater accountability from WhatsApp and said that the government would not tolerate any misuse of the platform. WhatsApp replied that fake news can be eliminated only by the government, civil society, and technology companies “working together”. Emphasising this message in its advocacy campaign, WhatsApp said, “To fight fake news, we all need to work together – technology companies, the government and community groups. If you see something that’s not true, make people aware and help stop the spread.”
WhatsApp has announced other initiatives to curb fake news
A few days ago, it was reported that WhatsApp was testing a new feature called Suspicious Link Detection, which will warn users if a link they have received appears to have spurious or dubious origins. At the backend, WhatsApp would assess all links received as messages and flag the suspicious ones.
Taking a more research-based approach, the company also announced grants worth $50,000 to study the spread of fake news and find ways to detect unwanted content on the platform. According to a Facebook Research blog post, “WhatsApp is commissioning a competitive set of awards to researchers interested in exploring issues that are related to misinformation on WhatsApp. We welcome proposals from any social science or related discipline that foster insights into the impact of technology on contemporary society in this problem space,” NDTV reported. The company reached out to researchers with experience in studying online interaction and information technologies, and sought to help them expand their research in those areas.
Elton Gomes is a staff writer at Qrius.