Instagram is demoting inappropriate content on its app. Here’s why

On Wednesday, April 10, Facebook-owned Instagram announced a series of new rules to manage “problematic content”. The company will now demote content from Instagram’s Explore and hashtag pages that is inappropriate, but does not outright violate the community guidelines.

Facebook says it has already begun demoting content that may not violate its guidelines but is questionable enough that it should not be recommended for engagement and views. For example, sexually suggestive content will be filtered out of recommendations from now on. Even memes that are not directly violent or discriminatory but reflect poor taste could be downrated, TechCrunch reported.

“We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting those types of posts from being recommended on our Explore and hashtag pages,” said the company.

Facebook clarified that a user will still see a borderline post if they follow the account that posted it. However, that same post may not appear to the larger Instagram community on the Explore or hashtag pages.

Instagram’s Community Guidelines

Instagram’s Community Guidelines caution users against posting inappropriate content.

According to the app, “inappropriate content” includes nudity in the form of photos and videos, digital renditions of sexual intercourse, genitalia, buttocks, and female nipples. Nude or partially nude images of children may also be removed for safety reasons, said the app.

However, photos of post-mastectomy scarring, breastfeeding, and artwork depicting nude bodies are permitted.

Instagram didn’t specify when the new rule went into effect or exactly what kind of content is being regulated; however, it said the process of downrating content has already begun.

Content democracy

In 2014, Instagram users were outraged when Rihanna’s topless photos were removed from the platform. Posts featuring the #FreeTheNipple movement and Rupi Kaur’s photo showing menstrual blood have also been removed in the past.

But Facebook Head of News Feed Tessa Lyons argued that just because some content is allowed on the app doesn’t mean it needs to be the most widely promoted post on the platform.

Facebook founder Mark Zuckerberg also penned a letter on the need to discourage borderline content, to prevent people from engaging with “sensationalist and provocative content”.

He said an algorithm to downrate borderline content is a good idea because research suggests that the closer content gets to the policy line, the more people engage with it.

“This is a basic incentive problem that we can address by penalising borderline content, so that it gets less distribution and engagement,” said Zuckerberg.
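The incentive fix Zuckerberg describes can be pictured as an inverted curve: rather than engagement rising as a post approaches the policy line, its distribution is penalised the closer it gets. A minimal sketch of that idea, assuming a hypothetical 0-to-1 "borderline score" and a quadratic penalty (neither of which is specified by Facebook):

```python
def distribution_weight(borderline_score: float) -> float:
    """Downweight a post's reach as its borderline score rises.

    The score scale is an assumption for illustration:
    0.0 = clearly within guidelines, 1.0 = right at the policy line.
    """
    if not 0.0 <= borderline_score <= 1.0:
        raise ValueError("borderline score must be in [0, 1]")
    # Quadratic penalty: reach falls off steeply near the policy line,
    # so the engagement advantage of borderline content is cancelled out.
    return (1.0 - borderline_score) ** 2

# A clearly benign post keeps full reach; a near-the-line post loses most of it.
print(distribution_weight(0.0))
print(distribution_weight(0.9))
```

The exact shape of the penalty is a design choice; any monotonically decreasing function would implement the same "less distribution the closer to the line" incentive.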

However, users have often complained about the app’s lack of content democracy, particularly for posts on body and sex positivity. The issue is compounded by the artificial intelligence systems that evaluate content.

Instagram Product Lead Will Ruben said the app is using machine learning to determine whether or not a post violates community guidelines or is problematic enough to be filtered from Explore and hashtag pages.

“Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations; it then uses those labels to train an algorithm to identify,” said Ruben.
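The pipeline Ruben describes, using moderator labels to train a classifier, can be sketched with a toy naive Bayes model over caption text. Everything here (the dataset, the word-count features, the model itself) is an illustrative assumption, not Instagram's actual system, which operates at vastly larger scale:

```python
import math
from collections import Counter

# Hypothetical moderator labels: 1 = borderline, 0 = fine.
LABELED = [
    ("shocking clickbait you won't believe", 1),
    ("provocative borderline shock content", 1),
    ("sunset over the beach with friends", 0),
    ("homemade pasta recipe for dinner", 0),
]

def train(examples):
    """Count word frequencies per class: a tiny naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return counts, priors

def score_borderline(text, counts, priors):
    """Return the log-odds that a caption is borderline; > 0 suggests demotion."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = math.log(priors[1] / priors[0])
    for word in text.lower().split():
        # Laplace smoothing so unseen words don't zero out the estimate.
        p1 = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

counts, priors = train(LABELED)
print(score_borderline("shocking provocative content", counts, priors) > 0)
print(score_borderline("beach sunset with friends", counts, priors) > 0)
```

The key point the sketch captures is that the model only knows what the moderators labelled: whatever patterns, or biases, exist in those labels are what the algorithm learns, which is exactly the concern raised in the next section.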

Automated biases

But machine learning has limitations of its own.

In 2015, Google was criticised for its ‘racist’ image-recognition software that auto-tagged pictures of black people as “gorillas”. Yahoo-owned Flickr’s auto-tagging photo software also labelled a black man as an “ape”.

The Guardian reported that while machines are becoming more intelligent, they are also absorbing deeply ingrained human biases. The reason: a software’s purely mathematical and statistical approach sometimes cannot comprehend social and cultural contexts.

US Congresswoman Alexandria Ocasio-Cortez said, “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions… They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”

Computer scientists agreed with Ocasio-Cortez’s statements, saying that machines are trained on human data that can often have biases.

Instagram may make a smarter, more nuanced algorithm than ever before. But the underlying concern of human prejudice is still the same.

It’s important to note, Zuckerberg added, that the borderline-content filter will be on by default; users will get the choice of whether to opt into it only after the software is intelligent enough to proactively remove problematic posts.

Rhea Arora is a Staff Writer at Qrius