By Rachel Kaser
Twitter today announced it’d be altering its hateful conduct policies to prohibit “dehumanizing speech.” By doing so, it intends to patch a hole in its rules against hate speech to account for tweets that don’t specifically target anyone, but which are nonetheless demeaning. It’s also asking for users to give feedback on whether the new rules are clear.
Del Harvey, Twitter's VP of Trust and Safety, and Vijaya Gadde, Legal, Policy and Trust & Safety Lead, authored the new rules. According to them, this new policy has been in development for months, in an attempt to address tweets that users find abusive but that do not outright violate existing rules: "Better addressing this gap is part of our work to serve a healthy public conversation."
Twitter is basing the rule on work from Harvard researchers, as well as the Dangerous Speech Project. The latter claims, on its site, to have "been in touch with Facebook, Twitter, Google and other Internet companies as an unpaid advisor, providing ideas for diminishing DS and other harmful content online while protecting freedom of speech."
According to Harvey, Twitter defines dehumanizing speech thus:
Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).
I'm sure we've all seen this kind of language on Twitter before — and on Facebook, Instagram, etc. Twitter isn't the first to institute rules against it, either. Facebook's definition of hate speech includes this description:
A post that calls all people of a certain race “violent animals” or describes people of a certain sexual orientation as “disgusting” can feel very personal and, depending on someone’s experiences, could even feel dangerous. In many countries around the world, those kinds of attacks are known as hate speech. We are opposed to hate speech in all its forms, and don’t allow it on our platform.
It's interesting to see this particular definition arise in the wake of the Alex Jones scandal: Jones was banned for violating the policy against abusive behavior, and is alleged to have tripped Twitter's alarms by saying CNN reporter Oliver Darcy had "the eyes of a rat." Animalistic dehumanization, possibly? Or perhaps this would be considered dehumanizing language under the new policy:
When you give a crazed, crying lowlife a break, and give her a job at the White House, I guess it just didn’t work out. Good work by General Kelly for quickly firing that dog!
— Donald J. Trump (@realDonaldTrump) August 14, 2018
Regardless, those cute Mean Tweets videos Jimmy Kimmel puts out every few months might have to soften up, since usually at least one tweet per video compares a celebrity or a musician to an inanimate object.
Twitter is giving users two weeks to provide feedback on the proposal, via a form in Gadde and Harvey’s blog post. Users have to rate the clarity of the new rules on a scale of 1-5, give examples of speech that might violate this policy but still be part of healthy conversation, and say how it could be improved.
This article was previously published on The Next Web.
Rachel is a writer and former game critic from Central Texas.