
Updating our rules against hateful conduct

This blog was first posted on July 9, 2019, and updated on March 5, 2020, to reflect additional changes made to our rule against hateful conduct.

We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk. As a result of months of conversations and feedback from the public, external experts and our own teams, in July 2019 we expanded our rules against hateful conduct to include language that dehumanizes others on the basis of religion. Today, we are further expanding this rule to include language that dehumanizes on the basis of age, disability or disease.

We will require Tweets that break this rule to be removed from Twitter when they’re reported to us.

If reported, Tweets sent before today that break this rule pertaining to age, disease and/or disability will need to be deleted, but they will not directly result in any account suspensions because they were Tweeted before the rule was in place.

Why start with these groups?

In 2018, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.

Some of the most consistent feedback we received included:

  • Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.
  • Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”
  • Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.

We are continuing to learn as we expand to additional categories. We’ve seen that more in-depth training and an extended testing period, used to determine what we need to further clarify and define, leave our team better prepared to handle cultural nuances and to take action more consistently.

We also realize we don’t have all the answers, which is why we have developed a global working group of outside experts to help us think about how we should address dehumanizing speech around more complex categories like race, ethnicity and national origin. This group will help us understand tricky nuances and important regional and historical context, and ultimately help us answer questions like:

  • How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?
  • How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
  • How can – or should – we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?
  • How do we account for power dynamics that can come into play across different groups?

All of this builds on our ongoing work with the Trust and Safety Council and our commitment to strengthening and focusing those partnerships. We agree that these are difficult areas to get right, so we want to be thoughtful and effective as we expand this rule.

We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules and how we work. As we look to expand the scope of this change, we’ll update you on what we learn and how we address it within our rules. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone @TwitterSafety.

*Examples of research on the link between dehumanizing language and offline harm:
