Twitter Shares Analysis Of Racist Abuse Against Football Players After Euro 2020

Twitter has reviewed the racist abuse targeted at England players on its platform after the Euro 2020 final in July.

Marcus Rashford, Jadon Sancho, and Bukayo Saka – three England players who featured in the UEFA Euro 2020 final at Wembley Stadium – were subjected to a torrent of racist abuse and harassment on Twitter and Instagram in July.

Many people sought to drown out the abuse on the players’ Instagram pages by commenting on their posts with messages of love and support. At the time, Instagram stated, “We quickly removed comments and accounts directing abuse at England’s footballers last night, and we’ll proceed to take action against those that break our rules.”

One month after the England v Italy final, Twitter has published the conclusions of its analysis of the abuse directed at the players on the platform after they missed penalties in the 3-2 shootout loss.

In a series of tweets, Twitter UK said that it had identified that “the UK was – by far – the largest country of origin for the abusive Tweets we removed.” This finding debunks claims made by Conservative commentators at the time that the abuse was coming from abroad. It also contradicts Conservative MP Michael Fabricant, who suggested that the racist abuse might not be “home grown” and could have come from foreign powers “trying to destabilise our society.”

“In the 24 hours following the Euros Final, our automated tools detected and removed 1,622 abusive Tweets,” Twitter UK’s findings noted. “Only 2 percent of the Tweets we removed generated 1,000 or more Impressions.”

Additionally, the report provided evidence that ID verification – a solution proposed in the wake of the abuse – would not effectively combat online abuse. More than 696,000 people signed a petition calling on the government to make ID a legal requirement for social media accounts. Critics pointed out flaws in this idea, warning that it could marginalize vulnerable groups and silence whistleblowers. Twitter says its data “suggests that ID verification was unlikely to prevent the abuse, since 99 percent of the permanently suspended accounts were identifiable.”

Twitter added that it will soon be trialing a feature that temporarily autoblocks accounts using “harmful language.” The company’s UK account concluded by remarking, “There is no place for racist abuse on Twitter.”

“We’re committed to making Twitter a safe place for people to communicate. We’re determined to do everything in our power, along with our partners, to stop these abhorrent views and behaviors,” read the final tweet in the thread.