More than two in five Premier League players received abusive messages on social media platform Twitter last season, it has been revealed.
New data from the Professional Footballers' Association (PFA) revealed that of 400 Premier League players who had Twitter accounts, 176 – 44% – received abusive and hateful content.
Twenty percent of the abuse targeted just four players, whom the PFA chose not to name in order to avoid triggering further abuse.
The players’ union worked in association with detection firm Signify to monitor levels of abuse on the platform during the 2020-21 campaign, while examining how Twitter itself handled the abuse.
The study analyzed more than six million messages, with a more in-depth analysis of 20,000 of them, and found that 1,781 explicitly abusive messages were sent from 1,674 accounts. The tweets were reported to Twitter for deletion and the accounts for sanction.
Just over 50% of abusive posts were from UK-based accounts, according to the report, and the problem worsened as the season wore on – racist abuse rose 48% in the second half of the campaign compared to the first.
A third of the abusive accounts identified were found to be affiliated with a UK club, whether as a fan, member or season ticket holder. The clubs concerned have been contacted directly by the PFA.
Racist abuse peaked in May, according to the report, after initially plummeting following the social media boycott earlier that month. The report attributed the May peak to an incident in the FA Cup final between Chelsea and Leicester, made worse by the two sides meeting in the Premier League three days later.
The report also highlighted an apparent lack of action to remove the posts or hold those who posted them to account.
Signify found that more than three-quarters of the 359 accounts that sent explicitly racist abuse to players during the season were still on the platform last month.
Only 56% of the racist posts identified throughout the campaign were deleted, with some remaining online for months and even for the duration of the season. Of those, the report determined that 19% were deleted by the account holder rather than by Twitter.
More than four out of five targeted homophobic messages identified during the season were still visible on Twitter.
The PFA presented its findings to the social media platform, and its new chief executive Maheta Molango said: “Now is the time to move from analysis to action. PFA’s work with Signify makes it clear that the technology exists to identify large-scale abuse and the people behind offensive accounts.
“Having access to this data means that there can be real-world consequences for online abuse. If the players' union can do it, so can the tech giants.”
The report also looked at a sample of accounts held by EFL and Women’s Super League (WSL) players, and it said Twitter appeared to apply a hierarchical approach to its moderation.
Twenty-seven percent of abusive messages directed at Premier League players were no longer visible, falling to 17% for abuse targeting EFL players and 12% for WSL players.
Twitter does not believe the report fully or fairly reflects the steps it has taken to proactively enforce its rules.
A spokesperson for the platform said: “It is our top priority to keep everyone who uses Twitter safe and free from abuse. While we’ve recently made strides in giving people more control over their safety, we know there is still work to be done.
“We continue to take action when we identify tweets or accounts that violate Twitter rules. While we welcome people to express themselves freely on our service, we have put in place clear rules to deal with threats of violence, abuse and harassment, and hateful conduct.
“For example, in the hours following the Euro 2020 final, using a combination of machine learning-based automation and human review, we quickly deleted over 1,000 tweets and permanently suspended a number of accounts for violating our rules – the vast majority of which we detected ourselves using proactive technology.”
Twitter points out that it launched the ability to hide replies in November 2019 and recently added new conversation settings that allow people on the platform, especially those who have been abused, to choose who can reply to the conversations they start.
However, Molango believes that too much onus is placed on players to apply filters to block abuse.
“It’s not for the victim to press a button. We’ve made it clear that is not good enough,” said Molango.