Social media: less regulated, more toxic?
Have you been the victim of hate speech on social media? You are not alone.
Do less regulated social media platforms inevitably lead to an explosion of hateful and toxic speech? This is what we see on the far-right platforms that have appeared in recent years. It remains to be seen whether this is the fate that awaits Twitter.
Several users banned from Facebook or Twitter had in fact fallen back on Gab, TruthSocial and similar platforms, where, according to Gianluca Stringhini of Boston University, “their online activity tends to become more extreme”. The only upside is that their community is then smaller.
However, could the turn that Twitter seems to be taking under the leadership of Elon Musk be a game-changer? In the first week following the acquisition, preliminary analyses noted a surge in racist and antisemitic slurs. Concerns have since escalated with the news that previously banned accounts were being reactivated.
Normally, many stories with an extremist flavor start on the more marginal platforms and remain confined there. It is when they spread to Twitter or Facebook that they gain popularity, because that is where journalists discover them. Until now, Twitter's internal policies limiting hate speech and COVID misinformation have reduced the chances of such messages reaching a wide audience.
Unless Twitter, gradually losing its advertisers, sinks to the rank of these extreme platforms? This is the hypothesis put forward at the beginning of the month, in a report in the magazine Nature, by terrorism expert James Piazza of Pennsylvania State University: “These communities degenerate to the point where they're not really usable anymore: they're overwhelmed with bots, pornography, objectionable content.”
For now, misinformation experts can only speculate, as the change in ownership is too recent. But many researchers are reportedly already preparing protocols to compare the “before” and “after” of Musk's takeover, and to see whether patterns in the dissemination of misinformation or cyberbullying can be identified.
Meanwhile, those who have studied the impact of social media on ethnic violence in recent years are concerned, notes Nature. When you have “several stakeholders who make public incitements to commit crimes”, those crimes end up being committed, says Félix Ndahinda, who has studied online hatred around the armed conflicts tearing apart the Democratic Republic of the Congo.
Most of this speech escapes platform moderators because it is expressed in languages that receive little monitoring. But having even less moderation, or none at all, will only help these actors, and may even attract opportunistic purveyors of disinformation to Twitter — the network may be much less popular than Facebook, but it is still, for now, more popular than the other far-right platforms. “It will encourage these speakers and increase the virulence of their speech.”