Social Media Has A Hate-Speech Problem — Why Won't Companies Do Something About It?

By Lincoln Anthony Blades


The March 15 terrorist attack on both the Al Noor mosque and the Linwood Islamic Centre in Christchurch, New Zealand, was the deadliest shooting in the nation's history, an act of white supremacist violence engineered to inflict maximum pain on the Muslim community, and go viral in the process.


We know this in part because it was later revealed that, prior to the attack, the shooter allegedly uploaded a 73-page manifesto, which he linked to on Twitter, and which was heavily coded with references to obscure memes, low-quality trolling, and niche racist irony, a style known as "shitposting." The shooter, a white man wearing a helmet-mounted GoPro, also livestreamed the murder of 50 worshippers in a video that was posted to Facebook, Twitter, and YouTube. The recording was later taken down on all platforms, and soon after, both YouTube and Facebook promised to block and remove new uploads of the video; the latter in particular specifically condemned the livestream of the terrorist attack.


Let's be clear that white supremacy is violence. While various groups claim different names, there is no version of white supremacy that comes without the violent erasure of minorities, and the legacy of white supremacy is the proof. But the intention to destroy those considered non-white isn't just a matter of historical reflection: Today, there is a steady rise in white supremacist violence, a rise in hate crimes against marginalized groups, and the reality that 100 percent of extremist mass murders committed last year came at the hands of far-right zealots and white supremacists.


For those reasons, it’s particularly unsettling that on Facebook, at least 1.5 million copies of the video had to be detected and removed in the immediate aftermath of the attack. Yet it’s not altogether surprising; multiple reports have detailed the ways in which white supremacist ideologies have permeated the most public-facing corners of the internet for years, with little interference from the platforms themselves.


On March 29, mere weeks after the mosque attacks, YouTube chief product officer Neal Mohan told the New York Times the site “was started as, and remains, an open platform for content and voices and opinions and ideas… across the whole spectrum.” In July 2018, Facebook founder Mark Zuckerberg defended the company’s choice not to curb Holocaust deniers and the like under the claim that the company shouldn’t silence people simply because “they got a number of things wrong.” For years, people have begged Twitter to do something about the abuse and harassment they face on the service, to little avail. Instagram is becoming a breeding ground for far-right conspiracy theories. Each time, the platforms allege that, because they did not create the content themselves, they are in the clear. Each time, they point to the idea of “free speech” to absolve themselves of culpability.


The idea of “free speech” is born out of the First Amendment to the U.S. Constitution, which, with certain caveats, prevents the government from placing restrictions on what people say, how they worship, and how they choose to express themselves. It also protects how people choose to protest the government, and protects the press. But over the years, its 45-word promise has been twisted to also encompass hate speech; as Jeffrey Rosen, the president of the National Constitution Center, explained to USA Today, “The American free speech tradition holds unequivocally that hate speech is protected, unless it is intended to and likely to incite imminent violence.”


Crucially, though, the First Amendment does not directly apply to privately owned social media sites. As the Columbia Journalism Review points out, “As nongovernmental entities, the platforms are usually unconstrained by constitutional limits, including those imposed by the First Amendment.” But that in itself is a slippery slope toward legitimizing hateful, bigoted belief systems, with increasingly dire real-world consequences. We must stop framing free speech as a battleground for fortitude and political correctness, when that “civility” too often fosters the existence and expansion of white supremacy-inspired terrorism.


In a 2018 report titled "Alternative Influence: Broadcasting the Reactionary Right on YouTube", Becca Lewis, a PhD student who researches online political subcultures, identified how some white nationalists are leveraging YouTube to promote their xenophobic views for monetization and engagement, all while radicalizing viewers.


“YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” she explained. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online – and in many cases, to generate advertising revenue – as long as it does not explicitly include slurs." Not only does white supremacist content openly exist on social media platforms, but their algorithms have also been gamed to push radicalizing content on viewers who do not even express any interest in consuming that kind of material. Because the people who make hateful content are typically canny about avoiding uncensored slurs, their content is free to proliferate.


Digital giants like YouTube and Facebook clearly have the tools to do demonstrable good (specifically, to block hateful content, conspiracies, and propaganda), yet, in the interest of preserving their appearance of corporate nonpartisanship, they only ever seem prepared to take a selective stance against hate. While it's commendable that Facebook and YouTube would combat the uploading and sharing of footage of a violent white supremacist terror attack, many experts responded by wondering why these massive and famous social networks aren’t equally motivated and meticulous about removing white supremacist content more broadly. The fact that Facebook is currently banning outright white nationalism, itself a rebranding of white supremacy, is a start, but as Motherboard reports, “Implicit and coded white nationalism and white separatism will not be banned immediately, in part because the company said it’s harder to detect and remove.”


In 2017, Germany passed the Network Enforcement Act to ensure that social media sites with at least two million German users uphold national laws against hate speech, which were implemented as a post-Holocaust measure. Because Germany does not want abusive content sitting on social media sites to be consumed by its citizens, the law gives platforms 24 hours to remove abusive hate speech before they are fined; Facebook has since come under fire in a German court for not doing enough to prevent hate speech on its platform, and has deleted hundreds of posts in response in order to remain in accordance with the law. The law and its implementation aren’t without their flaws, though, which kicked off a heated debate about what, exactly, constitutes free speech, and laid bare the reality that these companies should care more, and do more, regardless of legal precedent.


Instead, a noticeable disparity has emerged between the content these platforms do remove for being harmful and the content they permit. According to a report from the Program on Extremism at George Washington University, white nationalist movements have seen a 600 percent growth in their Twitter followers since 2012, as well as a huge increase in how many tweets they publish daily, with seemingly no effort from the platform to curtail that growth. To be clear, there have been several moments of what felt like progress: in 2016, Twitter permanently banned Milo Yiannopoulos for targeting Black actress Leslie Jones with racist and sexist abuse; they later followed suit with Alex Jones in 2018, who violated their abusive behavior policy in countless ways, including live-streaming a verbal attack on a reporter outside of a congressional hearing. But bans like these often occur years after pundits solidify their once-fringe fan bases into something more mainstream, which has allowed their inflammatory and virulent discrimination to spread at previously unfathomable speeds.


Frequently, even the most egregious acts of direct hate speech are allowed to remain on the site for extended periods of time, or until mounting public pressure results in change. These actions against hate speech, or lack thereof, correlate to particular targets, too: Per a 2016 study by Amnesty International, Black women were the most likely to face abuse on Twitter, yet many Black women point out that it takes far too long for Twitter to both recognize their reports as actual hate speech and then actually take the next step of removing the hateful content and accounts. Political commentator Rochelle Ritchie made countless reports against the Twitter account of would-be pipe-bomber Cesar Sayoc Jr. for threatening her; at the time, she only received a message from Twitter that his actions did not violate their terms of service. Compare that to the swiftness with which the platform deactivated the account of the Australian teen responsible for egging an Australian politician who blamed the mosque attacks on immigrants.


While better monitoring and action against hate speech on social media won't solve white supremacy, it's a start toward delegitimizing it in its current, digital form. As Clark Hogan-Taylor, manager at Moonshot CVE, a company specializing in countering online extremism, told MTV News, “There is no question that social media companies need to continue to improve in their efforts to remove extremist content and accounts. But it's important to point out that someone looking for a piece of content does not necessarily have their interest in that content reduced by its removal.”


Bigotry has become an accepted reality of our mainstream discourse. Pundits come on-air to facetiously debate racism like they're arguing whether LeBron is better than Jordan, and hosts are allowed to revel in their xenophobia. Racism hasn't just infected, but serves as a feature of, every branch of government from Capitol Hill to the Oval Office. College campuses have been increasingly flooded with white supremacist propaganda. Just as frustrating as the mainstreaming of white supremacy is how those who spew white supremacist beliefs, many of whom either launched or expanded their racist platforms on social media, have been propped up as victims of intolerance and political correctness, rather than being held accountable for stoking the rise of violence against certain groups of people instead of quelling it.


The elimination of hatred from the mainstream is neither intolerance, a threat to free speech, nor oppressive political correctness; rather, it is a necessary rebuke of our society’s violent white supremacy. Until we begin treating white supremacy as what it is, and what it has always been (the advocacy of a racial caste system, cultural erasure, and genocide), we will continue to see this uptick in white supremacist terrorism. So if we’re truly interested in stemming this discriminatory brutality, here's the first step social media behemoths can take: start treating white supremacy, online and off, like the hate speech it is.








