On Leslie Jones, being the target of racist trolls, and Twitter's responsibility to police their own platform

trigger warning: racist hate speech, sexualization of children

Over the past week I've watched the story of Leslie Jones leaving Twitter with both sadness and familiarity. If you haven't heard the story, Leslie Jones is one of the stars of the new Ghostbusters movie (and a hilarious comedian in her own right). She also happens to be black . . . a fact that online white supremacists took issue with. They orchestrated a coordinated attack on Leslie's Twitter page. She was receiving horrible racist messages by the minute, and it proved to be too much for her to take.

This was familiar to me because it happened to me, albeit on a much smaller scale. I'm not in a widely released movie . . . but I angered the Online White Supremacy Mob last year by posting a video where I gave my two white daughters black dolls. Apparently this video, coupled with the fact that I have adopted two black children, made me their target. I was getting daily, and then hourly, comments about how disgusting I was, how my boys were going to kill me, how they would rape my daughters . . . all kinds of vile hate. Initially I was deleting the comments. Then they started coming in too swiftly for me to stay on top of them. And suddenly it moved to Twitter, and got even worse. My Twitter feed was like watching the NYSE ticker scroll by, only it was full of racist comments and disgusting altered photos of my children. The comments were hateful, but not very creative, and centered around a couple of themes: that I'm a race traitor, that my sons will rape my daughters, that my children are monkeys.

I understand Leslie Jones leaving Twitter. Reading this kind of sick racist propaganda is beyond demoralizing, and she got so much more than I did, pointed directly at her as a black woman. When my children were the target, it landed me in bed for a few days. Even though I knew how absurd all of it was, the sheer volume and overt hate were more than I could bear. I took a break, and let friends log in and block these accounts for me.

The thing people need to understand is that what happened to me, and what happened to Leslie . . . these are not isolated incidents. There are hate groups that spend their time finding new targets every week, and they are often black women. Ijeoma Oluo writes about her own experience in a piece for The Guardian:

I had received racist tweets in the past – at least a few each day, but this was different. This was a sea of hate doing its best to engulf me. Finally, one of my followers sent me a link that explained what was happening – somebody had created a thread about me on a neo-Nazi site. I had some tweets about race that had been picked up by national press, and this neo-Nazi group had decided that this was too much legitimacy for a black woman to have, so they fired up their troops with screenshots of my tweets and information about where to find me on social media. Their goal was to harass me off of the internet because my voice was considered a threat.
That was the first campaign of many, and whenever I find myself drowning in racist and sexist vitriol, a quick Google search will usually find a group working hard to create and sustain the abuse that I’m receiving. This is never organic, this is never an accident – it is a purposeful campaign every time. I have reported hundreds of such abusive tweets and Facebook comments, but can count the number of times that Twitter or Facebook have determined that these horribly violent racist and misogynistic messages violate their policy on one hand. I have blocked over 60,000 people on Twitter, and yet still, every day abuse comes. 

Like Ijeoma, I was able to follow the tracks back to an online white supremacy site, which was the main source of the coordinated Twitter attack. Eventually, they moved on. They grew bored and found a new victim to taunt. But I still get a steady trickle of racist hate that makes me want to avoid my Twitter stream. Like this gem from a couple weeks ago:

I can't explain what it's like to open a social media account and have someone say something like this about your own child.

Now, I'm going to be explicit, because I think it's necessary to really pick apart what happened here. A man described my son (age 9) as masturbating and fantasizing about his sister (age 9). On a public forum, an adult described a sexual act by a minor, involving a sexual fantasy about a minor. In what world is this okay? What platform would let something like this stand? Well, let's have a look at Twitter's response to me:

Apparently Twitter did not find this offensive enough to take action.

Twitter has taken some steps. When I was under attack, some reported accounts were banned. But others were not. Twitter recently banned high-profile troll Milo Yiannopoulos, who was fanning the flames of the racist attacks on Leslie Jones. However, I'm guessing the very public outrage over the attacks on her fueled that action. Leslie Jones is a national celebrity. There are less famous women and minorities being attacked online every day, with very little recourse. An article in Fusion describes the inconsistency:

Jones’ experience is merely a high-profile example of the kinds of abuse that persist for many people on Twitter—especially women of color. Over the past year, Twitter has taken many steps to signal to the world that it takes harassment seriously: it has banned revenge porn, issued new anti-harassment rules, established a trust and safety council and de-verified high-profile users (like Yiannopoulos) that it considers abusive. But enforcement of those policies is still woefully inconsistent and which tweets actually violate company policy is opaque at best. One Twitter moderator might take down a tweet that includes a death threat, but another might not.

There are many people who shrug their shoulders and cite free speech as tying Twitter's hands. However, Twitter is an online social platform. There is nothing that requires Twitter to amplify every single voice. In the same way, I have a blog. It's my own platform. I don't have to publish every comment that comes through. If something is blatantly racist or hateful, I delete it. These censored commenters often cry about free speech, but they still have free speech. They are free to build their own blog, pay their own hosting fees, and say whatever they feel like in their own space. If I held a dinner party and a guest began spewing vile hate, I would not sit there and allow it in my own home in the name of free speech. I would take them out of my home. They can say whatever they want, but I don't have to host it in my own space.

In the same way, Twitter has the freedom to edit and police their own platform. There is nothing in the First Amendment that requires Twitter to host every hateful voice that would like to post something on their website. And Twitter's lack of action means that the free speech of targeted minority voices is shrinking. As Ijeoma Oluo says:

They do not have the power to cut off our access to the internet outright, so they will instead make it unbearable for us to be there. They are complicit enablers of the thousands of angry, hateful “trolls” who bombard us with rape threats, racist slurs, images of torture and abuse.

It's time for Twitter to stop enabling hate.
