By Elizabeth M. Vasily
“I’m a ‘n*****,’ I look ‘n*****y,’ I haven’t earned my ‘n***** card,’ I’m a ‘pseudon*****,’ ‘f****** ni***ster,’ or ‘scab n*****.’ If you winced when you read that list of slurs, imagine having them lobbed at you nearly every day for two years.”[i]
Imani Gandy, a Black American lawyer and legal analyst, has been harassed for the past two years by an anonymous Twitter user, who on average creates ten different Twitter accounts per day in order to assail her with offensive posts. In her August 2014 blog post, “#TwitterFail: Twitter’s Refusal to Handle Online Stalkers, Abusers, and Haters,” Ms. Gandy documents her troubling experience with social media as she was attempting to advance her career by networking on the site.[ii]
Social media platforms, such as Twitter, have become cesspools of racial hate crimes. Third-party social media sites offer the perfect recipe for racial harassment: easy and free access, unlimited proliferation of user accounts, legal immunity, and anonymity.
Twitter may be the only “workplace” in America that is exempt from Title VII. Today, Twitter is an essential tool for marketing, sales, and promotion. Start-up businesses, journalists, and personalities benefit from the site’s networking capabilities on a daily basis. However, when Black Americans, such as Ms. Gandy, use the social media platform to take advantage of its benefits, they often endure constant racial harassment in the process. In fact, the attacks are correlated with success and career growth: the more popular the account, the more hits it receives and, accordingly, the more racist spammers it attracts. Moreover, at present, Title VII can do nothing to stop this: the Internet is not a workplace or institution in which racial harassment is prohibited.[iii]
So, is Twitter doing anything to stop this? Are there any legal remedies?
Catherine Buni and Soraya Chemaly, in their October 2014 article in The Atlantic, “The Unsafety Net: How Social Media Turned Against Women,” indicate that social media sites, such as Twitter, are “doing little to stop the problem.”[iv] They quote Twitter co-founder Biz Stone’s post “Tweets Must Flow,” which stated “we strive not to remove Tweets on the basis of their content.”[v] While Twitter tries to remove messages when possible, the sheer magnitude of users and posts makes it very difficult to do so.[vi]
Law professor Danielle Keats Citron offers some legal remedies grounded in tort, civil rights, and criminal law for female victims of cyber abuse in her September 2014 book Hate Crimes in Cyberspace.[vii] For example, she points to state and federal criminal laws against threatening another person.[viii] But what exactly is a “threat” in the world of online social media? The Supreme Court will hopefully answer that question in December 2014 in Elonis v. United States.
Twitter’s terms and conditions prohibit “true threats” against users, and the Supreme Court has long held that “true threats” are not protected speech under the First Amendment. In Elonis, the justices will consider whether a statement qualifies as a threat based on the defendant’s “subjective intent to threaten” or based on whether “an objective person could consider… [the] posts to be threatening.”[ix] The lower courts have been split on this issue since the 2003 case Virginia v. Black, in which the Supreme Court “invalidated Virginia’s broad prohibition on cross-burning because it said the law lacked a requirement of proof that the Ku Klux Klan intended to intimidate someone by burning a cross.”[x] If the Court adopts the objective standard, the prospect of prosecution may at least deter some threatening comments from trolls. That is, if they can be found.
A major problem with the current legal options for redress is the anonymity of the users and the unlimited proliferation of new anonymous accounts from which the offensive messages are generated. The perpetrators themselves are impossible to locate without a warrant or a court order to trace their IP addresses through an Internet service provider, which is very difficult for the average victim to obtain.[xi] Citron analogizes the shield of an anonymous online account to the hood of the Ku Klux Klan member: “[A]nonymity has contributed to the rise of bigoted mobs… the hoods of the Ku Klux Klan were key to the formation of mobs responsible for the death of African Americans.”[xii]
As a result, Twitter should do everything possible to link user accounts to emails, addresses, and other contact information that the site’s internal managers can use to verify a true identity.[xiii] The unlimited, anonymous free flow of ideas should be encouraged on other websites with less of a networking presence in the digital marketing world.
[i] Imani Gandy, #TwitterFail: Twitter’s Refusal to Handle Online Stalkers, Abusers, and Haters, RH REALITY CHECK (Aug. 12, 2014, 5:08 PM), http://rhrealitycheck.org/article/2014/08/12/twitterfail-twitters-refusal-handle-online-stalkers-abusers-haters/.
[iii] DANIELLE KEATS CITRON, HATE CRIMES IN CYBERSPACE 135 (2014).
[iv] Catherine Buni & Soraya Chemaly, The Unsafety Net: How Social Media Turned Against Women, THE ATLANTIC (Oct. 9, 2014, 12:08 PM), http://www.theatlantic.com/technology/archive/2014/10/the-unsafety-net-how-social-media-turned-against-women/381261/.
[vi] Citron, supra note 3, at 233.
[vii] Citron, supra note 3, at 120.
[viii] Citron, supra note 3, at 123.
[ix] John Elwood, Relist Watch, SCOTUSBLOG (Jun. 20, 2014, 1:14 PM), http://www.scotusblog.com/2014/06/relist-watch-40/.
[xi] Citron, supra note 3, at 224.
[xii] Id.
[xiii] Cf. id. at 238. Citron lauds Facebook’s “real-name policy,” without which she states “bad actors are emboldened because there is little chance of serious consequence.” I believe that Facebook’s policy, and even more stringent standards, should be implemented for Twitter.