Combatting Hate Speech Online: An Argument in Support of CAP’s “Change the Terms” (2018)
Today in the United States, lawmakers, civil rights groups, free speech activists, and commercial technology companies are debating how to handle the growing presence of hate speech and other hateful activities on the internet. As Neera Tanden of the Center for American Progress explains, the Change the Terms coalition defines “…hateful activities [as] those that incite or engage in violence, intimidation, threats, or defamation targeting groups or individuals based on actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation or disability.”[1] Much of the disagreement centers on whether the First Amendment protects the right to engage in speech, even hateful speech, on the modern internet. The trouble is, this is not strictly a First Amendment issue. First, private platforms do not have the same obligations as government entities to honor First Amendment rights. Second, the problem of hate speech and hate organizing online is profoundly complex and likely calls for more than legislative remedies alone can accomplish. Conflicting international regulatory frameworks are an additional reason the coalition chose to focus on extralegal remedies for hate online.
The Communications Decency Act of 1996 sought mainly to regulate pornography on the internet, although it has proved to have much broader applications. Section 230 of the Act holds that internet service providers (ISPs) and other interactive computer services will not be held liable for the speech of their users.[2] Just this July, a Twitter user named Mezey sued Twitter for suspending his account; the court dismissed the suit, holding that Section 230 protects Twitter’s decisions to remove content and suspend accounts.[3] Author and law professor Danielle Keats Citron explains,
“With regard to civil liability, [Section 230] guarantees that ‘no provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected’” (Keats Citron, 171).
Despite the existence of legal means of combating hate, we are not necessarily dependent upon new or existing laws to aid in this fight. We should, however, seek to update current laws to fit today’s technological realities. Government actors, from law enforcement to legislators, should enforce laws against hateful activities online with the same rigor they apply to hate speech in print and in person.
Change the Terms is a collaborative initiative among the Center for American Progress, Color of Change, Free Press, the Lawyers’ Committee for Civil Rights Under Law, the National Hispanic Media Coalition, and the Southern Poverty Law Center. It calls on internet companies to take stronger action in policing hateful activities on their platforms. The coalition drafted its policy recommendations with the aim of balancing the need to disrupt violent and hateful activity against the values of free expression and privacy. Additionally, to avoid threatening net neutrality, the coalition directed “Change the Terms” not at internet service providers but at companies providing services over the internet, such as Facebook, Reddit, Twitter, and PayPal. This approach acknowledges the need for a broad strategy that encompasses the entire “stack” of companies on the internet, from ISPs at the bottom, to user-facing services like Facebook at the top, with financing and advertising services in between. While critics might see some of the proposed steps to curbing online hate activity as attacks on free speech rights, these policy recommendations are focused on action, not just words. It is the capacity of hateful activity online to incite violent, hateful activity offline that determines the suggested corporate response.
Change the Terms outlines a strategy with seven main corporate policy recommendations centered mostly on optimizing online platforms’ self-regulation. First, internet companies should prohibit hateful activities on their platforms (employing the definition stated above). Second, though it may require the investment of time and resources, including money and staff, internet companies must expand their enforcement mechanisms and define them in far greater detail. The third and fourth remedies propose accessible and transparent removal and appeals processes, which would, in part, serve as an avenue for public accountability. The fifth focuses on improving the training of content moderators working at internet companies, to ensure they are aware of and able to think critically about the social, political, and cultural contexts in which they operate. For example, content moderators working outside the United States may have difficulty detecting some of the nuances of American racism and white nationalism. The sixth recommendation calls for restructuring corporate governance and accountability. Change the Terms suggests appointing someone from senior management, along with a board committee, to oversee removals and appeals related to hateful content. The release of annual, publicly available reports would allow online companies, civil rights watchdog groups, and other members of the public to follow ongoing efforts and use the data to improve moving forward. This process might include creating a mechanism for civil rights watchdog organizations to flag for review any material potentially in violation of adopted corporate anti-hate measures. The last recommendation urges internet companies to prohibit state actors, bots, and troll campaigns that use racism and other hateful activities to further their missions. This concept gained traction after Russian bot and troll campaigns interfered in the 2016 U.S. election and after fake news and hate propaganda played a verified role in laying the groundwork for the genocide in Myanmar.[4]
While I agree with the premise of Change the Terms’ focus on extralegal strategies for combating hate online, I also see some potential areas for legal action. We ought to make sure that civil rights laws are adapted carefully but quickly to meet the complex needs of our networked world. Translating existing hate laws into the online context necessitates the involvement of government, human rights groups, media rights groups, and open-internet organizations, as well as self-regulating for-profit internet service providers and other internet companies. I see power and truth in Keats Citron’s observation:
“Civil rights law has the same potential for civil rights violations in cyberspace. Law would signal that online abuse produces corrosive harm to individuals, groups, and society, just as law helped people appreciate the social harms of sexual harassment in the workplace. Civil rights law would help show victims that they do not have to sacrifice their emotional and physical well-being to work, speak, and interact online, much as courts in the 1970s made clear that women did not need to choose between working in sexually harassing environments and earning a living” (Keats Citron, 128).
In-person gender and racial discrimination in the workplace was eventually recognized as a violation of citizens’ civil rights. One prong of Keats Citron’s argument holds that the presence online of hateful and false information about an individual threatens equal opportunity in employment. A victim of dehumanizing and other hate-driven online activity may suffer lasting harm because of the enduring and far-reaching nature of the internet today. While there are apparent contextual differences, the current atmosphere of online hate activity shares parallels with past civil rights struggles. We may look to successes in the battle for civil rights to direct our future actions in the fight against hate in cyberspace.
Aside from the moral and civil rights obligations faced by internet companies, there is a positive business case for eliminating hate activity on their platforms and services and for strengthening the safety and freedom of all individuals, especially members of marginalized communities, to participate in global discourse online. Platforms and other internet companies might be able to stem growing user attrition if they can demonstrate a competitive advantage in cultivating and maintaining safe, equitable, and civil online environments.[5]

[1] Center for American Progress (seeprog). “Change the Terms: Reducing Hate Online.” Filmed October 2018. YouTube video, 1:18:05. Posted October 2018. https://www.youtube.com/watch?time_continue=503&v=fnFJc15bQh4.
[2] Banks, James. “Regulating Hate Speech Online.” International Review of Law, Computers & Technology 24, no. 3 (November 2010): 237.
[3] Mezey v. Twitter, Inc., 2018 WL 5306769 (S.D. Fla. July 19, 2018).
[4] Mozur, Paul. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military.” The New York Times, October 15, 2018. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
[5] Winkler, Elizabeth. “Twitter Loses Users, But Will Advertisers Follow?” The Wall Street Journal. https://www.wsj.com/articles/twitter-loses-users-but-will-advertisers-follow-1532705997.
[Lily Zimmerman / F’18 – Law & Policy of Digital Media]
