MEMORANDUM

October 11, 2019

To: Subcommittee on Communications and Technology and Subcommittee on Consumer Protection and Commerce Members and Staff

Fr: Committee on Energy and Commerce Staff

Re: Hearing on “Fostering a Healthier Internet to Protect Consumers”

On Wednesday, October 16, 2019, at 10 a.m. in the John D. Dingell Room, 2123 of the Rayburn House Office Building, the Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce will hold a joint hearing entitled, “Fostering a Healthier Internet to Protect Consumers.”

I. BACKGROUND ON CONTENT MODERATION

The internet has become a major source of news, information, and advertising. At the same time, problematic content, such as political disinformation, hate speech, extremist recruiting, cyberbullying, and election interference, among other things, is proliferating. According to a 2019 Anti-Defamation League survey, “37 [percent] of Americans have experienced severe online harassment, which includes sexual harassment, stalking, physical threats, and sustained harassment.”1 This figure represented a substantial increase from the 18 percent reported in a comparable Pew Research Center survey conducted approximately a year earlier.2 Law enforcement also has raised concerns regarding the use of the internet to promote extremism. According to the Federal Bureau of Investigation, “[r]adicalization to violence of domestic terrorists is increasingly taking place online, where violent extremists can use social media for the distribution of propaganda, recruitment, target selection, and incitement to violence.”3

Much of the problematic content on the internet comes from individuals and companies that post on websites they do not own or operate. Many websites, including social media platforms such as Facebook and Twitter, rely primarily on third-party generated content to populate their sites and attract users. Social media platforms have proven to be incredibly popular, with over 2.4 billion active Facebook users,4 approximately 139 million daily active Twitter users,5 and nearly two billion monthly logged-in YouTube users.6 Users of such social media platforms both generate and consume a tremendous amount of content.

The large user base of social media platforms is attractive not only to advertisers, but also to other groups interested in swaying consumers. For example, the Senate Select Committee on Intelligence recently released a report detailing how Russian operatives used the major social media platforms as part of an effort to influence the 2016 U.S. election.7

Other websites, such as news or retail sites, generate their own content or sell products and services, but may allow third parties to post comments or other third-party content on their sites. In either case, these websites or “platforms” may create policies for moderating this third-party content. Some platforms choose to be permissive, with little to no moderation.8 Other platforms have detailed community guidelines and systems that allow users to report disallowed content, which may lead those platforms to remove or downgrade such content.9

1 See, e.g., Anti-Defamation League, Online Hate and Harassment: The American Experience (www.adl.org/onlineharassment#survey-report) (accessed Oct. 9, 2019).
2 Id.
3 House Committee on Homeland Security, Hearing on Confronting the Rise of Domestic Terrorism in the Homeland, 116th Cong. (May 8, 2019).
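To illustrate the report-and-review systems described above, the following is a minimal, hypothetical sketch of how a platform might accept user reports of disallowed content and then remove or downgrade that content under its own policy. The data structures, category names, and thresholds are illustrative assumptions only and do not reflect any particular platform’s actual practices.

# Hypothetical sketch (Python) of a user-report moderation flow: users flag
# third-party posts, and an assumed policy decides whether a post is removed,
# downgraded, or left visible. All names and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    reports: list[str] = field(default_factory=list)  # reasons users gave
    status: str = "visible"  # "visible", "downgraded", or "removed"

# Illustrative community-guideline categories the hypothetical platform disallows.
DISALLOWED_REASONS = {"harassment", "hate_speech", "violent_threat"}

def report_post(post: Post, reason: str) -> None:
    """Record a user report against a post."""
    post.reports.append(reason)

def apply_policy(post: Post, removal_threshold: int = 3) -> str:
    """Apply an assumed policy: remove posts with several reports in
    disallowed categories; downgrade posts with at least one."""
    disallowed = [r for r in post.reports if r in DISALLOWED_REASONS]
    if len(disallowed) >= removal_threshold:
        post.status = "removed"
    elif disallowed:
        post.status = "downgraded"  # e.g., ranked lower rather than deleted
    return post.status

# Example usage: three harassment reports push a post past the removal threshold.
if __name__ == "__main__":
    post = Post("p1", "example third-party content")
    for _ in range(3):
        report_post(post, "harassment")
    print(apply_policy(post))  # -> "removed"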
While variation in content moderation practices has existed since the earliest days of the internet, two lawsuits in the 1990s changed the legal landscape for platforms. In Cubby, Inc. v. CompuServe Inc., a court held that CompuServe could not be held liable as a publisher of third-party content posted on its website when “it did not know and had no reason to know” of the existence of the third-party content at issue.10 Following that case, in Stratton Oakmont, Inc. v. Prodigy Services Co., a court found that the website, Prodigy, could be held liable as a publisher because it actively used “technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and ‘bad taste.’”11 Together, these two cases stood for the proposition that if a platform moderated content on its site, it could be held liable for any content on its site, and if it did not moderate content, it would not be held liable.

II. SECTION 230 OF THE COMMUNICATIONS DECENCY ACT

In response to the court’s decision in Prodigy, Congress passed what became Section 230 of the Communications Decency Act (CDA 230) on February 8, 1996.12 CDA 230 enables websites to moderate content online more freely by generally providing immunity to online platforms for content posted by users. That means platforms are mostly not held liable for third-party content posted on their websites, with some relevant exceptions.

The immunity works in two ways. First, CDA 230 prohibits courts from treating “an interactive computer service” (a web-based platform) “as the publisher or speaker” of material posted on the site by third parties.13 Second, CDA 230 prohibits courts from holding websites liable for removing, in good faith, content that the websites found to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”14 CDA 230 does not protect a website from liability for its own content.

CDA 230 does provide some exceptions to this immunity. Websites may still be held liable for third-party content that violates: (1) federal criminal law; (2) intellectual property law; (3) the Electronic Communications Privacy Act; and (4) certain laws prohibiting sex trafficking.

4 Facebook, Newsroom: Company Info (newsroom.fb.com/company-info/) (accessed Oct. 9, 2019).
5 Twitter, Inc., Quarterly Report (Form 10-Q), at 29 (June 30, 2019).
6 YouTube Nears Major Milestone Amid Emphasis on Subscriptions, Fortune (Feb. 4, 2019) (fortune.com/2019/02/04/youtube-google-subscriptions-q4-2018/).
7 Senate Select Committee on Intelligence, Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 2: Russia’s Use of Social Media with Additional Views (Oct. 8, 2019) (www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf).
8 See, e.g., Absolutely Everything You Need to Know to Understand 4chan, the Internet’s Own Bogeyman, Washington Post (Sept. 25, 2014) (www.washingtonpost.com/news/the-intersect/wp/2014/09/25/absolutely-everything-you-need-to-know-to-understand-4chan-the-internets-own-bogeyman/).
9 See, e.g., Facebook Releases Community Standards Enforcement Report, TechCrunch (May 23, 2019) (techcrunch.com/2019/05/23/facebook-releases-community-standards-enforcement-report/).
10 Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).
11 Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995).
The internet has become substantially more complex and sophisticated since the passage of CDA 230, and courts have generally interpreted CDA 230 broadly, finding that the immunity applies in any lawsuit in which the website is treated as the publisher of a third party’s content,15 but that immunity is not absolute. Some courts have found that, for Section 230(c)(1) immunity to attach: (1) the website must be a “provider or user of an interactive computer service,” (2) which the plaintiff is treating as a “publisher or speaker” of (3) content “provided by another information content provider.”16

Section 230(c)(2) grants immunity only for actions taken in “good faith,” which makes this section’s immunity narrower than that of Section 230(c)(1). This subsection not only immunizes websites with respect to third-party content posted on their sites, but also immunizes websites for their own decisions to remove objectionable content. Courts have reinforced that websites are eligible for CDA 230 immunity when screening or blocking content.17

Provisions similar to CDA 230 have been included in the United States-Mexico-Canada Agreement. Reports indicate that a similar provision was also included in the recent U.S.-Japan Trade Agreement, in essence binding the United States and our trading partners to similar principles, if those agreements are adopted.

III. WITNESSES

The following witnesses have been invited to testify:

Steve Huffman
Co-Founder & CEO
Reddit, Inc.

Danielle Keats Citron
Professor of Law
Boston University School of Law

Corynne McSherry
Legal Director
Electronic Frontier Foundation

Hany Farid
Professor
University of California, Berkeley

Katherine Oyama
Global Head of Intellectual Property Policy
Google, Inc.

Gretchen S. Peters
Executive Director
Alliance to Counter Crime Online

12 Section 230: A Key Legal Shield for Facebook, Google Is About to Change, NPR (Mar. 21, 2018) (www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change).
13 47 U.S.C. § 230(c)(1).
14 47 U.S.C. § 230(c)(2).
15 See, e.g., Blumenthal v. Drudge, 992 F. Supp. 44 (D.D.C. 1998); Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997).
16 47 U.S.C. § 230(c)(1); see also Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100-01 (9th Cir. 2009).
17 See, e.g., Langdon v. Google, 474 F. Supp. 2d 622 (D. Del. 2007) (search engines were not required to accept paid ads); e360Insight, LLC v. Comcast Corp., 546 F. Supp. 2d 605 (N.D. Ill. 2008) (Comcast was not liable for blocking email that it thought was spam).