Before the FEDERAL COMMUNICATIONS COMMISSION Washington, DC 20554 In the Matter of Section 230 of the Communications Act of 1934 ) ) ) ) File No. RM-_____ To: The Commission PETITION FOR RULEMAKING OF THE NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION National Telecommunications and Information Administration U.S. Department of Commerce 1401 Constitution Avenue, NW Washington, DC 20230 (202) 482-1816 July 27, 2020 TABLE OF CONTENTS I. STATEMENT OF INTEREST............................................................................................ 3 II. SUMMARY OF ARGUMENT ............................................................................................ 3 III. THE COMMISSION SHOULD ACT TO PROTECT FREE SPEECH ONLINE .... 6 IV. RELEVANT FACTS AND DATA: TECHNOLOGICAL AND MARKET ............... 9 V. THE AUTHORITY AND NEED FOR ISSUING REGULATIONS FOR SECTION 230 ............................................................................................................................................... 15 A. The Commission’s Power to Interpret Section 230 of the Communications Decency Act ............................................................................................................................................... 15 B. Background to Section 230................................................................................................. 18 C. Section 230(c)’s Structure .................................................................................................. 22 D. Expansive Court Rulings Tied to Early Platforms and Outdated Technology............. 24 E. Need for FCC Regulations: Ambiguities in Section 230.................................................. 27 1. The Interaction Between Subparagraphs (c)(1) and (c)(2) ................................................... 28 2. The Meaning of Section 230(c)(2) ........................................................................................ 31 3. Section 230(c)(1) and 230(f)(3) ............................................................................................ 40 4. “Treated as a Publisher or Speaker” ..................................................................................... 42 VI. TITLE I AND SECTIONS 163 AND 257 OF THE ACT PERMIT THE FCC TO IMPOSE DISCLOSURE REQUIREMENTS ON INFORMATION SERVICES ............... 47 A. Social media are information services............................................................................... 47 B. Several statutory sections empower the FCC to mandate disclosure ............................ 49 VII. CONCLUSION ................................................................................................................... 52 APPENDIX A: PROPOSED RULES........................................................................................ 53 Before the FEDERAL COMMUNICATIONS COMMISSION Washington, D.C. 20554 In the Matter of Section 230 of the Communications Act of 1934 ) ) ) ) File No. RM-_____ To: The Commission PETITION FOR RULEMAKING OF THE NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION Pursuant to section 1.401 of the Code of Federal Regulations, 1 in accordance with Executive Order 13925 (E.O. 13925), 2 and through the National Telecommunications and Information Administration (NTIA), the Secretary of Commerce (Secretary) respectfully requests that the Federal Communications Commission (FCC or Commission) initiate a rulemaking to clarify the provisions of section 230 of the Communications Act of 1934, as amended. 
3 NTIA, as the President’s principal adviser on domestic and international telecommunications and information policy, is charged with developing and advocating policies concerning the regulation of the telecommunications industry and “ensur[ing] that the views of the executive branch on telecommunications matters are effectively presented to the Commission . . . .” 4 Specifically, per E.O. 13925, NTIA requests that the Commission propose rules to clarify: 1 47 CFR § 1.401(a). 2 Exec. Order No. 13925: Preventing Online Censorship, 85 Fed. Reg. 34,079 (June 2, 2020) (E.O. 13925). 3 47 U.S.C. § 230. 4 47 U.S.C. § 902(b)(2)(J); see also 47 U.S.C. §§ 901(c)(3), 902(b)(2)(I) (setting forth related duties). (i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1); 5 (ii) the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be “taken in good faith” if they are (A) deceptive, pretextual, or inconsistent with a provider’s terms of service; or (B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; 6 and (iii) any other proposed regulation that NTIA concludes may be appropriate to advance the policy described in subsection (a) of E.O. 13925, to impose disclosure requirements similar to those imposed on other internet companies, such as major broadband service providers, to promote free and open debate on the internet. 7 5 See infra sections V.E.1, V.E.3, and V.E.4. 6 See infra section V.E.2. 7 See infra section VI. I. Statement of Interest Since its inception in 1978, NTIA has consistently supported pro-competitive, pro-consumer telecommunications and internet policies. NTIA files this petition pursuant to E.O. 13925 to ensure that section 230 of the Communications Act of 1934, as amended, continues to further these goals. The President, through E.O. 13925, has directed the Secretary to file this petition for rulemaking through NTIA. 8 II. Summary of Argument Freedom of expression defends all our other freedoms. Only in a society that protects free expression can citizens criticize their leaders without fear, check their excesses, and expose their abuses. As Ben Franklin stated, “[w]hoever would overthrow the Liberty of a Nation, must begin by subduing the Freeness of Speech.” 9 However, social media and its growing dominance present troubling questions about how to preserve First Amendment ideals and promote diversity of voices in modern communications technology. Social media’s power stems in part from the legal immunities granted by the Communications Decency Act of 1996. 10 Congress passed the statute at the beginning of the internet age with the goal of creating a safe internet for children. It did so by protecting children from pornography and providing incentives for platforms to 8 E.O. 13925, Section 2(b). Benjamin Franklin, Silence Dogood No. 8, The New-England Courant, July 9, 1722. 10 Communications Decency Act of 1996 (CDA), Pub. L. No. 104-104, 110 Stat. 133, Title V— Obscenity and Violence, § 509 “Online family empowerment,” codified at 47 U.S.C.
230, “Protection for private blocking and screening of offensive material.” The CDA was incorporated as Title V to the Telecommunications Act of 1996, which in turn, was incorporated in the Communications Act of 1934. While these laws are all now part of the same statute, they do have separate histories and will be referred to individually when necessary. 9 3 remove harmful content. While the Supreme Court struck down the provisions limiting pornography, section 230 remained. 11 Section 230 is the legislative response to a New York state case, Stratton Oakmont, Inc. v. Prodigy Servs. Co. 12 In this case, the court extended tort liability to internet bulletin boards and ruled that defendant Prodigy Services Company would be liable for the entire content of their platform if they engaged in editing and moderation to remove distasteful content. 13 Congress intended section 230 to offer platforms immunity from liability under certain circumstances, namely to encourage platforms to moderate specific types of material, mostly that are sexual or inappropriate to minors. It is vital to remember, however, that Congress in section 230 also had the express purpose of ensuring that the “Internet and other [internet platforms] offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” 14 Times have changed, and the liability rules appropriate in 1996 may no longer further Congress’s purpose that section 230 further a “true diversity of political discourse.” A handful of large social media platforms delivering varied types of content over high-speed internet have replaced the sprawling world of dial-up Internet Service Providers (ISPs) and countless bulletin boards hosting static postings. Further, with artificial intelligence and automated methods of textual analysis to flag harmful content now available, unlike at the time of Stratton Oakmont, Inc., platforms no longer need to manually review each individual post but can review, at much 11 Reno v. American Civil Liberties Union, 521 U.S. 844 (1997). 1995 WL 323710 (N.Y. Sup. Ct., May 24, 1995) (unpublished). See also, Force v. Facebook, Inc., 934 F.3d 53, 63-64 (2d Cir. 2019) (“To overrule Stratton . . . .”). 13 Stratton Oakmont, 1995 WL 323710, at *3. 14 47 U.S.C. § 230(a)(3). 12 4 lower cost, millions of posts. 15 Thus, the fundamental assumptions driving early section 230 interpretation are antiquated and lack force, thus necessitating a recalibration of section 230 protections to accommodate modern platforms and technologies. The FCC should use its authorities to clarify ambiguities in section 230 so as to make its interpretation appropriate to the current internet marketplace and provide clearer guidance to courts, platforms, and users. NTIA urges the FCC to promulgate rules addressing the following points: 1. Clarify the relationship between subsections (c)(1) and (c)(2), lest they be read and applied in a manner that renders (c)(2) superfluous as some courts appear to be doing. 2. Specify that Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service. 3. 
Provide clearer guidance to courts, platforms, and users, on what content falls within (c)(2) immunity, particularly section 230(c)(2)’s “otherwise objectionable” language and its requirement that all removals be done in “good faith.” 4. Specify that “responsible, in whole or in part, for the creation or development of information” in the definition of “information content provider,” 47 U.S.C. § 230(f)(3), includes editorial decisions that modify or alter content, including but not limited to substantively contributing to, commenting upon, editorializing about, or presenting with a discernible viewpoint content provided by another information content provider. 15 Adrian Shahbaz & Allie Funk, “Freedom on the Net 2019 Key Finding: Governments harness big data for social media surveillance,” Freedom House, Social Media Surveillance, https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/socialmedia-surveillance (“Social media surveillance refers to the collection and processing of personal data pulled from digital communication platforms, often through automated technology that allows for real-time aggregation, organization, and analysis of large amounts of metadata and content . . . . Advances in artificial intelligence (AI) have opened up new possibilities for automated mass surveillance.”). 5 5. Mandate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers. III. The Commission Should Act to Protect Free Speech Online New regulations guiding the interpretation of section 230 are necessary to facilitate the provisions’ interpretation in a way that best captures one of the nation’s most important Constitutional freedoms. “Free speech is the bedrock of American democracy . . . . The freedom to express and debate ideas is the foundation for all of our rights as a free people.” 16 Our democracy has long recognized that control of public discourse in the hands of too few stifles freedom of expression and risks undermining our political institutions. For centuries, Americans have taken action to maintain the free flow of information and ideas to ensure the fullest and most robust marketplace of ideas—from the Postal Service Act of 1792, one of Congress’s first acts which established preferential rates for newspapers, 17 to nondiscrimination requirements for telegraphs and telephones, 18 to antitrust actions to ensure the free flow of news stories, 19 and to efforts to limit undue dominance in broadcast and cable media to guarantee the flow of information to television viewers. 20 Yet today, free speech faces new threats. Many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other 16 E.O. 13925, Section 1. Richard B. Kielbowicz, News in the Mail: The Press, Post Office and Public Information, 1700-1860s, at 33-34 (1989). 18 Thomas B. Nachbar, The Public Network, 17 CommLaw Conspectus 67, 77 (2008). (“Nondiscriminatory access is . . . the order of the day for . . . telecommunications, and even cable television.”). 19 Associated Press v. United States, 326 U.S. 1 (1945). 20 Turner Broad. Sys, Inc. v. F.C.C., 512 U.S. 622 (1994); F.C.C. v. National Citizens Comm. for Broad., 436 U.S. 775 (1978); Nat’l Broad. Co. v. United States, 319 U.S. 190 (1943); Time Warner Ent. Co. L.P. v. F.C.C., 240 F.3d 1126 (D.C. Cir. 2001). 17 6 online platforms. 
These platforms function, as the Supreme Court recognized, as a 21st century equivalent of the public square. 21 Provision and control of the public square is a public trust. Because it entails selecting which speech gets heard and by whom, social media can assimilate a collective conversation into a corporate voice with a corporate point of view. As the E.O. explains, “[w]hen large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.” 22 The Commission itself has previously recognized the importance of enabling “the widest possible dissemination of information from diverse and antagonistic sources” and “assuring that the public has access to a multiplicity of information sources” as internet regulations’ essential goal. 23 Unfortunately, large online platforms appear to engage in selective censorship that is harming our national discourse. The E.O. notes that “[t]ens of thousands of Americans have reported online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service” and is not unlawful. The platforms “mak[e] unannounced and unexplained changes to company policies that have the effect of disfavoring certain viewpoints and delet[e] content and entire accounts with no warning, no rationale, and no recourse.” 24 FCC 21 Packingham v. North Carolina, 137 S. Ct. 1730, 1732 (2017) (“Social media . . . are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.”). 22 E.O. 13925, Section 1. 23 Federal Communications Commission, In the Matter of Protecting and Promoting the Open Internet, GN Docket No. 14-28, FCC 15-24, Report and Order on Remand, Declaratory Ruling, and Order, 2015 WL 1120110, *268 (¶ 545) (quoting Turner, 512 U.S. at 663). 24 E.O. 13925, Section 1; Divino Group LLC, et al. v. Google LLC, et al., 5:19-cv-4749-VKD, Dkt #20 (2d Am. Compl.) at ¶¶ 119-123, 128-247 (N.D. Cal. (San Jose Division), dated Aug. 13, 2019) (class action complaint alleging YouTube censorship of LGBT+ content). 7 Commissioner Brendan Carr has remarked, “there’s no question that [large social media platforms] are engaging in editorial conduct, that these are not neutral platforms.” 25 Others have expressed shock that while large social media platforms will censor or fact-check constitutionally elected democratic leaders, many social media companies welcome and facilitate censorship by the Chinese Communist Party, thereby spreading disinformation and communist propaganda related to China’s mass imprisonment of religious minorities, the origins of the COVID-19 pandemic, and the pro-democracy protests in Hong Kong. 26 Unfortunately, few academic empirical studies exist of the phenomenon of social media bias. Much of social media’s overarching influence and power stems from the immunities it enjoys under expansive interpretations of section 230 of the Communications Decency Act, 27 a provision Congress passed in 1996 at the beginning of the internet era. Many early cases, understandably protective of a nascent industry, read section 230’s protections expansively. But, given the maturing internet economy and emergence of dominant social media platforms, the FCC should re-examine section 230, as well as other provisions of the Communications Act of 1934. 
The FCC should determine how section 230 can best serve its goals of promoting internet 25 Jan Jekielek, On Social Media Bias, Trump’s Executive Order, and the China Data Threat: FCC Commissioner Brendan Carr, The Epoch Times, June 1, 2020, https://www.theepochtimes.com/on-social-media-bias-trumps-executive-order-and-the-chinadata-threat-fcc-commissioner-brendan-carr 3372161.html. 26 See, e.g., Sigal Samuel, China paid Facebook and Twitter to help spread anti-Muslim propaganda, Vox, Aug. 22, 2019, https://www.vox.com/futureperfect/2019/8/22/20826971/facebook-twitter-china-misinformation-ughiur-muslim-internmentcamps; Ryan Gallagher, China’s Disinformation Effort Targets Virus, Researcher Says, Bloomberg News, May 12, 2020, https://www.bloomberg.com/news/articles/2020-05-12/chinas-disinformation-campaign-targets-virus-and-businessman; James Titcomb & Laurence Dodds, Chinese state media use Facebook adverts to champion Hong Kong crackdown, June 8, 2020, https://www.telegraph.co.uk/technology/2020/06/08/chinese-state-media-use-facebook-advertschampion-hong-kong/. 27 47 U.S.C. § 230. 8 diversity and a free flow of ideas, as well as holding dominant platforms accountable for their editorial decisions, in new market conditions and technologies that have emerged since the 1990s. 28 IV. Relevant Facts and Data: Technological and Market Changes Contemporary social media platforms have vastly different offerings, business models, relationships to users and customers, and, indeed, roles in national life than the early online bulletin boards that Prodigy and AOL offered in 1996. The FCC should recognize that the liability protections appropriate to internet firms in 1996 are different because modern firms have much greater economic power, play a bigger, if not dominant, role in American political and social discourse, and, with machine learning and other artificial techniques, have and exercise much greater power to control and monitor content and users. CompuServe, Prodigy, America Online, and their competitors had fundamentally different business models from modern social media companies. 29 They had proprietary server banks, and their business model was to charge consumers for access, with significant surcharges 28 See, e.g., Cubby, Inc. v. CompuServe Inc., 776 F.Supp. 135 (S.D.N.Y. 1991) (addressing CompuServe’s 1990 service providing various online subscriber forums for certain groups). 29 Andrew Pollack, Ruling May Not Aid Videotex, N.Y. Times, Sept. 15, 1987, at D1, https://www.nytimes.com/1987/09/15/business/ruling-may-not-aid-videotex.html (last visited July 27, 2020) (“The Videotex Industry Association estimates that there are 40 consumeroriented services, such as CompuServe and the Source, in the United States, with a total membership of 750,000.”). 9 for use of social features. 30 They were not interoperable, 31 There was thus no online “general public” population about whom information could be known, nor were there business partners to whom information on members of the public could be aggregated and sold. Online services faced a competitive landscape. Online services competed with one another by commissioning or developing their own games, chat systems, financial-markets reporting, news services, and in-network mail services. 32 As users paid to connect, and thus directly funded online services, most online services did not contain advertising. 
The online service business model was not significantly reliant on thirdparty content because access to proprietary content was at the heart of online services’ marketing 30 Id. (“It is unclear, for instance, to what extent the gateway will be able to tell consumers where to go for the information they desire . . . . Each information service has its own commands for information retrieval.”); Michael J. Himowitz, A look at on-line services CompuServe and Prodigy, The Baltimore Sun, Jan. 17, 1994 (“CompuServe [costs] $8.95 per month . . . . Effective Feb. 6, rates for forums and extended services . . . are an additional $4.80 per hour at 1200 or 2400 Baud, $9.60 per hour at 9600 or 14,400 Baud . . . . Prodigy: Most popular plan charges $14.95 per month . . . Additional Plus hours [for use of bulletin boards and stock market prices] are $3.60 each.”). 31 Pollack, supra note 29 (“Each information service has its own commands for information retrieval. With a useful gateway [which did not yet exist], the user would need to know only one set of commands and the gateway would translate them.”); David Bernstein, Interoperability: The Key to Cloud Applications, https://e.huawei.com/en/publications/global/ict insights/hw 376150/feature%20story/HW 3762 86 (last visited July 19, 2020) (“[T]he original online services such as AOL, Prodigy, and CompuServe had no interoperability between them. Content posted on one service could not be consumed by a client connected to a different service. Email could not be sent from a user on one service to a user on another.”). 32 Joanna Pearlstein, MacWorld’s Guide to Online Services, MacWorld, Aug. 1994, at 90 (“Core services include general, business, and sports news; computer forums and news; reference materials; electronic mail and bulletin boards; business statistics and data; games; shopping services; travel services; and educational reference material. Still, the different online services do have different emphases, so even though they all offer a range of basic services, they are not interchangeable.”). 10 efforts. 33 The online services of the late 1990s ran online bulletin boards as a minor sideline and used volunteer moderators from the computer hobbyist community. 34 Their business model was based on fees for connection time and professional database access, not community content. One result of this model was that monitoring users and their content was a burden and regulatory imposition. Zeran, a leading and widely cited case on moderation, reflects this understanding of the technology of that time. 35 The Zeran court took the view, which most section 230 cases accept, that “liability [for third-party posts] upon notice [by an offended 33 James Coats, Getting on-line with cyberspace heavyweights, Chicago Tribune, Feb. 28, 1993 at C8 (“GEnie’s greatest value to me is that it serves as a gateway to the ultraexpensive Dow Jones News/Retrieval service. Typing DOWJONES on GEnie gets me access to hundreds of thousands of newspaper articles - but at a cost well above $2 a minute. Still, when I’m involved in personal research, it empowers me with access to more than 100 different newspapers, wire services and magazines . . . . A costly service [on CompuServe] called IQUEST, for example, gets you access to thousands of newspapers, magazines, books and other research materials. A magazine database lets you search hundreds of thousands of back issues of publications from Playboy to Foreign Policy. 
The catch is that each article you decide to read in full costs $1.50 . . . . Tremendous amounts of information about stocks and investing can be had as well, for a price. You can follow favorite stocks by BasicQuotes and seek out news by company. Much of the famous Standard and Poor’s research data can be had on CompuServe’s S&P Online. Most company filings with the Securities and Exchange Commission can be downloaded on a service called Disclosure. I make heavy use of CompuServe’s Executive News Service, which gives me an electronic ‘clipping service’ providing each day’s news about dozens of firms I follow for my job, as well as other topics . . . . But Delphi takes the Internet much further than the other boards, which confine Internet traffic to electronic mail. With Delphi you can actually hook your home computer up with mainframes and minicomputers all around the world and read and download an almost unimaginably diverse wealth of files.”). 34 Catherine Buni & Soraya Chemaly, The Secret Rules of the Internet: the murky history of moderation, and how it’s shaping the future of free speech, The Verge (April 13, 2016), https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebookreddit-censorship-free-speech (last visited July 19, 2020) (“Moderation’s initially haphazard, laissez-faire culture has its roots here. Before companies understood how a lack of moderation could impede growth and degrade brands and community, moderators were volunteers; unpaid and virtually invisible. At AOL, moderation was managed by a Community Leader program composed of users who had previously moderated chat rooms and reported ‘offensive’ content. They were tasked with building ‘communities’ in exchange for having their subscription fees waived. By 2000, companies had begun to take a more proactive approach.”). 35 Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997). 11 viewer] reinforces service providers’ incentives to restrict speech and abstain from selfregulation.” 36 The court went on to explain that online services cannot possibly take responsibility for third-party content due to its volume; as such, online services will simply prohibit all such content unless they are protected from liability for it. In the court’s words: “If computer service providers were subject to distributor liability, they would face potential liability each time they receive notice of a potentially defamatory statement— from any party, concerning any message. Each notification would require a careful yet rapid investigation of the circumstances surrounding the posted information, a legal judgment concerning the information’s defamatory character, and an on-the-spot editorial decision whether to risk liability by allowing the continued publication of that information. Although this might be feasible for the traditional print publisher, the sheer number of postings on interactive computer services would create an impossible burden in the Internet context.” 37 However, today’s social media companies have adopted a different business model. Rather than provide database access, like Prodigy did, social media offers primarily third-party content. 38 Rather than charge fees, social media platforms profile users in order to categorize 36 Id. at 333. Id. 38 Facebook Investor Relations, https://investor.fb.com/resources/default.aspx (last visited July 19, 2020) (“Founded in 2004, Facebook’s mission is to give people the power to build community and bring the world closer together. 
People use Facebook to stay connected with friends and family, to discover what’s going on in the world, and to share and express what matters to them.”); Twitter Investor Relations, https://investor.twitterinc.com/contact/faq/default.aspx (last visited July 19, 2020) (“What is Twitter’s mission statement? The mission we serve as Twitter, Inc. is to give everyone the power to create and share ideas and information instantly without barriers. Our business and revenue will always follow that mission in ways that improve – and do not detract from – a free and global conversation.”); Google, Our Approach to Search, https://www.google.com/search/howsearchworks/mission/ (last visited July 19, 2020) (“Our company mission is to organize the world’s information and make it universally accessible and useful.”); YouTube Mission Statement, https://www.youtube.com/about/ (last visited July 19, 2020) (“Our mission is to give everyone a voice and show them the world. We believe that everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories.”); Matt Buchanan, Instagram and the Impulse to Capture Every Moment, The New Yorker, June 20, 2013, https://www.newyorker.com/tech/annals-oftechnology/instagram-and-the-impulse-to-capture-every-moment (last visited July 27, 2020) 37 12 them and connect them to advertisers and other parties interested in user information. 39 Online platforms like Twitter, Facebook, and YouTube have content moderation at the heart of their business models. Unlike the early internet platforms, they have invested immense resources into both professional manual moderation and automated content screening for promotion, demotion, monetization, and removal. 40 (“When I think about what Instagram is, I think about moments,” said Kevin Systrom, the photosharing service’s co-founder and C.E.O. “Our mission is to capture and share the world’s moments.”). 39 Len Sherman, Why Facebook Will Never Change Its Business Model, Forbes.com, Apr, 16, 2018, https://www.forbes.com/sites/lensherman/2018/04/16/why-facebook-will-never-changeits-business-model/#7cdac11c64a7 (last visited July 27, 2020) (“By now, it’s widely understood that Facebook’s voracious appetite for user data is driven by their business model which charges advertisers for access to precisely targeted segments of their massive consumer database. No one knows more about more consumers than Facebook”); Twitter and Facebook have differing business models, The Economist, June 6, 2020, https://www.economist.com/business/2020/06/04/twitter-and-facebook-have-differing-businessmodels (last visited July 27, 2020) (“At first blush, Twitter and Facebook look similar. Each is a social network, connecting users online and presenting them with content in a ‘feed’, a neverending list of posts, pictures and videos of pets. Each makes money by selling advertising, and thus has an interest in using every trick to attract users’ attention. And each employs gobbets of data gleaned from users’ behaviour to allow advertisers to hit targets precisely, for which they pay handsomely”); Enrique Dans, Google Vs. Facebook: Similar Business Models, But With Some Very Big Differences, Forbes.com, Feb. 
2, 2019, https://www.forbes.com/sites/enriquedans/2019/02/02/google-vs-facebook-similar-businessmodels-but-with-some-very-big-differences/#6ab9408541ef (last visited July 27, 2020) (“Google does not sell my data or pass it on to any third party, it simply allows that third party to display an advertisement to a segment of its database that includes me, based on certain variables . . . . What is the result of Google knowing about us and our online interests? We receive ads that largely reflect those interests and we still have some control over what we see.”). 40 Zoe Thomas, Facebook content moderators paid to work from home, BBC.com, Mar. 18, 2020, https://www.bbc.com/news/technology-51954968 (last visited July 27, 2020) (“Facebook has approximately 15,000 content moderators in the US, who are hired by third-party contracting companies”); Elizabeth Dwoskin, et al., Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently, Washington Post, July 25, 2019, https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-areoutsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/ (last visited July 27, 2020) (“In the last couple of years, social media companies have created tens of thousands of jobs around the world to vet and delete violent or offensive content . . . .”); Shannon Bond, Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators, National Public 13 Understanding how new entrants can or cannot participate in these intermediary markets is therefore key in understanding appropriate liability regimes; this is particularly important because liability shields can deter entrance. Market observers have significant concerns about barriers to entrance for new social media companies as well as social media’s role with other edge providers in creating mediation markets. It is no secret that today’s online platforms exist in highly concentrated markets. 41 Moreover, the relationship between social media and their adjacent markets is unclear, with mergers and other agreements having the potential for unexpected anticompetitive results. 42 Social media firms also demonstrate network effects and other barriers to entry, which frequently lead to weaker competition. 43 This lack of competition is particularly troubling given the decrease of new entrants documented in the broader economy. 44 Section 230 was designed to assist the nascent internet industry. Pivotal judicial decisions, such as Zeran, interpreted ambiguous language in section 230 broadly, but at a time when different cost structures, business models, and markets prevailed. Given the rapidly Radio, March 31, 2020, https://www.npr.org/2020/03/31/820174744/facebook-youtube-warn-ofmore-mistakes-as-machines-replace-moderators (last visited July 27, 2020) (“Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violate their rules . . . . Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.”). 41 Justin Haucap & Ulrich Heimeshoff, Google, Facebook, Amazon, eBay: Is the Internet driving competition or market monopolization? 11 Int. Econ. Policy 49–61 (2014). 42 Carl Shapiro, Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets. 
33(3) Journal of Economic Perspectives 69 (2019), available at http://faculty.haas.berkeley.edu/shapiro/protectingcompetition.pdf. 43 Steven Berry, Martin Gaynor & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33(3) Journal of Economic Perspectives 44 (2019). 44 Germán Gutiérrez & Thomas Philippon, The Failure of Free Entry. NBER Working Paper No. 26001 (June 2019), available at https://www.nber.org/papers/w26001.pdf. 14 changing markets and relationship between market structure and optimal liability rules, NTIA urges the FCC to re-examine section 230 and work towards transparency in these markets. V. The Authority and Need for Issuing Regulations for Section 230 This section sets forth the FCC’s authority to issue regulations to interpret section 230 and shows how regulations are necessary to resolve the statute’s ambiguities that the E.O. identified. This section further explains how the FCC has jurisdiction to issue regulations, outlines the background and history of section 230, explains its structure, and shows how courts have relied upon its ambiguities to make overly expansive interpretations. Finally, it examines how the section’s ambiguities should be resolved. Specifically, NTIA respectfully requests the FCC to: • clarify the relationship between 230(c)(1) and (c)(2); • explain the meaning of “good faith” and “otherwise objectionable” in section 230(c)(2); • specify how the limitation on the meaning of “interactive computer service” found in section 230(f)(2) should be read into section 230(c)(1); and, • explicate the meaning of “treated as a speaker or publisher” in section 230(c)(1). A. The Commission’s Power to Interpret Section 230 of the Communications Decency Act Section 201(b) of the Communications Act (Act) empowers the Commission to “prescribe such rules and regulations as may be necessary in the public interest to carry out this chapter.” 45 Under this authority, the FCC should promulgate rules to resolve ambiguities in Section 230. The Supreme Court has confirmed that “the grant in section 201(b) means what it 45 47 U.S.C. § 201(b). 15 says: The FCC has rulemaking authority to carry out the ‘provisions of this Act.’” Section 230, in turn, was incorporated into the Act – in the same portion of the Act, Title II, as section 201(b) – by the Telecommunications Act of 1996 (1996 Act). The fact that section 230 was enacted after section 201(b) is of no consequence; the Supreme Court repeatedly has held that the Commission’s section 201(b) rulemaking power extends to all subsequently enacted provisions of the Act, specifically identifying those added by the Telecommunications Act of 1996. 46 Thus, the Commission has authority under section 201(b) to initiate a rulemaking to implement section 230. That broad rulemaking authority includes the power to clarify the language of that provision, as requested in the petition. The Commission has authority to implement section 230 through regulation even if this section was added to the 1934 Act through the amendments in the Telecommunications Act of 1996. It does not matter if the provision specifically mentions or contemplates FCC regulation. For instance, section 332(c)(7), which was also added to the Act by the 1996 Act, limits State and local decision-making on the placement, construction, or modification of certain wireless service facilities. 
The section makes no mention of FCC authority, only alluding to the Commission in passing and giving it no role in the provision’s implementation. The Supreme Court nonetheless upheld the Commission’s authority to issue regulations pursuant to section 332(c)(7) for the simple reason that it was codified within the 1934 Act, and section 201(b) empowers the Commission to promulgate rules interpreting and implementing the entire Act. 47 Similarly, in Iowa Utilities, the Supreme Court ruled that the FCC had rulemaking authority to implement sections 251 and 252 of the Act. 48 As with section 332, these sections did not explicitly grant the Commission power over all aspects of their implementation, arguably excluding intrastate and other areas. Nonetheless, the Court ruled that “§ 201(b) explicitly gives the FCC jurisdiction to make rules governing matters to which the 1996 Act applies.” 49 These two decisions, and their underlying rationales, compel the same result for a Commission rulemaking to interpret section 230, and the rationale is simple and inarguable: if Congress chooses to codify a section into the 1934 Communications Act, then section 201(b) gives the FCC the power to clarify and implement it through regulation. Neither section 230’s text, nor any speck of legislative history, suggests any congressional intent to preclude the Commission’s implementation. This silence further underscores the presumption that the Commission has power to issue regulations under section 230. As the Fifth Circuit noted with respect to section 332(c)(7), “surely Congress recognized that it was legislating against the background of the Communications Act’s general grant of rulemaking authority to the FCC.” 50 Accordingly, if Congress wished to exclude the Commission from the interpretation of section 230, “one would expect it to have done so explicitly.” Congress did not do so and, as was the case for section 332(c)(7), that decision opens an ambiguity in section 230 that the Commission may fill pursuant to its section 201(b) rulemaking authority. 46 AT&T Corp. v. Iowa Utilities Bd., 525 U.S. 366, 378 (1999) (“We think that the grant in § 201(b) means what it says: The FCC has rulemaking authority to carry out the “provisions of this Act,” which include §§ 251 and 252, added by the Telecommunications Act of 1996”); City of Arlington v. FCC, 668 F.3d 229, 250 (5th Cir. 2012), aff’d, 569 U.S. 290 (2013) (“Section 201(b) of that Act empowers the Federal Communications Commission to “prescribe such rules and regulations as may be necessary in the public interest to carry out [its] provisions. Of course, that rulemaking authority extends to the subsequently added portions of the Act.”). 47 City of Arlington, 569 U.S. at 293 (“Of course, that rulemaking authority [of section 201(b)] extends to the subsequently added portions of the Act”). 48 Iowa Util. Bd., 525 U.S. at 378-87. 49 Iowa Util. Bd., 525 U.S. at 380. 50 Arlington, 668 F.3d at 250. B. Background to Section 230 Section 230 reflects a congressional response to a New York state case, Stratton Oakmont, Inc. v. Prodigy Servs. Co., decided in 1995. 51 In Stratton Oakmont, a New York trial court reasoned that Prodigy had become a “publisher” under defamation law because it voluntarily deleted some messages from its message boards “on the basis of offensiveness and ‘bad taste,’” and was liable for the acts of its agent, the “Board Leader” of the message board, whom it had hired to monitor postings on its bulletin board.
The court held that Prodigy, having undertaken an affirmative duty to remove content, therefore was legally responsible for failing to remove an allegedly defamatory posting. 52 The U.S. Court of Appeals for the Ninth Circuit explained that: “[t]he Stratton Oakmont court concluded that when a platform engages in content 51 Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008) (“Section 230 was prompted by a state court case holding Prodigy responsible for a libelous message posted on one of its financial message boards”); Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1101 (9th Cir. 2009) (“This is not surprising, because, as we and some of our sister circuits have recognized, Congress enacted the Amendment in part to respond to a New York state court decision, Stratton Oakmont, [citations omitted,] which held that an internet service provider could be liable for defamation.”); Barrett v. Rosenthal, 40 Cal. 4th 33, 44, 146 P.3d 510, 516 (2006) (“The legislative history indicates that section 230 was enacted in response to an unreported New York trial court case.”); Sen. Rep. No. 104-230, 2d. Session at 194 (1996) (“One of the specific purposes of [section 230] is to overrule Stratton Oakmont v. Prodigy and any other similar decisions”); see also H.R. Conf. Rep. No. 104-458, at 208 (“The conferees believe that [decisions like Stratton Oakmont] create serious obstacles to the important federal policy of empowering parents to determine the content of communications their children receive through interactive computer services”); 141 Congressional Record H8469–H8470 (daily ed., June 14, 1995) (statement of Rep. Cox, referring to disincentives created by the Stratton Oakmont decision); Blumenthal v. Drudge, 992 F. Supp. 44, 52 n.13 (D.D.C. 1998) (“the legislative history makes clear that one of the primary purposes of Section 230 was to overrule the Stratton Oakmont decision”). 52 Stratton Oakmont, 1995 WL 323720 at *4. 18 moderation, or ‘some voluntary self-policing,’ the platform becomes ‘akin to a newspaper publisher, and thus responsible for messages on its bulletin board that defamed third parties.’” 53 Stratton Oakmont applied established tort law, which makes “publishers” liable for defamatory material. 54 Traditionally, tort law defines “publication” as simply the “communication intentionally or by a negligent act to one other than the person defamed.” 55 But because the publication element of a defamation claim can also be satisfied when someone unreasonably fails to remove a communication exhibited via means in his possession or control, the Stratton Oakmont court concluded that Prodigy’s content moderation or “voluntary selfpolicing” of the bulletin board rendered Prodigy a publisher of a defamatory statement on its board. Therefore, Prodigy was liable as a publisher. 56 Stratton Oakmont distinguishes an earlier case, Cubby, Inc. v. CompuServe, Inc., 57 which ruled an internet bulletin board was not the publisher of material on its bulletin board. The key distinguishing factor was that in Cubby, CompuServe did not moderate postings. The court ruled that CompuServe was not a publisher, but rather what tort law terms a “distributor,” i.e., one “who merely transmit[s] defamatory content, such as news dealers, video rental outlets, 53 Fair Hous. Council, 521 F.3d at 1163. Barnes, 570 F.3d at 1104, citing W. Page Keeton, et al., Prosser and Keeton on the Law of Torts § 113, at 799 (5th ed. 
1984) (“[E]veryone who takes part in the publication, as in the case of the owner, editor, printer, vendor, or even carrier of a newspaper is charged with publication.”); see also Cianci v. New Times Publ’g Co., 639 F.2d 54, 60–61 (2d Cir.1980) (noting the “blackletter rule that one who republishes a libel is subject to liability just as if he had published it originally”). 55 Restatement (Second) of Torts § 577. 56 Stratton Oakmont, 1995 WL 323710 at *5 (“PRODIGY’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”); Barnes, 570 F.3d at 1102 (“publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content”); see Rodney A. Smolla, Law of Defamation § 4:77 (2d ed., 1999). 57 Cubby, 776 F.Supp. 135. 54 19 bookstores, libraries, and other distributors and vendors.” 58 “Distributors” are subject to liability “if, but only if, they know or have reason to know of the content’s defamatory character.” 59 Thus, publishers had strict liability for materials they published, whereas distributors only had liability for publishing defamation with actual or constructive knowledge of its defamatory character. 60 The Stratton Oakmont court reasoned that, in Cubby, CompuServe “had no opportunity to review the contents of the publication at issue before it was uploaded into CompuServe’s computer banks,” and, therefore, CompuServe had no liability for defamatory posts on platforms that it owned and controlled as distributor. 61 While following established common law tort rules, the Stratton Oakmont and Cubby cases presented internet platforms with a difficult choice: voluntarily moderate unlawful or obscene content and thereby become liable for all messages on their bulletin boards, or do nothing and allow unlawful and obscene content to cover their bulletin boards unfiltered. In litigation, Prodigy claimed that the “sheer volume” of message board postings it received—by our current standards a humble “60,000 a day”—made manually reviewing every message impossible. If forced to choose between taking responsibility for all messages and deleting no messages at all, it would take the latter course. 62 Thus, given the technological differences between an internet platform and a bookstore or library, the former’s ability to aggregate a much greater volume of information, traditional liability rules became strained. Tort law risked disincentivizing platforms from editing or moderating any content for fear they would become liable for all third-party content. 58 Smolla § 4:92. Restatement (Second) of Torts § 581(1) (1977). 60 Prosser, supra note 54, § 113 at 803. 61 Stratton Oakmont, 1995 WL 323710 at *2-3. 62 Stratton Oakmont, 1995 WL 323710 at *3. 59 20 Congress intended section 230 to address this difficult liability problem, but nothing in the law’s history, purpose or text allows for the conclusion that internet platforms should avoid all responsibility for their own editing and content-moderating decisions. Indeed, section 230 was originally titled the “Online Family Empowerment” amendment to the Communications Decency Act, which was titled, “protection for private blocking and screening of offensive material.” 63 Responding to pornography and obscene material on the web, Congress designed section 230 to encourage platforms to moderate specific types of content, mostly related to sexual material inappropriate to minors. 
Congress did not intend a vehicle to absolve internet and social media platforms—which, in the age of dial-up internet bulletin boards, such as Prodigy, did not exist—from all liability for their editorial decisions. Representatives Christopher Cox and Ron Wyden floated the bill that became section 230 as an alternative to Senator J. James Exon’s bill that criminalized the transmission of indecent material to minors. 64 In public comments, Representative Cox explained that the section 230 would reverse Stratton Oakmont and advance the regulatory goal of allowing families greater power to control online content. 65 The final statute reflected his stated policy: “to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer 63 Telecommunications Act of 1996, Pub. L. No. 104-104, title V, Sec. 509 (1996). Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 Fed. Comm. L.J. 51 (1996); Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 293, 316 (2011); 141 Cong. Rec. H8468-69 (daily ed. Aug. 4, 1995); Ashcroft v. Am. Civil Liberties Union, 535 U.S. 564, 564 (2002) (“[T]he Communications Decency Act reflected Congress’s response to the proliferation of pornographic, violent and indecent content on the web Congress’ first attempt to protect children from exposure to pornographic material on the Internet.”). 65 See 141 Cong. Rec. H8469-70 (daily ed. Aug. 4, 1995) (statement of Rep. Cox). 64 21 services.” 66 The comments in the Congressional record from supporting congressmen and women—and it received strong bi-partisan support—reveal an understanding that the Online Family Empowerment amendment, now codified as section 230, as a non-regulatory approach to protecting children from pornography, 67 intended to provide incentives for “Good Samaritan” blocking and screening of offensive material. C. Section 230(c)’s Structure To further these goals, Congress drafted the “Good Samaritan” exception to publisher liability. Section 230(c)(1) has a specific focus: it prohibits “treating” “interactive computer services,” i.e., internet platforms, such as Twitter or Facebook, as “publishers.” But, this provision only concerns “information” provided by third parties, i.e., “another internet content provider” 68 and does not cover a platform’s own content or editorial decisions. The text of section 230(c)(1) states: (c) Protection for “Good Samaritan” blocking and screening of offensive material: (1) Treatment of publisher or speaker 66 47 U.S.C. § 230(b)(3). See 141 Cong. Rec. H8470 (1995) (statement of Rep. White) (“I want to be sure we can protect [children] from the wrong influences on the Internet. But . . . the last person I want making that decision is the Federal Government. In my district right now there are people developing technology that will allow a parent to sit down and program the Internet to provide just the kind of materials that they want their child to see. That is where this responsibility should be, in the hands of the parent. That is why I was proud to cosponsor this bill that is what this bill does . . . .”); id., (statement of Rep. Lofgren) (“[The Senate approach] will not work. It is a misunderstanding of the technology. The private sector is out giving parents the tools that they have. 
I am so excited that there is more coming on. I very much endorse the Cox-Wyden amendment . . . .”). 68 47 U.S.C. § 230(c)(1). 67 22 No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Section (c)(2) also has a specific focus: it eliminates liability for interactive computer services that act in good faith “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” 69 Subsection (c)(2) governs the degree to which some of the platform’s own content moderation decisions receive any legal protection, stating: “No provider or user of an interactive computer service shall be held liable on account of(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected . . . .” Here, Congress protects “any action . . . taken in good faith to restrict access to or availability of material.” This means any social media platform’s editorial judgment, moderation, content editing or deletion receives legal immunity, but the plain words of the provision indicate that this protection only covers decisions to restrict access to certain types of enumerated content. As discussed infra, these categories are quite limited and refer primarily to traditional areas of media regulation—also consistent with legislative history’s concern that private regulation could create family-friendly internet spaces—and only actions within these categories taken in “good faith.” 69 47 U.S.C. § 230(c)(2). 23 D. Expansive Court Rulings Tied to Early Platforms and Outdated Technology Courts have recognized that “Congress enacted this provision for two basic policy reasons: to promote the free exchange of information and ideas over the Internet and to encourage voluntary monitoring for offensive or obscene material.” 70 Congress intended sections 230(c)(1) and (c)(2) to protect platform openness and monitoring for certain specific issues. But, as discussed infra, ambiguous language in these statutes allowed some courts to broadly expand section 230’s immunity from beyond its original purpose into a bar any legal action or claim that involves even tangentially “editorial judgment.” 71 These subsequent protections established from “speaker or publisher” are overly broad and expansive, and often have absolutely nothing to do with the original harm section 230 was meant to remedy: relieving platforms of the burden of reading millions of messages to detect for defamation as Stratton Oakmont would require. Far and above initially intended viewer protection, courts have ruled section 230(c)(1) offers immunity from contracts, 72 consumer fraud, 73 revenge pornography, 74 70 Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122 (9th Cir. 2003). See, e.g., Sikhs for Justice “SFJ”, Inc. v. Facebook, Inc., 144 F.Supp.3d 1088, 1094–1095 (N.D.Cal. 2015). 72 Caraccioli v. Facebook, Inc., 167 F. Supp.3d 1056, 1064-66 (N.D. Cal. 2016) (dismissing breach of contract claim and Cal. Bus. & Prof. Code § 17200 unfair practices claim); Lancaster v. Alphabet Inc., No. 2016 WL 3648608, at *5 (N.D. Cal. 
July 8, 2016) (dismissing claim for breach of covenant of good faith and fair dealing); Jurin v. Google, Inc., 695 F. Supp. 2d 1117, 1122–23 (E.D. Cal. 2010) (dismissing claim for fraud); Fed. Agency of News LLC, et al. v. Facebook, Inc., 395 F. Supp. 3d 1295 (N.D. Cal. 2019) (dismissing discrimination claims under Title II and 42 U.S.C. § 1983); Obado v. Magedson, 43 Media L. Rep. 1737 (D.N.J. 2014) (dismissing claim for promissory estoppel), aff’d, 612 F. App’x 90 (3d Cir. 2015). 73 See Gentry v. eBay, Inc., 121 Cal. Rptr. 2d 703 (Cal. Ct. App. 2002); Hinton v. Amazon, 72 F. Supp. 3d 685, 687 (S. D. Miss. 2014); Oberdorf v. Amazon, 295 F. Supp. 3d 496 (Mid. D. PA Dec. 21, 2017). 74 Jones v. Dirty World Entertainment Holding LLC, 755 F.3d 398 (6th Cir. 2014); S.C. v. Dirty World LLC, 40 Media L. Rep. 2043 (W.D. Mo. 2012); Poole v. Tumblr, Inc., 404 F. Supp. 3d 637 (D. Conn. 2019). 71 24 anti-discrimination civil rights obligations, 75 and even assisting in terrorism. 76 By expanding protections beyond defamation, these courts extend to platforms a privilege to ignore laws that every other communications medium and business must follow and that are no more costly or difficult for internet platforms to follow than any other business. The problem of overly expansive interpretations for section 230 is not merely hypothetical. Tens of thousands of Americans have reported, among other troubling behaviors, online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to company policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse. As FCC Commissioner Brendan Carr has observed, social media such as Twitter “punis[h] speakers based on whether it approves or disapproves of their politics.” 77 One can hardly imagine a result more contrary to Congress’s intent to preserve on the internet “a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” 78 Further, by making contract and consumer fraud claims concerning moderation unenforceable under section 230, courts seriously injure section 230’s goal “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services.” 79 Content moderation policies become, as FCC Commissioner Brendan 75 Sikhs for Justice “SFJ”, Inc., 144 F. Supp.3d 1088, 1094-1095. Force, 934 F.3d at 57. 77 Jon Brokin, Arstechnica, FCC Republican excitedly endorses Trump’s crackdown on social media, May 29, 2020, https://arstechnica.com/tech-policy/2020/05/fcc-republican-excitedlyendorses-trumps-crackdown-on-social-media/. 78 47 U.S.C. § 230(a)(1). 79 47 U.S.C. § 230(b)(2). 76 25 Carr recently described Twitter’s moderation policy, “free speech for me, but not for thee.” 80 Further, if interactive computer services’ contractual representations about their own services cannot be enforced, interactive computer services cannot distinguish themselves. Consumers will not believe, nor should they believe, representations about online services. Thus, no service can credibly claim to offer different services, further strengthening entry barriers and exacerbating competition concerns. Much of this overly expansive reading of section 230 rests on a selective focus on certain language from Zeran, a case from the United States of Appeals for the Fourth Circuit. 
81 The line of court decisions expanding section 230 in such extravagant ways relies on Zeran’s reference to: “lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content— are barred.” 82 This language arguably provides full and complete immunity to the platforms for their own publications, editorial decisions, content-moderating, and affixing of warning or factchecking statements. 83 But, it is an erroneous interpretation, plucked from its surrounding context and thus removed from its more accurate meaning. 80 News Break, Brendan Carr Decries Twitter Censorship as ‘Free Speech for Me, but Not for Thee, June 11, 2020, https://www.newsbreak.com/news/1582183608723/brendan-carr-decriestwitter-censorship-as-free-speech-for-me-but-not-for-thee. 81 Zeran, 129 F.3d at 327. 82 Zeran, 129 F.3d at 330. 83 These lines from Zeran have led some courts to adopt the so-called three part section 230(c)(1) test: (1) whether Defendant is a provider of an interactive computer service; (2) if the postings at issue are information provided by another information content provider; and (3) whether Plaintiff's claims seek to treat Defendant as a publisher or speaker of third party content. Okeke v. Cars.com, 966 N.Y.S.2d 843, 846 (Civ. Ct. 2013), citing Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 564 F. Supp. 2d 544, 548 (E.D. Va. 2008), aff’d, 591 F.3d 250 (4th Cir. 2009). As the text explains, this so-called test errs in the third prong. The question is not whether the claim treats defendant as a publisher or speaker—after all, virtually every legal claim (contract, fraud, civil rights violations) would do so. The question is whether liability is 26 In fact, the quotation refers to third party’s exercise of traditional editorial function—not those of the platforms. As the sentence in Zeran that is immediately prior shows, section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” In other words, the liability from which section 230(c)(1) protects platforms is that arising from the content that the third-party posts—i.e. the “information” posted by “another information provider” and those information providers’ editorial judgments. In light of the history of publisher and distributor liability law upon which section 230 draws, as well as its actual text, the best way to interpret the distinction between section 230(c)(1) and (c)(2) is as follows: Section 230(c)(1) applies to acts of omission—to a platform’s failure to remove certain content. In contrast, section 230(c)(2) applies to acts of commission—a platform’s decisions to remove. Section 230(c)(1) does not give complete immunity to all a platform’s “editorial judgments.” E. Need for FCC Regulations: Ambiguities in Section 230 Section 230 contains a number of ambiguities that courts have interpreted broadly in ways that are harmful to American consumers, free speech, and the original objective of the statute. First, as discussed below, uncertainty about the interplay between section 230(c)(1) and (c)(2) has led many courts to a construction of the two provisions that other courts consider to be anomalous or lead to rendering section 230(c)(2) superfluous. 
Second, the interplay between section 230(c)(1) and (c)(2) does not make clear at what point a platform’s moderation and presentation of content becomes so pervasive that it becomes an information content provider based on the content of third-party information and, therefore, falls outside of section 230(c)(1)’s protections. Requiring platforms to monitor the content of thousands of posts was the impetus behind section 230. 27 Third, critical phrases in section 230(c)(2)—the “otherwise objectionable” material that interactive computer service providers may block without civil liability; and the “good faith” precondition for activating that immunity—are ambiguous on their face. And, with respect to the former, courts have posited starkly divergent interpretations that can only create uncertainty for consumers and market participants. Finally, what it means to be an “information content provider” or to be “treated as a publisher or speaker” is not clear in light of today’s new technology and business practices. The Commission’s expertise makes it well equipped to address and remedy section 230’s ambiguities and provide greater clarity for courts, platforms, and users. 1. The Interaction Between Subparagraphs (c)(1) and (c)(2) Ambiguity in the relationship between subparagraphs (c)(1) and (c)(2) has resulted in courts reading section 230(c)(1) in an expansive way that risks rendering (c)(2) a nullity. Numerous district court cases have held that section 230(c)(1) applies to removals of content, not section 230(c)(2) with its exacting “good faith” standard. 84 For instance, in Domen v. Vimeo, a federal district court upheld the removal of videos posted by a religious group questioning a California law’s prohibition on so-called sexual orientation change efforts (SOCE), and the law’s effect on pastoral counseling. Finding the videos were “harassing,” the court upheld their removal under both section 230(c)(1) and section (c)(2), ruling that these sections are coextensive, rather than aimed at very different issues. 85 In doing so, the court rendered section 84 Domen v. Vimeo, Inc., 433 F. Supp. 3d 592, 601 (S.D.N.Y. 2020); Lancaster v. Alphabet, Inc., 2016 WL 3648608 (N.D. Cal. July 28, 2016); Sikhs for Justice “SFJ”, Inc., 144 F.Supp.3d 1088. 85 Domen, 433 F. Supp. 3d at 601 (“the Court finds that Vimeo is entitled to immunity under either (c)(1) or (c)(2)”). 28 230(c)(2) superfluous—reading its regulation of content removal as completely covered by section 230(c)(1)’s regulation of liability for user-generated third-party content. The Commission should promulgate a regulation to clarify the relationship between the two provisions so that section 230(c)(1) does not render section 230(c)(2) superfluous. To determine how these subparagraphs interact (or, as E.O. 13925 specifically instructs, “to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1)”), 86 the FCC should determine whether the two subsections’ scopes are additive. While some courts have read section 230(c)(1) “broadly,” 87 few have provided any principled distinction between the two subsections. NTIA urges the FCC to follow the canon against surplusage in any proposed rule.
88 Explaining this canon, the Supreme Court holds, “[a] statute should be construed so that effect is given to all its provisions, so that no part will be inoperative or superfluous, void or insignificant . . . .” 89 The Court emphasizes that the canon “is strongest when an interpretation would render superfluous another part of the same statutory scheme.” 90 While some district courts, such as Domen discussed above, have ruled that section 230(c)(1) applies to content removal, which is section 230(c)(2)’s proper domain, those courts 86 E.O. 13925 § 2(b)(i). See Force, 934 F.3d at 64. 88 Marx v. General Revenue Corp., 568 U.S. 371, 385 (2013). 89 Corley v. United States, 556 U.S. 303, 314 (2009), quoting Hibbs v. Winn, 542 U.S. 88, 101 (2004). 90 Marx, 568 U.S. at 386; see also Fair Hous. Council, 521 F.3d at 1167-68 (avoiding superfluity in interpreting the “developer” exception in Section 230(f)(3) of the CDA). 87 29 that have explicitly inquired into the proper relationship between the two subparagraphs have followed the surplusage canon—ruling that the provisions cover separate issues 91 and “address different concerns.” 92 “Section 230(c)(1) is concerned with liability arising from information provided online,” while “[s]ection 230(c)(2) is directed at actions taken by Internet service providers or users to restrict access to online information.” 93 Thus, “[s]ection 230(c)(1) provides immunity from claims by those offended by an online publication, while section 230(c)(2) protects against claims by those who might object to the restriction of access to an online publication.” 94 Courts have refused to “interpret[] the CDA . . . [to allow] the general immunity in (c)(1) [to] swallow[] the more specific immunity in (c)(2)” because subsection (c)(2) immunizes only an interactive computer service’s “actions taken in good faith.” 95 NTIA suggests that the FCC can clarify the relationship between section 230(c)(1) and section 230(c)(2) by establishing the following points. First, the FCC should make clear that section 230(c)(1) applies to liability directly stemming from the information provided by third-party users. Section 230(c)(1) does not immunize a platform’s own speech, its own editorial decisions or comments, or its decisions to restrict access to content or bar users from a platform. Second, section 230(c)(2) covers decisions to restrict content or remove users. NTIA, therefore, requests that the Federal Communications Commission add the below Subpart E to 47 CFR Chapter I: Subpart E. Interpreting Subsection 230(c)(1) and Its Interaction With Subsection 230(c)(2). 91 See, e.g., Zango, 568 F.3d at 1175 (holding that (c)(2) is a “different . . . statutory provision with a different aim” than (c)(1)). 92 Barrett, 40 Cal. 4th 33. 93 Id. at 49 (emphasis added). 94 Id. (emphasis added). 95 e-ventures Worldwide, LLC v. Google, Inc., 2017 U.S. Dist. LEXIS 88650, at *9 (M.D. Fla. Feb. 8, 2017). 30 § 130.01 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A and within this regulation, the following shall apply: (a) 47 U.S.C. 230(c)(1) applies to an interactive computer service for claims arising from failure to remove information provided by another information content provider. Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service.
Any applicable immunity for matters described in the immediately preceding sentence shall be provided solely by 47 U.S.C. § 230(c)(2). (b) An interactive computer service is not a publisher or speaker of information provided by another information content provider solely on account of actions voluntarily taken in good faith to restrict access to or availability of specific material in accordance with subsection (c)(2)(A) or consistent with its terms of service or use. 2. The Meaning of Section 230(c)(2) Section 230(c)(2)’s ambiguities include (1) how to interpret “otherwise objectionable” and (2) “good faith.” a. “Otherwise objectionable” If “otherwise objectionable” means any material that any platform “considers” objectionable, then section 230(c)(2) offers de facto immunity to all decisions to censor content. And some district courts have so construed section 230(c)(2). 96 But, many courts recognize 96 Domen v. Vimeo, Inc., 2020 U.S. Dist. LEXIS 7935 (S.D.N.Y. Jan. 15, 2020), appeal filed, No. 20-616 (Feb. 18, 2020) (“Section 230(c)(2) is focused upon the provider’s subjective intent of what is ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’ That section ‘does not require that the material actually be objectionable; rather, it affords protection for blocking material “that the provider or user considers to be” objectionable.’”); Langdon v. Google, Inc., 474 F. Supp. 2d 622, 631 (D. Del. 2007) (“Plaintiff argues there was no refusal to run his ads on the basis they were obscene or harassing, and that Defendants cannot create ‘purported reasons for not running his ads.’ He omits, however, reference to that portion of § 230 which provides immunity from suit for restricting material that is ‘otherwise objectionable.’”). 31 limiting principles. Many look to the statutory canon of ejusdem generis, which holds that catchall phrases at the end of statutory lists should be construed in light of the other phrases. 97 In this light, section 230(c)(2) applies only to obscene, violent, or other disturbing matters. 98 Understanding the section 230(c)(2) litany of terms has proved difficult for courts, particularly in determining how spam filtering and filtering for various types of malware fit into the statutory framework. Most courts have ruled that “restrict[ing] access” to spam falls within the section 97 Washington State Dep’t of Soc. & Health Servs. v. Guardianship Estate of Keffeler, 537 U.S. 371, 372 (2003) (“under the established interpretative canons of noscitur a sociis and ejusdem generis, where general words follow specific words in a statutory enumeration, the general words are construed to embrace only objects similar to those enumerated by the specific words”). 98 Darnaa, LLC v. Google, Inc., 2016 WL 6540452, at *8 (N.D. Cal. 2016) (“The context of § 230(c)(2) appears to limit the term to that which the provider or user considers sexually offensive, violent, or harassing in content.”); Song Fi, Inc. v. Google, Inc., 108 F. Supp. 3d 876, 883 (N.D. Cal. 2015) (“First, when a statute provides a list of examples followed by a catchall term (or ‘residual clause’) like ‘otherwise objectionable,’ the preceding list provides a clue as to what the drafters intended the catchall provision to mean,” citing Circuit City Stores v. Adams, 532 U.S. 105, 115 (2001)). This is the rationale for the canon of construction known as eiusdem generis (often misspelled ejusdem generis), which is Latin for “of the same kind.” National Numismatic v. eBay, 2008 U.S. Dist.
LEXIS 109793, at *25 (M.D. Fla. Jul. 8, 2008) (“Section 230 is captioned ‘Protection for “Good Samaritan” blocking and screening of offensive material,’ yet another indication that Congress was focused on potentially offensive materials, not simply any materials undesirable to a content provider or user”); Sherman v. Yahoo! Inc., 997 F. Supp. 2d 1129 (S.D. Cal. 2014) (text messages allegedly violated the Telephone Consumer Protection Act; Yahoo! raised section 230(c)(2)(B) as a defense) (“The Court declines to broadly interpret ‘otherwise objectionable’ material to include any or all information or content. The Ninth Circuit has expressed caution at adopting an expansive interpretation of this provision where providers of blocking software ‘might abuse th[e CDA] immunity to block content for anticompetitive purposes or merely at its malicious whim, under the cover of considering such material “otherwise objectionable” under § 230(c)(2).’”); Goddard v. Google, Inc., 2008 U.S. Dist. LEXIS 101890 (N.D. Cal. Dec. 17, 2008) (“‘[i]t is difficult to accept . . . that Congress intended the general term “objectionable” to encompass an auction of potentially-counterfeit coins when the word is preceded by seven other words that describe pornography, graphic violence, obscenity, and harassment.’ In the instant case, the relevant portions of Google’s Content Policy require that MSSPs provide pricing and cancellation information regarding their services. These requirements relate to business norms of fair play and transparency and are beyond the scope of § 230(c)(2).”). 32 230(c)(2) framework, although that is perhaps difficult to see as a textual matter. 99 Spam, though irritating and destructive of the online experience, does not fit clearly into the litany in section 230, at least as courts have understood this litany. The spam cases have prompted courts to examine the thread that runs through the list in section 230. A recent Ninth Circuit case perceptively sees the challenge: On one hand, “decisions recognizing limitations in the scope of immunity [are] persuasive,” 100 and “interpreting the statute to give providers unbridled discretion to block online content would . . . enable and potentially motivate internet-service providers to act for their own, and not the public, benefit.” 101 On the other hand, the court recognized that “the specific categories listed in § 230(c)(2) vary greatly: [m]aterial that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category. We have previously recognized this concept.” 102 Yet, in fact, the original purpose of the Communications Decency Act—“to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online 99 Asurvio LP v. Malwarebytes Inc., 2020 U.S. Dist. LEXIS 53906 (N.D. Cal. Mar. 26, 2020) (allegation that Malwarebytes wrongfully classified Asurvio’s software as malware); PC Drivers Headquarters, LP v. Malwarebytes Inc., 371 F. Supp. 3d 652 (N.D. Cal. 2019) (malware); Shulman v. FACEBOOK.com, 2018 U.S. Dist. LEXIS 113076 (D.D.C. Jul. 9, 2018) (spam); Holomaxx Technologies v. Microsoft Corp., 783 F. Supp. 2d 1097 (N.D. Cal. 2011) (spam); Smith v. Trusted Universal Stds. in Elec. Transactions, Inc., 2010 U.S. Dist. LEXIS 43360 (D. N.J.
May 4, 2010) (deletion of spam); e360insight v. Comcast Corp., 546 F. Supp. 2d 605 (N.D. Ill. 2008) (spam); Zango v. Kaspersky Lab., 568 F.3d 1169 (9th Cir. 2009) (competitive blocking software). 100 Enigma Software Grp. USA v. Malwarebytes, Inc., 946 F.3d 1040, 1050 (9th Cir. 2019). 101 Id. 102 Id. at 1051. 33 material” 103—suggests that the thread uniting section 230(c)(2)’s terms is a focus on material that was objectionable in 1996 and for which there was already regulation—regulation that Congress intended section 230 to provide incentives for free markets to emulate. The first four adjectives in subsection (c)(2), “obscene, lewd, lascivious, filthy,” are found in the Comstock Act as amended in 1909. 104 The Comstock Act prohibited the mailing of “every obscene, lewd, or lascivious, and every filthy book, pamphlet, picture, paper, letter, writing, print, or other publication of an indecent character.” 105 In addition, the CDA used the terms “obscene or indecent,” prohibiting the transmission of any “obscene or indecent” message. 106 The Act’s second provision, section 223(d), which was declared unconstitutional in Reno v. ACLU, prohibited the knowing sending or displaying of “any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs, regardless of whether the user of such service placed the call or initiated the communication.” 107 This “patently offensive” language derives from the definition of indecent speech set forth in the Pacifica decision, a category of speech that the FCC continues to regulate to this day. 108 103 47 U.S.C. § 230(a)(4). Section 3893 of the Revised Statutes, as amended by section 211 of the Criminal Code, Act of March 4, 1909, c. 321, 35 Stat. 1088, 1129; United States v. Limehouse, 285 U.S. 424, 425 (1932) (stating that “Section 211 of the Criminal Code (18 USCA § 334) declares unmailable ‘every obscene, lewd, or lascivious, and every filthy book, pamphlet, picture, paper, letter, writing, print, or other publication of an indecent character’”) (additional citation added). The phrase is repeated in numerous state statutes. 105 Id. at 424-6. 106 47 U.S.C. § 223(a) (May 1996 Supp.). 107 521 U.S. 844 (1997). 108 FCC v. Pacifica Found., 438 U.S. 726, 732 (1978) (“patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory activities and organs”). 104 34 The next two terms in the list, “excessively violent” and “harassing,” also refer to typical concerns of communications regulation which were, in fact, stated concerns of the CDA itself. Congress and the FCC have long been concerned about the effect of violent television shows, particularly upon children; indeed, concern about violence in media was an impetus for the passage of the Telecommunications Act of 1996, of which the CDA is a part. Section 551 of the Act, entitled Parental Choice in Television Programming, requires televisions over a certain size to contain a device, later known as the V-chip. This device allows viewers to block programming according to an established rating system. 109 The legislation led to a ratings system for broadcast television that identified violent programming. 110 The FCC then used this authority to require televisions to allow blocking technology. 111 And, of course, Congress and the FCC have long regulated harassing wire communications.
Section 223, Title 47, the provision which the CDA amended and into which the CDA was in part codified, is a statute that prohibits the making of “obscene or harassing” 109 47 U.S.C. § 303(x). See Technology Requirements to Enable Blocking of Video Programming Based on Program Ratings, 63 Fed. Reg. 20, 131 (Apr. 23, 1998) (“[T]he Commission is amending the rules to require . . . technological features to allow parents to block the display of violent , sexual, or other programming they believe is harmful to their children. These features are commonly referred to as ‘v-chip’ technology.”). Finding that “[t]here is a compelling governmental interest in empowering parents to limit the negative influences of video programming that is harmful to children,” Congress sought to “provid[e] parents with timely information about the nature of upcoming video programming and with the technological tools” to block undesirable programming by passing the Telecommunications Act of 1996 (the “Telecommunications Act”). 110 FCC News, Commission Finds Industry Video Programming Rating System Acceptable, Report No. GN 98-3 (Mar. 12, 1998), available at https://transition.fcc.gov/Bureaus/Cable/News Releases/1998/nrcb8003.html. 111 Amy Fitzgerald Ryan, Don’t Touch That V-Chip: A Constitutional Defense of the Television Program Rating Provisions of the Telecommunications Act of 1996, 87 Geo. L.J. 823, 825 (1999), citing Lawrie Mifflin, TV Networks Plan Ratings System, Orange County Reg., Feb. 15, 1996, at A1. 35 telecommunications. 112 These harassing calls include “mak[ing] or caus[ing] the telephone of another repeatedly or continuously to ring, with intent to harass any person at the called number” or “mak[ing] repeated telephone calls or repeatedly initiates communication with a telecommunications device, during which conversation or communication ensues, solely to harass any person at the called number or who receives the communication.” 113 Roughly half of the States also outlaw “harassing” wire communications via telephone. 114 Congress enacted the Telephone Consumer Protection Act (TCPA), recently upheld in most part by the Supreme Court, 115 to ban “automated or prerecorded telephone calls, regardless of the content or the initiator of the message,” that are considered “to be a nuisance and an invasion of privacy.” 116 112 47 U.S.C. § 223. 47 U.S.C. § 223(a)(1)(D) & (E) (2012). 114 See, e.g., (Arizona) Ariz. Rev. Stat. § 13-2916 (“It is unlawful for any person, with intent to terrify, intimidate, threaten or harass a specific person or persons, to do any of the following: 3. Otherwise disturb by repeated anonymous, unwanted or unsolicited electronic communications the peace, quiet or right of privacy of the person at the place where the communications were received.”); (California) Cal. Pen. Code § 653m(b) (“Every person who, with intent to annoy or harass, makes repeated telephone calls or makes repeated contact by means of an electronic communication device, or makes any combination of calls or contact, to another person is, whether or not conversation ensues from making the telephone call or contact by means of an electronic communication device, guilty of a misdemeanor. Nothing in this subdivision shall apply to telephone calls or electronic contacts made in good faith or during the ordinary course and scope of business.”); (Maryland) Md. Code Ann., Crim. 
Law § 3-804 (“A person may not use telephone facilities or equipment to make: (1) an anonymous call that is reasonably expected to annoy, abuse, torment, harass, or embarrass another; (2) repeated calls with the intent to annoy, abuse, torment, harass, or embarrass another”); (Oklahoma) 21 Okl. St. § 1172 (“It shall be unlawful for a person who, by means of a telecommunication or other electronic communication device, willfully either: 6. In conspiracy or concerted action with other persons, makes repeated calls or electronic communications or simultaneous calls or electronic communications solely to harass any person at the called number(s)”). 115 Barr v. Am. Ass’n of Political Consultants, 140 S. Ct. 2335 (2020) (upholding the Act except for its debt-collection exception). 116 Telephone Consumer Protection Act of 1991, 105 Stat. 2394, 2395, codified at 47 U.S.C. § 227. 113 36 Thus, the cases that struggled over how to fit spam into the list of section 230(c)(2) could simply have analogized spam to harassing or nuisance phone calls. The regulatory meanings, as understood in 1996 and used in the Communications Decency Act itself, constitute the thread that unites the meanings of “obscene, lewd, lascivious, filthy, excessively violent, and harassing.” All deal with issues involving media and communications content regulation intended to create safe, family-friendly environments. Compelling that conclusion is “the presumption of consistent usage—the rule of thumb that a term generally means the same thing each time it is used . . . [particularly for] terms appearing in the same enactment.” 117 To ensure clear and consistent interpretations of the terms used in subsection 230(c)(2), NTIA requests, therefore, that the FCC add the below Subpart E to 47 CFR Chapter I: Subpart E. Clarifying Subsection 230(c)(2). § 130.02 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A and within this regulation, the following shall apply: (a) “obscene,” “lewd,” “lascivious,” and “filthy” The terms “obscene,” “lewd,” “lascivious,” and “filthy” mean material that: i. taken as a whole, appeals to the prurient interest in sex or portrays sexual conduct in a patently offensive way, and which, taken as a whole, does not have serious literary, artistic, political, or scientific value; ii. depicts or describes sexual or excretory organs or activities in terms patently offensive as measured by contemporary community standards; to the average person, applying contemporary community standards; or iii. signifies the form of immorality which has relation to sexual impurity, and have the same meaning as is given them at common law in prosecutions for obscene libel. (b) “excessively violent” The term “excessively violent” means material that: i. is likely to be deemed violent and for mature audiences according to the Federal Communications Commission’s V-chip regulatory regime and TV Parental Guidance, promulgated pursuant to Section 551 of the 1996 117 United States v. Castleman, 572 U.S. 157, 174 (2014), citing IBP, Inc. v. Alvarez, 546 U.S. 21, 33–34 (2005) (Scalia, J., conc.). 37 Telecommunications Act, Pub. L. No. 104-104, § 551, 110 Stat. 139-42 (codified at 47 U.S.C. § 303; § 330(c)(4)); or ii. constitutes or intends to advocate domestic terrorism or international terrorism, each as defined in 18 U.S.C. § 2331 (“terrorism”). (c) “harassing” The term “harassing” means any material that: i.
is sent by an information content provider that has the subjective intent to abuse, threaten, or harass any specific person and is lacking in any serious literary, artistic, political, or scientific value; ii. is regulated by the CAN-SPAM Act of 2003, 117 Stat. 2699; or iii. is malicious computer code intended (whether or not by the immediate disseminator) to damage or interfere with the operation of a computer. (d) “otherwise objectionable” The term “otherwise objectionable” means any material that is similar in type to obscene, lewd, lascivious, filthy, excessively violent, or harassing materials. b. “Good faith” The phrase “good faith” in section 230(c) is also ambiguous. On one hand, most courts, in interpreting the phrase, have looked to pretext, dishonesty, or a refusal to explain the conduct at issue when determining whether a removal of content was taken in good faith. As the United States Court of Appeals for the Ninth Circuit explains, “unless § 230(c)(2)(B) imposes some good faith limitation on what a blocking software provider can consider ‘otherwise objectionable’ . . . immunity might stretch to cover conduct Congress very likely did not intend to immunize.” Under the generous coverage of section 230(c)(2)(B)’s immunity language, a blocking software provider might abuse that immunity to block content for anticompetitive purposes or merely at its malicious whim, under the cover of considering such material “otherwise objectionable.” 118 At the same time, some courts, focusing on the words “the provider or user considers to be 118 Zango, 568 F.3d at 1178 (Fisher, J., concurring). The Ninth Circuit has adopted Judge Fisher’s reasoning. See Enigma, 946 F.3d at 1049. 38 obscene,” view the provision’s immunity as available whenever an interactive computer service simply claims to consider the material as fitting within the provision’s categories. Thus, “good faith” simply means the existence of some “subjective intent.” 119 Good faith requires transparency about content moderation and dispute-resolution processes. In order to qualify for section 230(c)(2)’s immunity, a social media platform, or any interactive computer service, must demonstrate in a transparent way that when it takes action pursuant to section 230(c)(2), it provides “adequate notice, reasoned explanation, or a meaningful opportunity to be heard.” 120 To ensure clear and consistent interpretation of the “good faith” standard, NTIA requests that the FCC further add the below to newly requested 47 CFR Chapter I Subchapter E Section 130.02: (e) “good faith” A platform restricts access to or availability of specific material (including, without limitation, its scope or reach) by itself, any agent, or any unrelated party in “good faith” under 47 U.S.C. § 230(c)(2)(A) if it: i. restricts access to or availability of material or bars or refuses service to any person consistent with publicly available terms of service or use that state plainly and with particularity the criteria the interactive computer service employs in its content-moderation practices, including by any partially or fully automated processes, and that are in effect on the date such content is first posted; ii. has an objectively reasonable belief that the material falls within one of the listed categories set forth in 47 U.S.C. § 230(c)(2)(A); iii.
does not restrict access to or availability of material on deceptive or pretextual grounds, and does not apply its terms of service or use to restrict access to or availability of material that is similarly situated to material that the interactive computer service intentionally declines to restrict; and iv. supplies the information content provider of the material with timely notice describing with particularity the interactive computer service’s reasonable factual basis for the restriction of access and a meaningful opportunity to respond, unless the interactive computer service has an objectively 119 120 Domen, 433 F.Supp. 3d 592. E.O. 13925, Sec. 2(b). 39 reasonable belief that the content is related to criminal activity or such notice would risk imminent physical harm to others. 3. Section 230(c)(1) and 230(f)(3) Section 230(c)(1) places “information content providers,” i.e., entities that create and post content, outside its protections. This means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet does not receive the statute’s shield. Numerous cases have found that an interactive computer service’s designs and policies can render it an information content provider, outside of section 230(c)(1)’s protection. But the point at which a platform’s form and policies are so intertwined with users’ postings as to render the platform an “information content provider” is not clear. Courts have proposed numerous interpretations, most influentially the Ninth Circuit’s decision in Fair Housing Council of San Fernando Valley v. Roommates.Com. 121 There, the court found that “[b]y requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of pre-populated answers, Roommate becomes much more than a passive transmitter of information.” 122 The court continued, “[w]e interpret the term ‘development’ as referring not merely to augmenting the content generally, but to materially contributing to its alleged unlawfulness. In other words, a website helps to develop unlawful content, and thus falls within the exception to section 230, if it contributes materially to the alleged illegality of the conduct.” 123 But this definition has failed to provide clear guidance, with courts struggling to define “material contribution.” 124 121 Fair Hous. Council, 521 F.3d at 1166. Id. 123 Id. at 1167–68 (emphasis added); see also Dirty World Entertainment, 755 F.3d at 411. 124 See, e.g., People v. Bollaert, 248 Cal. App. 4th 699, 717 (2016). 122 40 Further, not all courts accept the material contribution standard. The Seventh Circuit concludes that “[a] company can, however, be liable for creating and posting, inducing another to post, or otherwise actively participating in the posting of a defamatory statement in a forum that that company maintains.” 125 Other circuits conclude that a website becomes an information content provider by “solicit[ing] requests” for the information and then “pa[ying] researchers to obtain it.” 126 This confusion stems from the difference between the way an online bulletin board worked in the 1990s, which simply posted content, and the way social media works today.
As Federal Trade Commissioner Rohit Chopra explained, new social media platforms shape and control information and the online experience, often as an expression of the platforms’ and their advertisers’ goals rather than their users’: “[Section 230] seeks to foster an environment where information and ideas can flourish. If a company is just helping move information from point A to point B, that company is just like the mail carrier or the telegraph company. That makes sense . . . . But the tech market has dramatically shifted in the decades since this law was enacted . . . . I would argue that once platforms started prioritizing their paid predictions, the content became more a reflection of advertisers targeting users, than users’ own preferences.” 127 In light of modern technology, the FCC should clarify the circumstances under which an interactive computer service becomes an information content provider. Interactive computer services that editorialize particular user comments by adding special responses or warnings appear to develop and create content in any normal sense of those words. Analogously, district 125 Huon v. Denton, 841 F.3d 733, 742 (7th Cir. 2016). 126 FTC v. Accusearch Inc., 570 F.3d 1187, 1199–1200 (10th Cir. 2009). 127 Rohit Chopra, Tech Platforms, Content Creators, and Immunity, American Bar Association, Section of Antitrust Law Annual Spring Meeting, Washington, D.C. (Mar. 28, 2019) (transcript available online at https://www.ftc.gov/system/files/documents/public_statements/1510713/chopra_aba spring meeting 3-28-19 0.pdf (last visited June 15, 2020)). 41 courts have concluded that when interactive computer services’ “employees . . . authored comments,” the interactive computer services would become content providers. 128 In addition, prioritization of content through a variety of techniques, particularly when it appears to reflect a particular viewpoint, might render an entire platform a vehicle for expression and thus an information content provider. To clarify when interactive computer services become information content providers by developing and creating content through the presentation of user-provided material, NTIA requests that the FCC add the below Subpart E to 47 CFR Chapter I: Subpart E. Clarifying Subsection 230(f)(3). § 130.03 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A and within this regulation, the following shall apply: For purposes of 47 U.S.C. § 230(f)(3), “responsible, in whole or in part, for the creation or development of information” includes substantively contributing to, modifying, altering, presenting or prioritizing with a reasonably discernible viewpoint, commenting upon, or editorializing about content provided by another information content provider. 4. “Treated as a Publisher or Speaker” Finally, the ambiguous term “treated as a publisher or speaker” presents a fundamental interpretive question that courts in general have not addressed squarely. One of the animating concerns for section 230 was court decisions holding online platforms liable as publishers for third-party speech, when in fact they were merely passive bulletin boards. By prohibiting an interactive computer service from being “treated” as a publisher or speaker, therefore, section 230 could be interpreted as not converting non-publisher platforms into publishers simply because they passively transmit third-party content. That does not, however, mean that the 128 Huon, 841 F.3d at 742.
42 statute meant to immunize online platforms when they actually act as publishers and exert significant control over the third-party speech and the message it conveys. FCC Chairman Pai made a similar point by asking if selective content moderation based on ideology eventually becomes “editorial judgment”: Are these tech giants running impartial digital platforms over which they don’t exercise editorial judgment when it comes to content? Or do they in fact decide what speech is allowed and what is not and discriminate based on ideology and/or political affiliation? 129 If content-moderating can never, no matter how extreme or arbitrary, become editorializing that no longer remains the “speech of another,” then section 230(c)(1) will subsume section 230(c)(2) and eliminate liability for all interactive computer services’ decisions to restrict content. Interpreting “speaker or publisher” so broadly is especially harmful when platforms are opaque and deceptive in their content-monitoring policies. This concern is hardly theoretical, given the highly inconsistent, baffling, and even ideologically driven content moderating decisions that the large interactive computer services have made, at least according to numerous accounts. For instance, one interactive computer service made the editorial decision to exclude legal content pertaining to firearms, 130 content that was deemed acceptable for broadcast television, 131 thereby chilling the speech of a political candidate supportive of gun rights. Another interactive computer service has suppressed the 129 Ajit Pai, What I Hope to Learn from the Tech Giants, FCC Blog (Sept. 4, 2018), https://www.fcc.gov/news-events/blog/2018/09/04/what-i-hope-learn-tech-giants. 130 Facebook, Inc., Facebook Prohibited Content: 7. Weapons, Ammunition, or Explosives, https://www.facebook.com/policies/ads/prohibited content/weapons (last visited June 15, 2020). 131 Maria Schultz, Facebook pulls ad from gun-toting Georgia candidate taking on Antifa: ‘Big Tech censorship of conservatives must end’, Fox News (June 6, 2020), https://www.foxnews.com/politics/facebook-pulls-ad-from-gun-toting-georgia-candidate-bigtech-censorship-of-conservatives-must-end. 43 speech of an American politician for “glorifying violence” 132 while permitting that of a foreign politician glorifying violence to pass without action, 133 as publicly noted by the FCC Chairman. 134 Still another interactive computer service, purporting to be a document repository and editing service, 135 deleted a controversial paper about a potential therapy for COVID-19, 136 stating simply that it was in violation of the site terms of service. 137 A major food-workers’ union has objected to social media-implemented internal communication networks for companies, or “intranets,” implementing automated censorship to prevent discussions of unionization. 138 At common law, as a general matter, one is liable for defamation only if one makes “an affirmative act of publication to a third party.” 139 This “affirmative act requirement” ordinarily 132 Alex Hern, Twitter hides Donald Trump tweet for ‘glorifying violence’, The Guardian (May 29, 2020), https://www.theguardian.com/technology/2020/may/29/twitter-hides-donald-trumptweet-glorifying-violence. 133 White House official Twitter account (May 29, 2020), https://twitter.com/WhiteHouse/status/1266367168603721728. 134 Ajit Pai verified Twitter account (May 29, 2020), https://twitter.com/AjitPaiFCC/status/1266368492258816002. 
135 Google, Inc., Google Docs “About” page, https://www.google.com/docs/about/ (last visited June 15, 2020) (“Google Docs brings your documents to life with smart editing and styling tools to help you easily format text and paragraphs. Choose from hundreds of fonts, add links, images, and drawings. All for free . . . . Access, create, and edit your documents wherever you go — from your phone, tablet, or computer — even when there’s no connection.”). 136 Thomas R. Broker, et al., An Effective Treatment for Coronavirus (COVID-19), (Mar. 13, 2020), page archived at https://archive.is/BvzkY (last visited June 15, 2020). 137 Google, Inc., Google Docs result for https://docs.google.com/document/d/e/2PACX-1vTig18ftNZUMRAj2SwRPodtscFio7bJ7GdNgbJAGbdfF67WuRJB3ZsidgpidB2eocFHAVjIL7deJ7/pub (last visited June 15, 2020) (“We’re sorry. You can’t access this item because it is in violation of our Terms of Service.”). 138 United Food and Commercial Workers International Union, Facebook Censorship of Worker Efforts to Unionize Threatens Push to Strengthen Protections for Essential Workers During COVID-19 Pandemic (June 12, 2020), http://www.ufcw.org/2020/06/12/censorship/. 139 Benjamin C. Zipursky, Online Defamation, Legal Concepts, and the Good Samaritan, 51 Val. U. L. Rev. 1, 18 (2016) , available at https://scholar.valpo.edu/cgi/viewcontent.cgi?article=2426&context=vulr. 44 “depict[s] the defendant as part of the initial making or publishing of a statement.” 140 The common law also recognized a “narrow exception to the rule that there must be an affirmative act of publishing a statement.” 141 A person “while not actually publishing—will be subjected to liability for the reputational injury that is attributable to the defendant’s failure to remove a defamatory statement published by another person.” 142 Such a duty might apply where a defendant has undertaken an affirmative duty to remove. Stratton Oakmont embodies the latter idea: The court held that Prodigy, having undertaken to moderate some content on its page, thereby assumed an affirmative duty to moderate all content on its site. At common law, then, the publication element of defamation could be satisfied either through the rule—an affirmative act—or the exception—an omission where an affirmative duty applies. Section 230(c)(1)’s “treated as the publisher or speaker” could plausibly be understood to foreclose liability only if a defendant would satisfy the exception. Satisfying the exception subjects one to defamation liability as if he were the publisher or speaker of the content, although he did not “actually publish[]” the content. 143 He is not a “true publisher” in the sense of satisfying the affirmative act requirement, but he is deemed or regarded as if he were because he had an affirmative duty to moderate. 144 This interpretation of section 230(c)(1) reads it to foreclose the very argument courts may have been on track to embrace after Stratton Oakmont, viz., that a platform has an affirmative duty to remove defamatory content and will be treated as satisfying the publication element of defamation for nonfeasance in the same way as a true publisher. Section 230(c)(1) states—in the face of Stratton Oakmont’s contrary holding—a 140 Id. at 19. Id. at 20. 142 Id. at 21 (citing Restatement (Second) of Torts § 577(2) (Am. Law Inst. 1977)). 143 Zipursky, 51 Val. L. Rev. at 21. 144 Id. at 45. 141 45 general rule: There is no affirmative duty to remove. 
For that reason, section 230(c)(1) should be construed to concern only failures to remove and not takedowns, and not to apply when a platform “actually publishes” content. NTIA suggests that the FCC can clarify the ambiguous phrase “speaker or publisher” by establishing that section 230(c)(1) does not immunize the conduct of an interactive service provider that is actually acting as a publisher or speaker in the traditional sense. Two points follow. First, when a platform moderates outside of section 230(c)(2)(A), section 230(c)(1) does not provide an additional, broader immunity that shields content takedowns more generally. Such affirmative acts are outside of the scope of (c)(1). Second, when a platform reviews thirdparty content already displayed on the internet and affirmatively vouches for it, editorializes, recommends, or promotes such content on the basis of the content’s substance or message, the platform receives no section 230(c)(1) immunity. NTIA therefore requests that the FCC further add the below to newly requested Subpart E to 47 CFR Chapter I: Subpart E. Clarifying Subsection 230(f)(2). § 130.04 (c) An interactive computer service is not being “treated as the publisher or speaker of any information provided by another information content provider” when it actually publishes its own or third-party content. Circumstances in which an interactive computer service actually publishes content include when: (i) it affirmatively solicits or selects to display information or content either manually by the interactive computer service’s personnel or through use of an algorithm or any similar tool pursuant to a reasonably discernible viewpoint or message, without having been prompted to, asked to, or searched for by the user; and (ii) it reviews third-party content already displayed on the Internet and affirmatively vouches for, editorializes, recommends, or promotes such content to other Internet users on the basis of the content’s substance or messages. This paragraph applies to a review conducted, and a recommendation made, either manually by the interactive computer service’s personnel or through use of an algorithm or any similar tool. 46 (d) An interactive computer service does not publish content merely by: (i) providing content in a form or manner that the user chooses, such as non-chronological order, explicit user preferences, or because a default setting of the service provides it, and the interactive computer service fully informs the user of this default and allows its disabling; or (ii) transmitting, displaying, or otherwise distributing such content, or merely by virtue of moderating third-party content consistent with a good faith application of its terms of service in force at the time content is first posted. Such an interactive computer service may not, by virtue of such conduct, be “treated as a publisher or speaker” of that third-party content. 47 U.S.C. § 230(c)(1). VI. Title I and Sections 163 and 257 of the Act Permit the FCC to Impose Disclosure Requirements on Information Services With roots in the Modified Final Judgment for the break-up of AT&T 145 and codified by the Telecommunications Act of 1996, 146 the term “information service” refers to making information available via telecommunications. Under FCC and judicial precedent, social media sites are “information services.” As such, courts have long recognized the Commission’s power to require disclosure of these services under sections 163 and 257. A. 
Social media are information services Section 230(f)(2) explicitly classifies “interactive computer services” as “information services,” as defined in 47 U.S.C. § 153(20). 147 Further, social media fits the FCC’s definition of 145 United States v. Am. Tel. & Tel. Co., 552 F. Supp. 131, 179 (D.D.C. 1982), aff'd sub nom. Maryland v. United States, 460 U.S. 1001 (1983) (observing that “‘Information services’ are defined in the proposed decree at Section IV(J) as: the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing or making available information which may be conveyed via telecommunications”). 146 47 U.S.C. § 153(24). 147 Id. (“[T]he offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing, but does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service.”). 47 enhanced services. 148 In Brand X, the Supreme Court explained, “The definitions of the terms ‘telecommunications service’ and ‘information service’ established by the 1996 Act are similar to the Computer II basic-and enhanced-service classifications” with “‘information service’—the analog to enhanced service.” 149 Numerous courts have ruled that search engines, browsers and internet social media precursors such as chat rooms are information services. 150 Courts have long recognized edge providers as information services under Title I. For example, in Barnes, the U.S. Court of Appeals for the Ninth Circuit classifies Yahoo’s social networking services an “information service,” interchangeably with “interactive computer service,” and in Howard v. Am. Online, the same court designates America Online’s messaging facilities “enhanced services.” 151 148 47 CFR § 64.702 (“[S]ervices, offered over common carrier transmission facilities used in interstate communications, which employ computer processing applications that act on the format, content, code, protocol or similar aspects of the subscriber’s transmitted information; provide the subscriber additional, different, or restructured information; or involve subscriber interaction with stored information.”). 149 Nat’l Cable & Telecommunications Ass’n v. Brand X Internet Servs., 545 U.S. 967, 977 (2005). 150 Mozilla Corp. v. F.C.C., 940 F.3d 1, 34 (D.C. Cir. 2019) (“But quite apart from the fact that the role of ISP-provided browsers and search engines appears very modest compared to that of DNS and caching in ISPs’ overall provision of Internet access, Petitioners are in a weak posture to deny that inclusion of ‘search engines and web browsers’ could support an ‘information service’ designation . . . since those appear to be examples of the ‘walled garden’ services that Petitioners hold up as models of ‘information service’-eligible offerings in their gloss of Brand X.”) (internal citations omitted); FTC v. Am. eVoice, Ltd., 242 F. Supp. 3d 1119 (D. Mont. 2017) (Email and online “chat rooms” “were enhanced services because they utilized transmission lines to function, as opposed to acting as a pipeline for the transfer of information . . . . ‘This conclusion is reasonable because e-mail fits the definition of an enhanced service.’” (quoting Howard v. Am. Online Inc., 208 F.3d 741, 746 (9th Cir. 2000)). 
“Also excluded from coverage are all information services, such as Internet service providers or services such as Prodigy and America-On-Line.” H.R. Rep. No. 103-827, at 18 (1994), as reprinted in 1994 U.S.C.C.A.N. 3489, 3498 151 Barnes, 570 F.3d at 1101. 48 B. Several statutory sections empower the FCC to mandate disclosure Beyond having jurisdiction over social media as information services, the FCC has clear statutory authority to impose disclosure requirements under sections 163 and 257 of the Communications Act. Section 163 charges the FCC to “consider all forms of competition, including the effect of intermodal competition, facilities-based competition, and competition from new and emergent communications services, including the provision of content and communications using the Internet” and “assess whether laws, regulations, regulatory practices . . . pose a barrier to competitive entry into the communications marketplace or to the competitive expansion of existing providers of communications services.” 152 Section 257(a) of the Communications Act requires the FCC to examine market entry barriers for entrepreneurs and other small businesses in the provision and ownership of telecommunications services and information services.” 153 In its 2018 Internet Order, the Commission relied on section 257 to impose service transparency requirements on providers of the information service of broadband internet access. It reasoned that doing so would reduce entry barriers. 154 Similar reasoning applies to requiring transparency for social media. Clear, current, readily accessible and understandable descriptions of an interactive computer service provider’s content moderation policies would help enterprising content providers fashion their offerings so that they can be provided across multiple 152 47 U.S.C. § 163. 47 U.S.C. § 257(a) (2018). While section 257 was amended and repealed in part, its authority remained intact in section 163. “Congress emphasized that ‘[n]othing in this title [the amendment to the Telecommunications Act creating section 163] or the amendments made by this title shall be construed to expand or contract the authority of the Commission.” Mozilla, 940 F.3d at 47 citing Pub. L. No. 115-141, Div. P, § 403, 132 Stat. at 1090. 154 Federal Communications Commission, In the Matter of Restoring Internet Freedom, 33 F.C.C. Rcd. 311 (2018). 153 49 platforms with reduced costs and friction for the provider and fewer disruptions to user experiences. 155 Perhaps more important, information about an interactive computer service provider’s content moderation policies would help entities design filtering products that could improve the providers’ implementation of those policies, or assist consumers in remedying the gaps they may see in the providers’ policies. Certainly, empowering consumers with blocking technologies that they choose and control—rather than accepting a platform’s top-down centralized decisions, would directly advance section 230’s policy of encouraging “the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services.” 156 Increasing transparency about online platforms’ content moderation practices would also enable users to make more informed choices about competitive alternatives. 
Consumers today have a one-way relationship with social media transparency; platforms know everything about consumers, but consumers know very little about how or why platforms exercise influence or direct control over consumers’ speech. Certain information disappears or becomes difficult to find, while other information is promoted and prominently displayed. Inevitably, some consumers and content creators begin to worry that secretive forces within platform providers are manipulating social media for ends that can only be guessed at. 157 Such suspicion is inevitable when there is so little transparency about the process behind the social media visibility of user-provided content, even when policies are applied fairly and no 155 See supra Section IV. 47 U.S.C. § 230(a)(3). 157 Rod Dreher, Google Blacklists Conservative Websites (July 21, 2020), https://www.theamericanconservative.com/dreher/google-blacklists-conservative-websites/. 156 50 wrongdoing has taken place. By increasing transparency to consumers, platforms would ensure that consumers can choose to consume social media whose policies they agree with, without fear that manipulations to which they did not consent are happening behind the scenes. The importance of disclosure to our communications networks cannot be overstated. Chairman Pai recognizes that democracies must require transparency to ensure the proper functioning of essential communications networks. 158 That is why, when eliminating Title II common carrier so-called “network neutrality” regulations, Chairman Pai’s FCC retained Title I disclosure requirements for broadband access service providers. The same is true for other information service providers. Speaking of social media platforms, FCC Chairman Ajit Pai asked, “how do these companies make decisions about what we see and what we don’t? And who makes those decisions?” 159 For social media, it is particularly important to ensure that large firms avoid “deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints” 160 and do not engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree. 161 158 Federal Communications Commission, In the Matter of Restoring Internet Freedom, WC Docket No. 17-108, Declaratory Ruling, Report And Order, And Order (Jan. 4, 2018) ¶ 209, available at https://www.fcc.gov/document/fcc-releases-restoring-internet-freedom-order (“‘Sunlight,’ Justice Brandeis famously noted, ‘is . . . the best of disinfectants.’ This is the case in our domain. Properly tailored transparency disclosures provide valuable information to the Commission to enable it to meet its statutory obligation to observe the communications marketplace to monitor the introduction of new services and technologies, and to identify and eliminate potential marketplace barriers for the provision of information services. Such disclosures also provide valuable information to other Internet ecosystem participants.”). 159 Ajit Pai, What I Hope to Learn from the Tech Giants (Sept. 4, 2018), https://www.fcc.gov/news-events/blog/2018/09/04/what-i-hope-learn-tech-giants (last visited June 15, 2020). 160 E.O. 13925, Section 2(a). 161 Id. 51 To prevent these harms, NTIA requests that the FCC add the below to 47 CFR Chapter I, Subchapter A, Part 8: § 8.2 Transparency for Interactive Computer Services.
Any person providing an interactive computer service through a mass-market retail offering to the public shall publicly disclose accurate information regarding its content-management mechanisms as well as any other content moderation, promotion, and other curation practices of its interactive computer service sufficient to enable (i) consumers to make informed choices regarding the purchase and use of such service and (ii) entrepreneurs and other small businesses to develop, market, and maintain offerings by means of such service. Such disclosure shall be made via a publicly available, easily accessible website or through transmittal to the Commission.

VII. Conclusion

For the foregoing reasons, NTIA respectfully requests that the Commission institute a rulemaking to interpret Section 230 of the Communications Act.

Respectfully submitted,

____________________
Douglas Kinkoph
Performing the Delegated Duties of the Assistant Secretary of Commerce for Communications and Information

July 27, 2020

APPENDIX A: Proposed Rules

47 CFR Chapter I, Subchapter E, Part 130 – Section 230 of the Communications Decency Act.

Interpreting Subsection 230(c)(1) and Its Interaction With Subsection 230(c)(2).

§ 130.01 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A, and within this regulation, the following shall apply:

(a) 47 U.S.C. 230(c)(1) applies to an interactive computer service for claims arising from failure to remove information provided by another information content provider. Section 230(c)(1) has no application to any interactive computer service’s decision, agreement, or action to restrict access to or availability of material provided by another information content provider or to bar any information content provider from using an interactive computer service. Any applicable immunity for matters described in the immediately preceding sentence shall be provided solely by 47 U.S.C. § 230(c)(2).

(b) An interactive computer service is not a publisher or speaker of information provided by another information content provider solely on account of actions voluntarily taken in good faith to restrict access to or availability of specific material in accordance with subsection (c)(2)(A) or consistent with its terms of service or use.

(c) An interactive computer service is not being “treated as the publisher or speaker of any information provided by another information content provider” when it actually publishes its own or third-party content. Circumstances in which an interactive computer service actually publishes content include when:

(i) it affirmatively solicits or selects information or content to display, either manually by the interactive computer service’s personnel or through use of an algorithm or any similar tool, pursuant to a reasonably discernible viewpoint or message, without having been prompted to, asked to, or searched for by the user;

(ii) it reviews third-party content already displayed on the Internet and affirmatively vouches for, editorializes, recommends, or promotes such content to other Internet users on the basis of the content’s substance. This paragraph applies to a review conducted, and a recommendation made, either manually by the interactive computer service’s personnel or through use of an algorithm or any similar tool.
(d) An interactive computer service does not publish content merely by: (1) providing content in a form or manner that the user chooses, such as non-chronological order, explicit user preferences, or because a default setting of the service provides it, and the interactive computer service fully informs the user of this default and allows its disabling; or (2) transmitting, displaying, or otherwise distributing such content, or merely by virtue of moderating third-party content consistent with a good faith application of its terms of service in force at the time content is first posted. Such an interactive computer service may not, by virtue of such conduct, be “treated as a publisher or speaker” of that third-party content. 47 U.S.C. § 230(c)(1).

Clarifying Subsection 230(c)(2).

§ 130.02 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A, and within this regulation, the following shall apply:

(a) “obscene,” “lewd,” “lascivious,” and “filthy”

The terms “obscene,” “lewd,” “lascivious,” and “filthy” mean material that

i. taken as a whole, appeals to the prurient interest in sex or portrays sexual conduct in a patently offensive way, and which, taken as a whole, does not have serious literary, artistic, political, or scientific value;

ii. depicts or describes sexual or excretory organs or activities in terms patently offensive, as measured by contemporary community standards, to the average person applying contemporary community standards; or

iii. signifies that form of immorality which has relation to sexual impurity, and has the same meaning as is given these terms at common law in prosecutions for obscene libel.

(b) “excessively violent”

The term “excessively violent” means material that

i. is likely to be deemed violent and for mature audiences according to the Federal Communications Commission’s V-chip regulatory regime and TV Parental Guidelines, promulgated pursuant to Section 551 of the 1996 Telecommunications Act, Pub. L. No. 104-104, § 551, 110 Stat. 139-42 (codified at 47 U.S.C. §§ 303, 330(c)(4)); or

ii. constitutes or intends to advocate domestic terrorism or international terrorism, each as defined in 18 U.S.C. § 2331 (“terrorism”).

(c) “harassing”

The term “harassing” means any material that

i. is sent by an information content provider that has the subjective intent to abuse, threaten, or harass any specific person and is lacking in any serious literary, artistic, political, or scientific value;

ii. is regulated by the CAN-SPAM Act of 2003, 117 Stat. 2699; or

iii. is malicious computer code intended (whether or not by the immediate disseminator) to damage or interfere with the operation of a computer.

(d) “otherwise objectionable”

The term “otherwise objectionable” means any material that is similar in type to obscene, lewd, lascivious, filthy, excessively violent, or harassing materials.

(e) “good faith”

A platform restricts access to or availability of specific material (including, without limitation, its scope or reach) by itself, any agent, or any unrelated party in “good faith” under 47 U.S.C. § 230(c)(2)(A) if it:

i. restricts access to or availability of material or bars or refuses service to any person consistent with publicly available terms of service or use that state plainly and with particularity the criteria the interactive computer service employs in its content-moderation practices, including by any partially or fully automated processes, and that are in effect on the date such content is first posted;
ii. has an objectively reasonable belief that the material falls within one of the listed categories set forth in 47 U.S.C. § 230(c)(2)(A);

iii. does not restrict access to or availability of material on deceptive or pretextual grounds, and does not apply its terms of service or use to restrict access to or availability of material that is similarly situated to material that the interactive computer service intentionally declines to restrict; and

iv. supplies the information content provider of the material with timely notice describing with particularity the interactive computer service’s reasonable factual basis for the restriction of access and a meaningful opportunity to respond, unless the interactive computer service has an objectively reasonable belief that the content is related to criminal activity or such notice would risk imminent physical harm to others.

Clarifying Subsection 230(f)(3).

§ 130.03 As used within 47 U.S.C. 230, 47 CFR Chapter I, Subchapter A, and within this regulation, the following shall apply:

For purposes of 47 U.S.C. § 230(f)(3), “responsible, in whole or in part, for the creation or development of information” includes substantively contributing to, modifying, altering, presenting with a reasonably discernible viewpoint, commenting upon, or editorializing about content provided by another information content provider.

47 CFR Chapter I, Subchapter A, Part 8 – Internet Freedom.

§ 8.2 Transparency for Interactive Computer Services.

Any person providing an interactive computer service through a mass-market retail offering to the public shall publicly disclose accurate information regarding its content-management mechanisms as well as any other content moderation, promotion, and other curation practices of its interactive computer service sufficient to enable (i) consumers to make informed choices regarding the purchase and use of such service and (ii) entrepreneurs and other small businesses to develop, market, and maintain offerings by means of such service. Such disclosure shall be made via a publicly available, easily accessible website or through transmittal to the Commission.