Code of Conduct on countering illegal hate speech online: One year after

Fact sheet, June 2017
Věra Jourová, Commissioner for Justice, Consumers and Gender Equality
Directorate-General for Justice and Consumers

One year after its adoption, the Code of Conduct on countering illegal hate speech online (1) has delivered significant progress. The results of the second monitoring exercise, which involved a larger sample of organisations located in 24 EU countries, show that the social media platforms participating in the Code of Conduct have improved both the efficiency and the speed with which they assess notifications. The IT companies have strengthened their reporting systems to make it easier for users to report illegal hate speech. They have also trained their staff and increased their cooperation with civil society.

By implementing the Code of Conduct, the IT companies have strengthened and widened their networks of trusted flaggers throughout Europe. This is important, since the monitoring process has shown that cooperation between IT companies and civil society organisations leads to higher-quality notifications, more efficient handling and better responses to notifications. The results also show some improvement in the coherence of treatment between notifications coming from trusted flaggers/reporters and those coming from general users.

In terms of transparency and the feedback provided to users in response to notifications, the monitoring exercise reveals that there is still scope for progress. In this context, the mid-term review of the Digital Single Market Strategy confirmed the need to continue working towards minimum procedural requirements for the ‘notice and action’ procedures of online intermediaries. These would include quality criteria for notices, counter-notice procedures, reporting obligations, third-party consultation mechanisms and dispute resolution systems.

Ensuring efficient cooperation between the IT companies and national authorities is another important objective of the Code of Conduct. The establishment of national contact points in the Member States is progressing. In this context, the Commission will continue to support the monitoring of the implementation of the Code of Conduct by civil society organisations.

(1) http://ec.europa.eu/justice/fundamental-rights/files/hate_speech_code_of_conduct_en.pdf

Results of the 2nd monitoring exercise of the implementation of the Code of Conduct

1. Notifications of illegal hate speech

> In the second monitoring exercise, 2 575 notifications were submitted to the IT companies taking part in the Code of Conduct. This represents a fourfold increase compared to the first monitoring exercise in December 2016.
> The geographical coverage of the exercise increased substantially: 31 civil society organisations and 3 national authorities, located in 24 EU countries, sent notifications relating to hate speech deemed illegal to the IT companies over a period of 7 weeks (20 March to 5 May 2017). In order to establish trends, this exercise used the same methodology as the first monitoring exercise (see Annex).
> Out of the total number of notifications, 1 830 cases were submitted through the reporting channels available to general users, while 745 cases were submitted through specific channels available only to trusted flaggers/reporters.
> Facebook received the largest number of notifications (1 273 cases), followed by YouTube (658 cases) and Twitter (644 cases). Microsoft did not receive any notifications. The channel and company shares implied by these counts are illustrated in the short calculation after this list.
> In addition to flagging the content to the IT companies, the organisations taking part in the monitoring exercise submitted 212 of the cases to the police, public prosecutors’ offices or other national authorities.
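For illustration only (this is not part of the fact sheet or its methodology), the shares implied by the counts above can be recomputed with a few lines of Python. The counts are those quoted in this section; the resulting 71.1 % / 28.9 % channel split reappears in section 4, while the per-company shares are derived here purely by way of example.

```python
# Illustrative sketch only: shares implied by the notification counts reported
# for the second monitoring exercise (20 March to 5 May 2017).
total = 2575  # total notifications submitted to the IT companies

by_channel = {"general users": 1830, "trusted flaggers/reporters": 745}
by_company = {"Facebook": 1273, "YouTube": 658, "Twitter": 644}

for label, count in list(by_channel.items()) + list(by_company.items()):
    print(f"{label}: {count} cases ({count / total:.1%})")

# General users account for roughly 71.1 % of notifications and trusted
# flaggers/reporters for 28.9 %, the split referred to in section 4.
```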
2. Removal rates

Overall, 1 522 of the notifications (59.1 %) led to the removal of the notified content, while in 1 053 cases (40.9 %) the content remained online. Facebook removed the content in 66.5 % of cases, Twitter in 37.4 % and YouTube in 66.0 % of cases. This represents a substantial improvement for all three companies compared to the results presented in December 2016, when the overall removal rate was 28.2 %.

Removals per IT company (in %)
            1st monitoring (Dec. 2016)   2nd monitoring (May 2017)
Facebook    28.3 %                       66.5 %
Twitter     19.1 %                       37.5 %
YouTube     48.5 %                       66.0 %

Rate of removals per EU country (in %) (2)
[Chart: removal rates per EU country for the 1st monitoring (Dec. 2016) and the 2nd monitoring (May 2017); in the 2nd monitoring, country-level removal rates ranged from 17.2 % to 94.5 %.]
(2) The figures do not reflect the overall scale of illegal hate speech online in a given country; they are based on the number of notifications sent by each individual organisation.

3. Time of assessment of notifications

> In 51.4 % of cases the IT companies assessed notifications in less than 24 hours, in 20.7 % of cases in less than 48 hours, in 14.7 % in less than a week, and in 13.2 % of cases it took more than a week.
> Facebook assessed the notifications in less than 24 hours in 57.9 % of cases and in less than 48 hours in 24.9 % of cases. The corresponding figures for YouTube are 42.6 % and 14.3 %, and for Twitter 39 % and 13.7 %, respectively. There is a positive overall trend in the time of assessment compared to the first monitoring exercise in December 2016, when 40 % of all responses were received within 24 hours and another 43 % arrived only after more than 48 hours.

4. Coherence of treatment of notifications irrespective of the reporting channel

> Out of the total number of notifications, 71.1 % of cases were submitted through the channels available to general users, while 28.9 % of cases were notified through the channels available only to trusted flaggers/reporters.

Number of notifications received through the different reporting channels (number of cases)
            General user   Trusted flagger / reporter
Facebook    922            351
Twitter     419            225
YouTube     489            169

> Breaking down the removals by reporting channel, 56.5 % of notifications made through the channels available to general users led to the removal of the notified content, while a higher removal rate of 65.6 % was recorded for notifications made through the trusted flaggers/reporters channel.
> Compared to the first monitoring exercise, the removal rates for the two reporting channels are converging, narrowing the difference in treatment depending on the source of the notification (trusted flagger or general user).
> Some differences in treatment still persist, as the table below shows.

Removal rates according to reporting channel (in %)
            General user                   Trusted flagger / reporter
            (Dec. 2016)    (May 2017)      (Dec. 2016)    (May 2017)
Facebook    28.0 %         64.2 %          29.0 %         72.6 %
Twitter     5.0 %          31.5 %          33.0 %         48.5 %
YouTube     29.0 %         63.2 %          68.0 %         74.0 %
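As an illustrative consistency check (not part of the monitoring methodology), the overall removal rates reported in section 2 can be recovered from the two tables above: each company's overall rate is the average of its two channel rates, weighted by the number of notifications received through each channel. A minimal Python sketch of that calculation, using only figures quoted in this fact sheet:

```python
# Illustrative sketch: recompute each company's overall removal rate (May 2017)
# as the weighted average of its per-channel removal rates, using the
# notification counts and per-channel rates from the two tables above.
notifications = {  # cases received per reporting channel
    "Facebook": {"general": 922, "trusted": 351},
    "Twitter":  {"general": 419, "trusted": 225},
    "YouTube":  {"general": 489, "trusted": 169},
}
removal_rates = {  # removal rate per channel, in % (May 2017)
    "Facebook": {"general": 64.2, "trusted": 72.6},
    "Twitter":  {"general": 31.5, "trusted": 48.5},
    "YouTube":  {"general": 63.2, "trusted": 74.0},
}

for company, counts in notifications.items():
    rates = removal_rates[company]
    total = sum(counts.values())
    removed = sum(counts[ch] * rates[ch] / 100 for ch in counts)
    print(f"{company}: {removed / total:.1%} overall removal rate")

# Prints approximately 66.5 % for Facebook, 37.4 % for Twitter and 66.0 % for
# YouTube, matching the overall rates reported in section 2.
```

The same weighting explains why the aggregate removal rate of 59.1 % lies between the general-user rate (56.5 %) and the trusted-flagger rate (65.6 %).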
5. Feedback to users and transparency

> The data show a large disparity between the IT companies in the feedback they give on notifications. While Facebook sent feedback in 93.7 % of cases, Twitter did so in only 32.8 % of cases and YouTube in 20.7 % of cases.
> Twitter and YouTube provided more feedback when the reporting came from trusted flaggers: Twitter provided feedback on 68.9 % of the notifications made through the trusted flaggers’ channel, but on only 13.4 % of those made by general users. For YouTube the corresponding figures were 35.5 % and 15.6 %, respectively.

Feedback provided to different types of user (in %)
            General user   Trusted flagger / reporter
Facebook    93.0 %         95.7 %
Twitter     13.4 %         68.9 %
YouTube     15.6 %         35.5 %

6. Grounds for reporting hatred

> Xenophobia (17.8 %), which includes anti-migrant hatred, and anti-Muslim hatred (17.7 %) were the most frequently reported grounds of hate speech, followed by ethnic origin (15.8 %).
> The results confirm the predominance of hatred against migrants and refugees.

Notifications per ground of hate speech (in %)
Xenophobia (including anti-migrant hatred)   17.8 %
Anti-Muslim hatred                           17.7 %
Ethnic origin                                15.8 %
Sexual orientation                           12.7 %
National origin                               9.1 %
Antisemitism                                  8.7 %
Race                                          8.7 %
Religion                                      4.5 %
Gender                                        2.8 %
Other                                         2.2 %

ANNEX
Methodology of the exercise

• The second exercise was carried out over a period of 7 weeks, from 20 March to 5 May 2017, using the same methodology as the first monitoring exercise.
• 31 organisations and 3 public bodies (from France, Romania and Spain) reported a total sample of 2 575 notifications, covering all the Member States except Finland, Sweden, Bulgaria and Luxembourg. An additional 25 cases were reported to other social platforms.
• The organisations notified the IT companies only about content deemed to be “illegal hate speech” under national laws transposing the EU Council Framework Decision 2008/913/JHA (3) on combating certain forms and expressions of racism and xenophobia by means of criminal law.
• Notifications were submitted either through the reporting channels available to all users or via dedicated channels accessible only to trusted flaggers/reporters.
• Organisations with trusted reporter/flagger status often used the dedicated channels to re-report content which they had previously notified anonymously (through the channels available to all users), in order to check whether the outcomes diverged. Typically, this happened when the IT companies had not sent feedback on a first notification and the content had remained online.
• The organisations participating in the second monitoring exercise are the following:
BELGIUM: CEJI - A Jewish contribution to an inclusive Europe (11 cases); Centre interfédéral pour l’égalité des chances (UNIA) (41 cases)
CZECH REPUBLIC: In Iustitia (99 cases)
DENMARK: Anmeldhad.dk / Reporthate.dk (72 cases)
GERMANY: jugendschutz.net (112 cases); Freiwillige Selbstkontrolle Multimedia-Diensteanbieter e.V. (FSM) (53 cases)
ESTONIA: Estonian Human Rights Centre (99 cases)
IRELAND: ENAR Ireland (35 cases)
GREECE: SafeLine / Forth (7 cases)
SPAIN: Movimiento contra la intolerancia (MCI) (59 cases); Fundación Secretariado Gitano (51 cases); Federación Estatal de Lesbianas, Gais, Transexuales y Bisexuales (FELGTB) (39 cases); Spanish Observatory on Racism and Xenophobia (OBERAXE) (110 cases)
FRANCE: Ligue Internationale Contre le Racisme et l’Antisémitisme (LICRA) (115 cases); Plateforme PHAROS (31 cases)
CROATIA: Centre for Peace Studies (128 cases)
ITALY: Ufficio Nazionale Antidiscriminazioni Razziali (UNAR) (197 cases)
CYPRUS: Aequitas (92 cases)
LATVIA: Mozaika (124 cases); Latvian Centre for Human Rights (161 cases)
LITHUANIA: National LGBT Rights Organisation (LGL) (109 cases)
HUNGARY: Háttér Society (103 cases)
MALTA: Malta LGBTIQ Rights Movement (MGRM) (15 cases)
NETHERLANDS: Meldpunt Internet Discriminatie (MiND) (6 cases); Magenta Foundation (55 cases)
AUSTRIA: Zivilcourage und Anti-Rassismus-Arbeit (ZARA) (142 cases)
POLAND: HejtStop / Projekt: Polska (121 cases)
PORTUGAL: Associação ILGA Portugal (100 cases)
ROMANIA: Active Watch (93 cases); Romanian Police (46 cases)
SLOVENIA: Spletno oko (81 cases)
SLOVAKIA: eSlovensko (14 cases)
UNITED KINGDOM: Community Security Trust (CST) (69 cases); Tell Mama / Faith Matters (10 cases)

(3) http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2008:328:0055:0058:en:PDF