Creating a French framework to make social media platforms more accountable: Acting in France with a European vision

Interim mission report "Regulation of social networks – Facebook experiment"
Submitted to the French Secretary of State for Digital Affairs
May 2019

Executive Summary

Social networks allow any member of society to publish any content they wish and share it with other users of the network. They are thereby revolutionising the media industry and communications by offering individuals and civil society a direct means of expression. It is no longer necessary to use conventional media to communicate publicly. Using social networks therefore considerably increases individuals' ability to exercise their freedom of expression, communicate and obtain information.

Nevertheless, the opportunities offered by social networking services can lead to unacceptable abuses of those same freedoms. These abuses are being committed by isolated individuals or organised groups to which the leading social networks – including Facebook, YouTube, Twitter and Snapchat, to cite just the largest – are not providing an adequate response. Yet through their ordering of published content and their moderation policies, social networks have the ability to take direct action against the worst abuses, to prevent or respond to them, and thereby to limit the damage to social cohesion.

Public intervention to force the biggest players to adopt a more responsible and protective attitude towards our social cohesion therefore appears legitimate. Given the civil liberty issues at stake, this intervention should be subject to particular precautions. It must (1) respect the particularly diverse range of social network models, (2) impose a principle of transparency and the systematic inclusion of civil society, (3) aim for a minimum level of intervention in accordance with the principles of necessity and proportionality, and (4) refer to the courts the characterisation of the lawfulness of individual content.

The current approach of self-regulation of social networks is interesting, as it demonstrates that platforms may be part of the solution to the problems observed. They have come up with varied and agile responses, e.g. removal, reduced exposure, reminders of common rules, education and victim support. But self-regulation is still evolving, remains too reactive (acting after the harm has appeared), and lacks credibility due to the extreme asymmetry of information, which gives rise to a sense of "story-telling" and nourishes suspicion about the reality of the platforms' actions.

The public policy response must find a balance between a punitive approach, which is vital for sending a strong political signal to the perpetrators of abuses, and an approach that makes social networks increasingly accountable through preventive regulation, capitalising on the platforms' capacity for self-regulation.

Given the unique and ubiquitous nature of social networks, which transcend the borders of Member States and offer a single service across different territories, this ex-ante regulation must be adopted and implemented at European level. The current "installation country" approach – according to which only the country in which the social network's headquarters is based can intervene to regulate that network – has proven inefficient. The damage caused by the excesses and abuses of social networks to social cohesion in destination Member States is difficult to observe from the installation Member State.
Any French initiative should therefore aim to reverse the current European approach and focus instead on the destination country, making the platform answerable to the Member State where the damage has occurred. This would strengthen each Member State's ability to address the consequences of globalisation. This objective must be taken into account when designing a regulatory function for social networks, so that the solution appears relevant to our main European partners, even though policies on regulating media industries differ significantly from one state to another.

The implementation of an ex-ante regulatory function should respect three conditions: (1) adopt a compliance approach, according to which the regulator supervises the correct implementation of preventive or corrective measures, but does not focus on the materialisation of risks or try to regulate the service provided; (2) concentrate on the systemic actors capable of creating significant damage to our societies, without creating entry barriers for new European operators; (3) stay agile in order to confront future challenges in a rapidly evolving digital environment. Legislative measures should therefore aim to create an institutional capacity to regulate, rather than a regulation specifically tailored to current problems.

That regulation could be based on the following five pillars:

First pillar: A public regulatory policy guaranteeing individual freedoms and platforms' entrepreneurial freedom.

Second pillar: A prescriptive regulation focusing on the accountability of social networks, implemented by an independent administrative authority and based on three obligations for the platforms:
- an obligation of transparency of the function of ordering content;
- an obligation of transparency of the function which implements the Terms of Service and the moderation of content;
- an obligation to defend the integrity of their users.

Third pillar: Informed political dialogue between the operators, the government, the legislature and civil society.

Fourth pillar: An independent administrative authority, acting in partnership with other branches of the state, and open to civil society.

Fifth pillar: European cooperation reinforcing Member States' capacity to act against global platforms and reducing the risks related to implementation in each Member State.

Foreword

The mission's objective was to explore a general framework for the regulation of social networks, starting from the fight against online hatred and relying on the voluntary cooperation, outside any legal framework, of Facebook [1] (see the appended mission letter). The purpose of this experiment is to explore how a new system to regulate social networks could be established to complement existing instruments and better achieve public policy objectives in terms of reconciling public freedoms and safeguarding public order on social networks. Although the exchanges with Facebook thus focused on hate content, the mission's conclusions may be applied to all the issues raised by the publication of content on social networks.
This interministerial mission team [2] comprises seven high-level experts and three permanent reporters from a range of ministries – Culture, Interior, Justice, Economy, Prime Ministerial services – DILCRAH [3] (Interministerial Delegation to Combat Racism, Antisemitism and Anti-LGBT Hate), DINSIC [4] (Interministerial Delegation of Digital and Information and Communication Systems), and independent administrative authorities – ARCEP [5] (electronic communications and postal authority) and CSA [6] (audiovisual regulatory authority).

The mission worked with Facebook throughout January and February. Over the course of several working days with the mission in Paris, Dublin (location of its European headquarters) and Barcelona (location of one of its moderation centres), Facebook's representatives presented its policy for moderating hateful content, its organisation and the resources it devotes to this, as well as its internal procedures. Meetings were held to examine specific topics in depth, including the use of algorithms in the moderation system to detect hateful content and the basic principles of the algorithms that order content for Facebook users.

Although the mission received a very open welcome from Facebook, it did not have access to particularly detailed, let alone truly confidential, information. This was due to the speed of the work, the lack of a formal legal framework and the limits of Facebook's transparency policy. The mission is nevertheless convinced that this limitation did not affect its results, as its goal was not to evaluate the relevance of Facebook's mechanisms, but to imagine "rules of the game" [7] which could be adopted by the legislator to create a long-term regulatory framework for global actors operating abroad, such as Facebook. In this respect, the report does not detail the mechanisms Facebook uses to moderate and fight the dissemination of hateful content online. The reader can nevertheless refer to the documents published by Facebook to better understand how moderation works on the social network, in particular the community standards, the report on the transparency of content management on its platform, and the "hard questions" blog on which Facebook regularly publishes reflections on the subject of moderation (in English). The mission also presented its approach to associations fighting hate speech, at a seminar organised by CNNum (National Digital Council) on 14 and 15 February 2019.

[1] Following an agreement between the company's Chairman, Mark Zuckerberg, and the President of the Republic announced at the Internet Governance Forum in November 2018.
[2] The composition of the mission is appended to this report.
[3] The Délégation interministérielle à la lutte contre le racisme, l'antisémitisme et la haine anti-LGBT is France's interministerial delegation for the fight against racism, antisemitism and anti-LGBT hatred.
[4] The Direction interministérielle du numérique et du système d'information et de communication de l'État is the Interministerial Directorate for Digital Technology and the Government Information and Communications System.
[5] The Autorité de régulation des communications électroniques et des postes regulates France's telecommunications.
[6] The Conseil supérieur de l'audiovisuel regulates France's electronic media.
[7] The mission is convinced in this respect that the regulator's first power should be the right to demand the communication of any information necessary for the accomplishment of its mission, within a legally enforceable framework.
The mission finally completed its work with a study trip to Berlin to better understand the experience of the German NetzDG law, a mission in London, and a series of meetings with public operators (Inria, the Pharos platform, the Centre de lutte contre les criminalités numériques, the Secrétariat général aux affaires européennes, the Conseil national du numérique, the Direction générale des entreprises and the Direction générale du Trésor) and with private entities and NGOs (Reporters sans frontières, CERRE, La Quadrature du Net, Webedia, Netino, Snap, Google and Twitter).

This report formulates proposals which, if adopted, will need to be fleshed out. It (1) identifies some key features of social networking services, (2) analyses some public policy approaches to those services, (3) recommends creating a new regulatory system based on five pillars and (4) presents a focus on the concept of the transparency of algorithms and its implementation.

Due to scheduling constraints, several topics were not examined. In particular, the mission did not conduct a study of the competitive impact of the proposed regulatory scheme on other social network service offerings. However, the regulatory system should be careful not to create an insurmountable entry barrier for mid-sized market players or new entrants and, consequently, not to unduly favour the consolidation of the hegemonic actors by enacting regulatory barriers. Furthermore, the mission focused its study on public content, but it is clear that hateful content is also present in private or closed groups on social networks, and that there is currently a trend towards increasing dissemination of content within these limited groups and on messaging services. It is more complex to intervene in these environments, where exchanges can be covered by the secrecy of private correspondence and, above all, encrypted end to end, which makes any moderation by the platform itself illusory since it has no visibility of the content exchanged between users. Finally, the report does not deal with intervention methods for "non-cooperative" social networks which do not correspond to a traditional economic rationale, whether they are militant extremists (4chan, 8chan, etc.) or controlled directly or indirectly by a sovereign state pursuing political objectives.

I – Social networking services

By enabling everyone to publish content and share it with other users, social networks are revolutionising the media industry and communications by offering individuals and civil society a direct means of expression. Nevertheless, the possibilities offered by social networking services give rise to unacceptable abuses by isolated individuals or organised groups, to which the operators are not providing a sufficient response or to which they even contribute via their content ordering systems.

1.1 Although the purpose of all social networking services is to share and disseminate content to the public online, they are nevertheless very diverse

Social networking services are defined by the ability to disseminate content produced by their users to all or some of the other users on that network. They are provided by different types of operators, differentiated by their legal status, their type, their target, their economic model, the type of content published [8] and their distribution methods. These fall into two categories:

- Social networking services offered on an ancillary basis: Thematic or general discussion forums on websites (e.g.
jeuxvideo.com and comment spaces on media websites such as lemonde.fr or lefigaro.fr) constitute a basic form of social network: content organisation is rudimentary (mainly chronological) and, when monetised by advertising, the advertising is usually not combined with user content but placed adjacent to it;

- Social networking services offered as the main focus: Social networking platforms, which may be general, like Facebook or Twitter, or structured around a particular content type or format, like YouTube (videos), Pinterest (photos), TikTok (short videos) or Snapchat (short videos and photos).

All of these services offer some or all of the following content and features: user-generated content (UGC) as the main content, promotional content [9], content from professional publishers, content accessible to everyone and/or content restricted to a select group of users, individual accounts with or without a screen name, and discussion areas attached to a community or an event. The content ordering system on a social networking service may be personalised (i.e. specific to each user) and be more or less sophisticated depending on the volume of published content.

Platforms increasingly offer a private messaging service alongside their social networking services: Direct Messages for Twitter, Messages for YouTube, Messenger (and WhatsApp) for Facebook. In some cases, as with Snapchat for example, dissemination of content to a closed group may be the default option, but the user may make the content available to the entire network at any time.

Monetisation methods vary widely from one service to the next, from advertisements independent of the user and of the shared content, to advertisements targeted according to the user's favourite content or according to the user viewing the content. Revenues may be shared with the content publisher, as is the case for YouTube and Snapchat.

Beyond the distinction by type of operator and functionality, the size of the social network is a central criterion to take into account: from 2 billion users for Facebook, with a worldwide presence, to a few thousand users on some discussion forums. Finally, models and uses are not static and in fact evolve very quickly. Services generate new uses while, conversely, user behaviour constantly inspires operators to adapt their services.

An attempt at a legal definition of a social networking service was first introduced in the bill proposed by French MP Laetitia Avia, designed to combat online hatred. Within the definition of operators of online platforms set out in Article L.111-7 of the French Consumer Code, it isolates those "offering an online communication service to the public based on connecting several parties in order to share public content". [10] At European level, the Audiovisual Media Services Directive incidentally introduces the concept of "social media services". Its Article 1 defines "audiovisual media services" and it states, in recitals (4) and (5), that social audiovisual media services are those whose content is created by users. [11]

A social network may be defined as an online service allowing its users to publish content of their choice and thereby make it accessible to all or some of the other users of that service.

[8] The content may be a text, a hyperlink, an image, a sound, a video (sometimes in real time), a computer programme and/or any combination of these six elements.
[9] This may be traditional advertising, a product placement, sponsored content, etc.
[10] It echoes the definition adopted by the German legislator: the NetzDG, adopted in 2017, defines social networks as for-profit internet platforms intended to allow users to share any content with other users or to make that content accessible to the public. Platforms managing editorial or journalistic content are excluded. French Law no. 2018-1202 of 22 December 2018, relating to combating the manipulation of information, targets all operators of online platforms within the meaning of the French Consumer Code.
[11] When video sharing is an "essential feature", those services fall into the category of "video sharing platforms", distinct from that of "audiovisual media services" and subject to simplified regulations mainly intended to protect young users and tackle the dissemination of hate content.
1.2 Social networks are revolutionising the media industry and communications by offering individuals and civil society a direct means of expression. In that sense, they represent a great step forward for freedom of expression

With the capacity to host and distribute mass content at a very low marginal cost, social networks are a new form of media enabling direct expression, without pre-selection of authors or content, or any journalistic intermediation. A social network allows the exchange of content that it has neither created nor pre-selected, subject to compliance with the rules of publication issued by the social network (see below). This absence of creation or selection, which distinguishes social networks from traditional news media, allows everyone to express themselves, to publicise and disseminate their opinions or content of their choice, and to access new sources of information. The ability of an individual, an association or a private or public operator to express itself publicly is no longer dependent on the editorial choices of traditional media.

These networks are creating new forms of social relations, transcending geographical limitations (and even linguistic limitations, as a result of translation tools) and subverting both historical social structures and the primacy of the territorial organisation of states and our societies. New "digital" associations, intangible yet very real communities, have sprung up to share information or areas of interest or to unite around a common cause.

The opportunities offered by these new communication vehicles are reflected in user behaviour. Social networks are now vital tools for accessing and disseminating information. One-third of French people and half of 18 to 24-year-olds obtain their information from social networks, while video-sharing platforms account for half of all news videos watched on the internet. [12]

[12] Reuters Institute, Digital News Report 2018.

1.3 Social networking services define content ordering and therefore exert a form of de facto rather than legal editorialization, which is generally unobservable and non-transparent

The content published on a social network cannot all be presented to users without ordering. The volume of content published necessarily requires the platform to define a display order and to make a selection, without prejudice to the user's ability to search for specific content if they so choose. The content which the user will actually view will depend firstly on the layout of his or her interface and on the use of algorithms to prioritise and personalise the presentation of the various content.
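Purely to illustrate the kind of personalised, weighted ordering described here, the sketch below scores posts against a handful of criteria of the type the report cites (supposed interest of the content, relationship to the author, paid promotion, recency). The data structure, criteria and weights are invented for the example; they do not describe the formula of Facebook or any other platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_interest: float  # 0..1 output of an interest model (assumed to exist)
    author_followed: bool      # does the viewer follow the author?
    is_promoted: bool          # paid placement
    age_hours: float           # time since publication

def score(post: Post) -> float:
    """Toy ranking score; the weights are arbitrary and purely illustrative."""
    s = 2.0 * post.predicted_interest          # supposed interest of the content
    s += 1.0 if post.author_followed else 0.0  # relationship to the author
    s += 1.5 if post.is_promoted else 0.0      # paid promotion boost
    s -= 0.05 * post.age_hours                 # older content decays
    return s

feed = [
    Post("holiday photos", 0.80, True, False, 2.0),
    Post("sponsored ad", 0.40, False, True, 1.0),
    Post("old news article", 0.20, True, False, 30.0),
]
for post in sorted(feed, key=score, reverse=True):
    print(f"{score(post):5.2f}  {post.text}")
```

Real systems combine far more signals, which are weighted differently for each user and revised continuously; this is precisely why the overall effects of ordering are not observable from the outside.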
Unlike traditional media, the ordering of content on social network services is usually personalised (except in forums), and each user sees the result of that personalisation when accessing the service. However, the overall effects of this ordering on all users are not observable. Furthermore, the operators providing social network services do not always reveal the precise criteria used to define the presentation of content. These ordering criteria may be numerous, and their weighting varies according to the purpose of the service (supposed interest of the content, identity of the author, whether the content is paid-for, the user's preferences and behaviour, etc.). More generally, the ordering function gives operators of social network services the capacity to accelerate or, on the contrary, slow down the dissemination of certain content.

The existence of this function of ordering content, constituting a form of de facto editorialization, cannot call into question the legal status of the operators or lead to the legal requalification of hosting providers as publishers, since the majority of social network services do not carry out any selection prior to the publication of content. This information organisation system nevertheless plays a key role in the dissemination of information and in social networks' ability to prevent or increase damage to social cohesion.

1.4 The freedoms of communication and public expression offered by social network operators lead to unacceptable abuses by isolated individuals and organised groups to which the social networks are not providing an adequate response

Whether paid-for or free, the majority of content published on social networks does not pose any difficulty. [13] As a result of this capacity for large-scale communication and expression, however, combined with a feeling of relative anonymity and impunity, social networks are also forums for the exchange of unacceptable content and behaviour (content inciting hatred, terrorist content, child pornography, online harassment and identity theft) which can have a significant impact on social cohesion and harmony (the spreading of false information and unfounded rumours, attempts to fraudulently manipulate public opinion by individuals or groups with political or financial objectives).

Most operators have implemented terms of use which indicate the categories of content accepted on the service, as well as moderation mechanisms for when those rules are not respected by users. Given the volume of content published and the statistical approach taken by algorithmic tools, however, social network operators are currently unable to prevent all risk of their services being abused. In fact, the efforts deployed are still largely perfectible, especially by those with a large audience. In addition, little information is made public on how terms of use are defined and implemented or how the moderation system works. The fight against the dissemination, via social networking services, of content harmful to users and to social cohesion therefore involves looking at how the rules are defined, at the moderation of content already posted and, potentially, at the ordering system, particularly where it involves the personalisation of content.

[13] Out of 10,000 Facebook content views, for example, between 23 and 27 apparently contain scenes of explicit violence (Facebook's transparency report, figures for Q3 2018).
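The prevalence figure cited in footnote 13 can be made concrete with a trivial calculation: 23 to 27 views of violent content per 10,000 content views corresponds to a prevalence of 0.23% to 0.27% of views. This is exactly the kind of aggregate, views-based metric that only the platform itself is in a position to compute.

```python
# Prevalence range reported in Facebook's Q3 2018 transparency report (footnote 13):
# an estimated 23 to 27 views of violating violent content per 10,000 content views.
views_sampled = 10_000
low, high = 23, 27

print(f"prevalence of violent content: {low / views_sampled:.2%} to {high / views_sampled:.2%} of views")
# -> prevalence of violent content: 0.23% to 0.27% of views
```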
Even if the abuses are committed by users, social networks' role in the presentation and selective promotion of content, the inadequacy of their moderation systems and the lack of transparency of their platforms' operation justify intervention by the public authorities, notwithstanding the efforts made by certain operators. The development of public policies designed to prevent abuses and misuse of social networks therefore appears necessary, but it should be subject to particular precautions in several respects. Firstly, it will be necessary to take into account the diversity of operators providing these types of services and, at least initially, to concentrate on those with the most influence over our societies. Secondly, any state intervention must be strictly necessary, proportionate and transparent whenever it affects public freedoms as important as the freedom of expression and the freedom of communication.

II – Promoting a new public policy approach

The inadequacy and lack of credibility of the self-regulatory approach adopted by the largest platforms justify public intervention to make them more responsible. That intervention must be based on a balance to be defined between the punishment of authors of harmful content and pragmatic and flexible ex ante regulation of operators providing social networking services, within a revised European framework.

2.1 Platforms are using self-regulation, with limited results

The work carried out with Facebook, supplemented by discussions with other operators, shows that the platforms are striving to develop a self-regulation approach. In the case of Facebook, the mission found self-regulatory mechanisms endowed with growing dedicated resources:
- recent transparency on the detailed content of the "community standards";
- an increase in human resources and the development of mass-processing algorithms dedicated to the moderation system;
- the current development of "distributed" moderation tools available to users;
- the internal organisation of the moderation function and the publication of transparency reports;
- attention to the establishment of open governance structures extending beyond platform representatives, in particular a supervisory board made up of independent experts, responsible for reviewing moderation decisions.

As for YouTube, the self-regulatory approach of Google's video-sharing platform includes tools to educate users of the platform about prohibited behaviour and to support them if they are the victims of aggressive behaviour by other users. For example, the platform has employed influencers to try to change user behaviour, especially among the youngest users. However, the effectiveness of these a priori commendable initiatives remains to be assessed.

In addition, the mission was able to observe that the moderation system does not only involve the removal of content considered toxic: there is a range of possible responses, depending on the type of content and the degree of potential damage it could cause (quarantining, hiding with a prevention message, dereferencing, warnings, etc.).

The speed of deployment and the progress made during the last 12 months by an operator such as Facebook show the benefits of capitalising on this self-regulatory approach already being used by the platforms, by expanding and legitimising it.
The self-regulatory capacity observed among these operators makes it possible to position them as key elements of the solution to the social cohesion issues raised by the presence of certain content on their platforms. This solution cannot be reduced to simply removing obviously illicit content, but must be enhanced in order to prevent harm and to respond to all possible situations according to their severity and the risk to users: quarantine, deceleration, demonetisation, reminders of the community rules, targeted education, etc.

Nevertheless, even anticipating the full effect of the operators' stated ambitions, the mission found that this would not offer a sufficient response to public policy concerns:

- The extreme asymmetry of information between social network operators, on the one hand, and public authorities and civil society, on the other, considerably undermines the credibility of a self-regulation approach.

Neither the public authorities nor civil society know how much credence to give to operators' statements. They have access to practically the same level of information as a user. None of the information made public by the platforms concerning their self-regulatory actions can be corroborated by objective facts. This limitation is inherent in the functioning of the main social network services, owing to the personalisation of the content provided: creating an account on these platforms allows an observer to see only a tiny fraction of the content, and only the platform itself is able to measure impacts at a global scale. This lack of credibility is heightened by the enormous volume of content and the number of users of the platforms, which necessarily require processing by algorithms based on a statistical approach. Being unable to prove the existence of a systemic failure by the platform, the public authorities and representatives of civil society are reduced to highlighting individual examples of unmoderated or poorly moderated content. Yet these isolated failures are insufficient to prove a potential systemic failure. The persistent dissatisfaction of the public authorities can be explained in particular by their inability to assess the measurable reality and value of the self-regulation carried out by these operators, due to a lack of information validated by a trusted third party.

- Self-regulation remains too "inward-looking".

No doubt because of its lack of maturity, self-regulation remains unconvincing because social network operators hold all the cards: they draw up their terms of use, decide to what extent to be bound by them, modify them as necessary without any public formalities, interpret them without any possibility of appeal and report on their implementation in the form and at the frequency they consider appropriate. Due to their recent, competing and disparate nature, social network services have each developed their own model of self-regulation. The minimum level of credibility normally provided by a sectoral approach allowing for "peer review" – e.g. the approach taken by the ARPP (French Advertising Regulatory Authority) – is absent here and does not seem to be contemplated.

- Self-regulation is agile but is not subject to any form of supervision.

Social network platforms are agile. They have developed with an entrepreneurial spirit which, sometimes deliberately, disregards certain regulatory constraints in order to preserve their ability to innovate.
They constantly test the efficiency and relevance of their user interface, their algorithms and the organisation of their moderation function, particularly using A/B testing [14]. Voluntarily giving up, even partially, what has been their main strength therefore remains a major challenge for their management teams. Today, it may well be argued that the major platforms are developing a self-regulatory approach not in order to assimilate and fully address general public policy objectives, but rather to contain any risk of coercive intervention by the public authorities and pressure from civil society, in order to avoid damage to their reputation. In this context, all of the initiatives taken, however relevant, lack credibility and are difficult to assess.

[14] A technique involving testing a feature, editorial choice, graphical interface or new algorithm on two different groups of users to assess its effects.

Today, as far as the mission is aware, no social network has adopted truly enforceable rules in terms of providing users with information about terms of use, processes to amend them, or mechanisms to involve civil society or the public authorities in their development, even in an advisory capacity. The credibility of the self-regulatory approach clearly suffers as a result.

An approach that puts the social networks at the heart of the regulatory model therefore seems quite relevant. In this model, the social network platform incorporates public interest objectives, modifies its organisation, adapts itself to this "social" objective and acts either upstream, at the design stage, to prevent difficulties, abuses and other misuse of its service, or downstream, to address unacceptable behaviour by its users. To borrow from the GDPR term "privacy by design" relating to personal data, we could speak of "accountability by design" for the processing of content by the social networks.

2.2 Developing a public policy to make the platforms more accountable

The aim of the public intervention model is therefore not to regulate the activity, i.e. to impose functional or technical constraints on the services provided, but to make the social network operators more accountable through a legally binding obligation to put resources in place and to account for them. Such a model, insofar as it minimises public interference in the functioning of a media industry whose core purpose is to serve as a medium for individual expression, would also have the virtue of minimising criticism concerning the risk of the manipulation of information by the public authorities. This criticism is inherent in the industry's purpose. It should not deter public intervention, but it does call for special precautions.

More direct regulatory interventions, such as those in the energy, transport, telecommunications, traditional audiovisual media and online gambling industries, would also seem to be less appropriate, since those involve activities clearly attributable to a given national territory and therefore to the jurisdiction of a single regulator. The social networks, however, often transcend national geographical borders. A discussion of content written in French inciting hatred of refugees, published by a user located outside the European Union, may be the subject of comments, also potentially hateful, by other French-speaking users located anywhere in the world.
An intervention method using co-regulatory mechanisms, imposing the internal assimilation of public interest objectives without defining the methods, would make it possible to limit the impact on social network services. This new method of public intervention, focused on creating a duty to defend the integrity of the social network and its members, on the one hand, and on improving the credibility of self-regulation, on the other, would not undermine the founding principles of social networks, in other words their unique, ubiquitous and agile nature.

2.3 A public policy dynamic that must find a balance between a punitive approach and making social networks increasingly accountable through preventive regulation

In several European countries, the initial public policy response to issues identified on social networks has been to implement or strengthen punitive sanctions targeting the authors of content deemed unlawful as well as the platforms, which, because they display and host the content, appear to bear the same liability as the author, or at least to be "accomplices". The punitive policy is necessary in that it expresses the rules adopted by society in a clear and visible way. It is also required, on a purely political level, in situations of manifest disruption of public order. The punitive policy is effective only if it is comprehensible and enforced, so as to avoid any feeling of impunity for the authors of unacceptable content.

The punitive policy, in particular because it opens up the possibility of recognising harm and compensating it, is an essential tool, but it cannot achieve all public policy objectives. It is limited by the fact that it necessarily intervenes ex post, to sanction unlawful behaviour recognised as such by a court. The powers devolved to the criminal and civil authorities and the legislative timetable currently make it impossible to anticipate changes in social networks and the disruption they can cause.

Unlike traditional media, social networks do not select each item of content published on the service. This is a defining characteristic of such services. Punitive measures against them therefore raise several difficulties. The social network finds itself in the position of an ex post censor of all users' posts on its network, essentially after these are flagged (using the platform's interface) or notified (under the LCEN's specific arrangements) by users or the public authorities. By imposing an absolute standard of conformity that does not take into account the volume of published content, the audience or the statistical nature of the processing, punitive measures risk encouraging over-moderation and thereby infringing freedom of expression, which is constitutionally and conventionally protected.

Moreover, the punitive approach requires the platforms to judge the manifest lawfulness of content themselves. They consider that this lawfulness is particularly difficult to assess from the triple perspective of legislative intention, prosecution practices – which are by definition more selective depending on the chances of prosecution – and the case law of national and international courts, which is based on balancing freedom of expression against public order imperatives. To the best of our knowledge, the establishment of "guidelines on manifestly hateful content" by an administrative authority, even an independent one, does not seem to be a very satisfactory solution.
Lastly, the scope of content that is not "manifestly unlawful" (the grey zone) varies according to geography and does not easily lend itself to European or international harmonisation, particularly when it comes to content that could be qualified as inciting hatred, because of the historical, cultural and legislative associations specific to each state. This is all the more true since punitive measures are often implemented or strengthened after a crisis that arouses strong reactions in public opinion and calls for, and authorises, a strong political response. Except in exceptional cases, these crisis situations are local and do not cross borders (or only to a minimal extent). The conditions for a supranational harmonisation of punitive responses are therefore very difficult to meet.

Punitive measures should be supplemented by the adoption of a second public policy component designed to make platforms more accountable by establishing an obligation of transparency and an obligation to defend the integrity of the social network and its members, creating targeted and comprehensible incentives.

2.4 European cooperation to be reviewed

At the current stage of European integration, social cohesion is primarily established at the level of Member States. Although services are global, the damage resulting from their existence occurs at a national level. In Europe, due to the different languages in use, communities on social networks are formed on the basis of linguistic affiliation and most frequently on a national or sub-national basis. The accountability of social network operators should be organised at the level of Member States, which are more directly affected by abuses, rather than at the level of the European Union itself, which remains removed from crises and their consequences in terms of public order and social harmony.

As a result of the coherent legal space it offers, the European Union is nevertheless in a unique position to allow Member States to act coherently in respect of global and geographically ubiquitous operators. The European Union offers the ability to bring the combined weight of Member States to bear against the power of the large social networking platforms when those states adopt the same standard of regulation. The European Union is also in a strong position to reduce the risks of failure or excessive regulation by public policies, by reducing political risk at the level of each individual Member State. This capacity has been demonstrated in particular in the telecommunications sector, where the transformation of public monopolies into a competitive industry has benefited significantly from the European capacity to moderate occasionally excessive regulatory decisions and to overcome national inertia.

Nevertheless, taking advantage of the opportunities of European construction requires a new ambition:

- The dialogue between the operators, the Member States and the European Commission on the monitoring of the self-regulation of illegal content seems to be bearing fruit, in light of the results of the fourth evaluation of the EU Code of Conduct on combating illegal online hate speech, published last February. [15] Nevertheless, there is undoubtedly room for progress: this initiative suffers from the great distance between the European level at which implementation occurs and the location of the damage.
Moreover, although the commitments provided for in the framework of this approach are relevant, the voluntary nature of those commitments, without any penalty mechanism, is revealing its limitations (see above);

- the legal regulatory framework was built around the principle of the country of origin, giving exclusive regulatory responsibility to the Member State in which the service is established. The ability of each Member State, apart from the one hosting the service, to tackle any failings by a global player is therefore drastically reduced by this European cooperation, thereby increasing the political risk in the destination country (in which the damage is produced). Yet the authorities of the country where a breach by the service has been identified are those best placed to establish and correct that breach, especially when it involves assessing an abuse of freedom of expression (e.g. the publication of hate content), which must be assessed in light of the social, political, cultural and historical context of the state affected. Moreover, the political risk between Member States is simultaneously increased, since a single Member State receives the exclusive benefit of the platform's establishment on its territory, reducing its incentive to intervene in the event of breaches by a platform, while the other Member States suffer the potential damage and remain powerless to act.

This structure, based on the jurisdiction of the country of origin, is reflected in the Audiovisual Media Services Directive and in the version of the draft regulation to combat the dissemination of terrorist content adopted by the Council of Member States last December. The establishment of a European regulation based on the principle of jurisdiction of the installation country would strip the Member States of their ability to take action against the large social network platforms present on their territories. Conversely, the establishment of national regulations by each Member State would produce a high risk of incompatibilities between those regulations, due to the unique and ubiquitous nature of social networking services, which would then become subject to contradictory and therefore ineffective rulings. These national systems would also be exposed to the risk of non-compliance with treaties.

[15] http://europa.eu/rapid/press-release_IP-19-805_fr.html

In contrast, cooperation based on a European regulation and the principle of the "destination country" could reinforce each Member State's capacity to cope with the difficulties generated by global players such as the largest social networks, while reducing the political risk for those players:

- By organising the platform's accountability according to the destination country, it creates an incentive for each social network to prevent abuses in each region. This restores geographical coherence between the location of the regulatory function and the location of the damage.
- A common framework, laying down uniform obligations and defined at European level via a directly applicable regulation, would guarantee the consistency and uniformity of the legal standard across all regions. This is essential in respect of global players enjoying practical ubiquity. Each Member State becomes responsible for implementing a common rule within its territory. This enables coordinated action by Member States and their regulatory authorities.
It also allows mechanisms to be established at European level to mitigate the political risk for each Member State, using various proven mechanisms, including peer review, the establishment of a board of regulators, direct supervision by the European Commission and judicial review by the ECJ. These structures have already been tested and implemented in respect of the regulation on net neutrality in the European Union.

In the absence of such a mechanism, and in view of the issues at stake, Member States risk unilaterally launching legislative initiatives with disparate scopes and obligations, as demonstrated by the German law, to the detriment of the digital single market approach and to the benefit of existing players, which would be the only ones able to bear the economic burden of a series of national regulations (thereby encouraging a "winner takes all" outcome).

The challenge of setting up a French regulatory framework for social networks must be viewed in light of its ability, under the new EU presidency, to serve as the basis for a proposal to:
- reverse the current European trend, moving away from the logic of the installation country, which weakens the sovereignty of Member States and their capacity to tackle globalisation, and switching to a destination-country approach in order to strengthen the Member States;
- reduce political risk;
- and increase the legitimacy of European integration.

Unlike the punitive approach, the establishment of an ex-ante regulatory framework, adopting a compliance approach focused on creating strong incentives to make the social network and its members accountable as well as on strengthening the credibility of self-regulation, offers a unique opportunity to propose a change in Europe's orientation.

2.5 A regulatory policy based on a compliance approach, to be applied and designed with pragmatism and agility

In the financial sector, governments have attempted to promote a credible and long-term commitment by financial institutions to actively contribute to achieving the public interest objectives of combating money laundering, drug trafficking and the financing of terrorism. Banking supervisory authorities have therefore devoted their efforts to imposing and monitoring obligations of means, i.e. compliance with certain preventive rules, rather than punishing failures when the risks being combated materialise (without prejudice to criminal proceedings in that case). The banking supervisory authorities therefore do not intervene when it is found that a financial institution has served as a channel for funds used for unlawful purposes, but when they find that a financial institution is not implementing a prescribed prevention measure, regardless of whether or not the financial institution is implicated in unlawful behaviour.

This intervention approach is designed to create targeted incentives for platforms to participate in achieving a public interest objective, without any direct normative action on the service offered. Applied to social networking services, this type of intervention involves identifying the few generic obligations likely to create the right incentives, particularly by increasing the effectiveness of political dialogue with Member States' political institutions and their civil societies. This implies imposing strong obligations of transparency on key systems that are unobservable from the outside, i.e.
the moderation system (and the procedures for developing and updating the terms of use that underlie it), as well as the use of algorithms for targeting and personalising the content presented.

This approach must be implemented progressively and pragmatically according to the size of the operators and their services:

- Only the services with the most influence due to their size – and therefore the most dangerous in terms of their potential impact in the event of abusive use – should be subject to these obligations and to ex ante compliance checks by the regulator. The regulator should focus mainly on active supervision of the systemic actors.

- Mid-size services should be allowed an initial presumption of compliance and receive support from the regulator, which could encourage their commitment to accountability by issuing recommendations and implementing measures to increase the pooling of common assets (e.g. a database identifying unlawful content, or access to annotated data sets for machine learning by moderation algorithms) with the largest platforms, in an open, transparent approach that lowers barriers to entry. Nevertheless, if the regulator finds that a mid-size service is in clear breach of these obligations of means and that this results in an excessive manifestation of the harmful effects being combated by the public policy, the regulator must be able to ask the operator providing that service to take appropriate measures and, if those measures are not implemented by the platform, must also be able, in a reasoned decision, to impose on it an enforceable ex ante compliance procedure similar to that imposed on the most influential services.

- Finally, concerning the smallest platforms, the regulator must be able to act only through dialogue and the issuing of recommendations, without coercive action, but without undermining its capacity to call the public prosecutor's attention to any act that might be subject to criminal proceedings.

Mechanisms enforcing criminal and civil liability, particularly under the LCEN (the French law to promote confidence in the digital economy), nevertheless remain applicable to all social networking services, regardless of their size. These are not a matter for the regulator's action but for the judicial bodies and common law procedures.

The legislative framework should allow gradual implementation of these mechanisms and recognition of the new regulatory system, as well as a capacity to define and gradually refine the obligations imposed, taking an agile approach in order to adapt quickly to changing social networks. In other words, the law establishing this regulatory framework must define the nature of the obligations without seeking to define detailed procedures. Otherwise, the regulatory framework could easily be bypassed by "overly" agile operators.

III – Organisation of the regulatory function in France within a framework defined at the European level

It appears necessary to supplement the punitive measures against authors publishing unlawful content or seeking to manipulate social networks with a proactive, dialogue-based approach, founded on strengthening the political dialogue between the public authorities and the actors concerned. Creating the conditions for constructive and regular dialogue with the social network platforms should enable them to take up the proposed solutions, by encouraging them to adopt a responsible approach towards their users and society and to prevent abusive use of their services.
This regulatory policy could be based on the following five pillars:

First pillar
A public regulatory policy with broad objectives guaranteeing individual freedoms and entrepreneurial freedom

To unite political energies at both national and European level and to bring together political institutions and civil society, the objectives of the regulatory system must be to defend the exercise of all rights and freedoms on social media platforms:
- individuals' freedom of expression and communication, with individuals therefore being entitled to understand how the platform respects that freedom;
- the individual freedom of users to be protected in their physical and moral integrity, including on social networks in the digital space;
- social networks' entrepreneurial freedom, including the right to define and apply terms of use, to operate an unrestricted information ordering system and to innovate (especially for smaller operators).

The objectives could also include secondary objectives of:
- pluralism of social network services, and therefore a public policy stance ensuring that new services are supported and that no entry barriers are created;
- social cohesion, by encouraging the social networks to develop "positive" uses of their services, i.e. uses that strengthen social relations.

Second pillar
A prescriptive regulation focusing on the accountability of the social networks, implemented by an independent administrative authority

- A regulation whose coercive action is limited solely to the structuring platforms at the level of each Member State, with two thresholds:
  - The regulation would apply automatically to services whose number of monthly users rises beyond a certain percentage of the population of the Member State (between 10% and 20%).
  - The regulatory system would also be applied, but only following a reasoned decision by the regulator in the event of an identified and persistent breach, to services whose monthly number of users lies between 0% and 5% of the population of the Member State. It should be noted that the lower this second application threshold, the more stringent and demanding the impact test must be in order to comply with a general principle of proportionality. [16]
  - The regulation is not applicable below these thresholds, but the common law provisions of the LCEN remain in force, allowing actions for the civil and criminal liability of the operators in case of breaches.

- Transparency obligations concerning the key functions of the social networks [17]:
  - The function of "ordering content": that is to say, the methods of presentation, prioritisation and targeting of the content published by the users, including when content is promoted by the platform or by a third party in return for remuneration;
  - The system for implementing the terms of use and moderating content, including the methods for drawing up these community rules, the procedures, and the human and technological resources implemented to ensure compliance with these terms and to fight against illegal content. This system must be able to be audited by the regulator and/or by an independent auditor of the platform.

  Transparency can be seen in, for example:
  - The obligation to notify the platform's decision to the author of moderated content (except for legitimate exceptions, e.g.
if required by the public prosecutor) and to the person who flagged the content (where applicable);
  - An independent and extra-judicial mechanism for reviewing the platform's decision (without prejudice to a judicial remedy);
  - The use of automated processing tools: which tools are used, for what types of content and with what human supervision? How are their effectiveness and accuracy assessed?
  - The procedures for cooperating with "trusted flaggers": the list of flaggers, the selection procedures, the "privileges" attached to that status, statistical data on the number of reports examined, the number of items of content detected proactively, the follow-up given (removal, retention, etc.), the appeals processed, etc.
  - Statistics concerning moderation efficiency: decision times (whatever the decision may be), false positive/negative rates, the virality/audience of content contrary to community standards before it was withdrawn (see the concept of prevalence), etc. (an illustrative sketch of such metrics follows this pillar).

- Obligation to defend the integrity of the social network and its members
Through this obligation, which is close to the Anglo-American "duty of care", the social networks are responsible for protecting their own integrity and that of their members, i.e. for protecting users from abuse by other members and from third-party attempts to manipulate the platform. This obligation of means would allow intervention by the public authorities if it appeared that the platforms' currently voluntary approach to ensuring that their users can have confidence – through the creation of "trust and safety" systems or through the moderation system – lacks resources.

[16] This second threshold could be replaced by a "malfeasance" criterion in order to give the regulator the capacity to tackle any social network raising issues. Nevertheless, this raises the question of the appropriate level of the regulator's resources and of the need to create a criterion allowing the regulator to stand down in respect of small operators that no longer raise issues.
[17] The details of the transparency obligations set out above are given for illustrative purposes. They do not necessarily need to be included exhaustively in the text of the law defining the regulatory system, and many may be left to the regulatory discretion of the government or the regulator.
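The "moderation efficiency" statistics listed under the transparency obligations above can be made concrete with a toy calculation. The audit records, field layout and figures below are invented for the example; a real assessment would rely on a statistically designed sample validated by the regulator or an independent auditor.

```python
from statistics import median

# Hypothetical audit sample: (violates_rules, was_removed, hours_to_decision or None).
audit = [
    (True,  True,  3.0),
    (True,  False, None),   # violating content left up  -> false negative
    (False, True,  12.0),   # legitimate content removed -> false positive (over-moderation)
    (False, False, None),
    (True,  True,  20.0),
]

violations = [r for r in audit if r[0]]
legitimate = [r for r in audit if not r[0]]

false_negative_rate = sum(1 for _, removed, _ in violations if not removed) / len(violations)
false_positive_rate = sum(1 for _, removed, _ in legitimate if removed) / len(legitimate)
decision_times = [t for _, removed, t in audit if removed]

print(f"false negative rate: {false_negative_rate:.0%}")   # share of violations left up
print(f"false positive rate: {false_positive_rate:.0%}")   # share of legitimate content removed
print(f"median decision time: {median(decision_times)} h")
```

The point of such metrics, in the logic of the report, is that they can only be verified if the regulator or an auditor has access to the underlying data, which is precisely what the transparency obligation is meant to secure.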
o Where applicable, the platforms make voluntary commitments to the government, which are subject to verification and enforcement by the regulator: for example, the implementation of an action plan to tackle a newly identified abuse, the improvement of transparency metrics for the coming year, etc.

18 Specifically, the content of the terms of use remains within the social networks' entrepreneurial freedom, although the transparency provided by the regulator makes them subject to political dialogue. For example, Facebook acknowledges that, under political pressure, it changed the scope of its community regulations to better protect refugees from hate speech.

Fourth pillar
An independent administrative authority, acting in partnership with other branches of the state, and open to civil society

o In the regulatory system proposed, the independent administrative authority guarantees the accountability of social networks, for the benefit of the government and civil society.
o It implements coercive regulation autonomously but must not be autocratic or hegemonic. It regulates the accountability of the large social network platforms by policing the transparency obligations attached to content ordering and moderation systems, as well as the obligation to defend the integrity of the social network and its members. It is neither the regulator of social networking services as a whole, nor the regulator of the contents published on them, and it does not have jurisdiction over contents taken individually. It cooperates with the state agencies placed under the authority of the government and with the justice system.
o It does not directly impose restrictions on the definition of the social networking services offered, but it does impose the publication and dissemination of information, the veracity and relevance of which it seeks to qualify with the help of civil society (NGOs, regions and the educational and academic communities). It verifies the effectiveness of the resources deployed to comply with the obligation to defend the integrity of the social network and its members.
o It must have wide-ranging access to the information held by the platforms, including the ability to use borrowed identities and to require special access to algorithms in order to verify the accuracy of the description published by the social network.19 It cannot be challenged on the grounds of business secrecy or personal data protection, without prejudice to its obligation to protect the data it requires in accordance with the GDPR and trade secrecy laws.
o It has an administrative sanctioning power enabling it to impose (1) mandatory publicity on the social network, aimed at its users and/or its commercial partners (i.e. the advertisers responsible for the platform's turnover), and (2) pecuniary sanctions up to a maximum of [4%]20 of the total global turnover of the social network operator. These sanctioning powers may be exercised only after formal notice.
o It has the mandate and legal competence to set up links enabling academic research on the platforms using their data, in compliance with the GDPR.
o It has a mandate to encourage the pooling of resources and knowledge for the benefit of smaller social networks, thereby contributing to lowering entry barriers.
o It actively participates in the European regulators' network and supports the government's action in the negotiation of European policy.

19 Through the implementation of direct, real-time, targeted and proportional access to social network information systems via dedicated interfaces (APIs).
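Purely by way of illustration of the verification logic behind these transparency obligations and the dedicated interfaces mentioned in footnote 19, the following Python sketch compares the aggregate figures a platform declares with an independently audited sample of the same period. The report prescribes no reporting format: every field name, the reporting period and the tolerance used here are assumptions made for the example.

```python
# Hypothetical sketch only: the report defines no concrete reporting format.
# Field names, the reporting period and the tolerance are all assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class ModerationReport:
    """Aggregate figures a platform might be required to expose to the regulator."""
    period: str                   # e.g. "2019-Q2"
    items_flagged: int            # content reported by users or trusted flaggers
    items_removed: int            # content removed for breaching the terms of use
    median_decision_hours: float  # time taken to reach a moderation decision
    false_positive_rate: float    # share of removals overturned on appeal
    false_negative_rate: float    # share of flagged unlawful content left online

def check_declared_figures(declared: ModerationReport,
                           audited: ModerationReport,
                           tolerance: float = 0.05) -> List[str]:
    """Compare declared figures with an independent audit of the same period.

    Returns the metrics whose relative gap exceeds the tolerance, i.e. the
    points on which the regulator would ask the platform for justification.
    """
    findings = []
    for field in ("items_flagged", "items_removed", "median_decision_hours",
                  "false_positive_rate", "false_negative_rate"):
        declared_value = getattr(declared, field)
        audited_value = getattr(audited, field)
        if audited_value and abs(declared_value - audited_value) / audited_value > tolerance:
            findings.append(f"{field}: declared {declared_value}, audited {audited_value}")
    return findings

declared = ModerationReport("2019-Q2", 120_000, 30_000, 18.0, 0.04, 0.10)
audited = ModerationReport("2019-Q2", 120_500, 29_800, 26.0, 0.05, 0.22)
print(check_declared_figures(declared, audited))
```

Whatever format is ultimately chosen, the point is that the figures published under the transparency obligations must be verifiable against data the regulator can obtain independently.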
20 The mission has not conducted specific work on the level of pecuniary penalties to be set and uses the amounts provided for by the GDPR.

Fifth pillar
European cooperation which reinforces Member States' capacity to deal with global platforms and reduces the risks of implementation in each Member State

In view of the power of global platforms, the mission proposes that the European Union organise the networking of governments and their civil societies so that they can join forces. This structure strengthens Member States' role as guarantors of social cohesion in a globalised world. This European-level coordination must be based on:
o A European regulation, in order to recognise the global character intrinsic to any digital platform, to ensure the full effectiveness of coordinated and networked action by national authorities when facing global players (in particular via common procedures, common APIs, etc.), and to reduce the risks of implementation in each Member State.
o National implementation according to the destination country rule,21 to make the platforms responsible locally in each Member State and in the regions where they may create damage.
o Concerted action between national authorities, open to civil societies, in order to increase the effectiveness of the verification of the platforms' transparency.
o European mechanisms to reduce the risk of excessive regulation by a Member State ("check and balance"), an essential corollary of the competing competence of each Member State: national and European public consultation on regulatory decisions or recommendations, a mechanism for referral of the opinions of the national personal data regulator, and coordination and coherence of national regulatory decisions by a collegial body bringing together the national regulators and the European Commission.

21 Member States' competing jurisdiction should be limited by prerequisites: (1) a threshold expressed as the average number of monthly users as a percentage of the population at national level to establish the regulator's automatic jurisdiction, (2) a lower threshold combined with a finding of manifest harm in the destination country, and (3) powers and penalties limited in proportion to the potential consequences suffered in the Member State.

IV – Focus on the transparency of algorithms

Increasingly complex algorithms

Users of social networks encounter algorithms every day, from the ranking of content in the newsfeed and the insertion of sponsored content to the moderation of content contrary to the terms of use, friend suggestions, etc. When dealing with the transparency of these algorithms, it is necessary to take the definition of the word algorithm ("a set of operating rules whose application makes it possible to solve a stated problem using a limited number of operations") in a broad sense, also encompassing the algorithm's input data, the data previously used to "train" the algorithm, context data, etc. Many algorithms are based on statistical approaches and therefore provide probabilistic answers, such as the probability that you might like a piece of content or click on a product. Some use machine learning techniques that involve trying to mimic human choices, for example the moderators' choices to accept or reject content, grouped into collections of annotated content. Most of them are also highly personalised. Finally, the algorithms also evolve significantly over time, sometimes being updated daily.
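To make the statistical nature of these moderation algorithms concrete, here is a deliberately minimal sketch, assuming scikit-learn and a toy corpus of annotated decisions, of a model that learns to mimic moderators' accept/remove choices and returns a probability rather than a verdict. Real systems are vastly larger, multilingual, personalised and frequently retrained, as noted above.

```python
# Minimal sketch of the kind of statistical moderation model described above:
# a classifier trained to mimic moderators' accept/remove decisions on a
# corpus of annotated content, returning a probability rather than a verdict.
# The training examples are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

annotated_corpus = [
    ("have a nice day everyone", 0),          # kept by moderators
    ("this community is wonderful", 0),
    ("I will hurt you and your family", 1),   # removed by moderators
    ("people like you should disappear", 1),
]
texts, labels = zip(*annotated_corpus)

# The pipeline learns, from past human decisions, a statistical rule that
# generalises to new content; its output is a probability, not a certainty.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_post = "people like you should be hurt"
p_reject = model.predict_proba([new_post])[0][1]
print(f"probability that a moderator would remove this post: {p_reject:.2f}")
```

This probabilistic output is precisely why error rates, rather than individual decisions, are the natural object of transparency for such systems.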
All of these factors lead to a paradigm shift in the intelligibility of algorithms. Each algorithm may have billions of avatars that do not all behave in exactly the same way, depending on the user or the country. Nevertheless, public action needs to extract the general principles.

A need for algorithmic transparency

Algorithms are tools that may be misused or misappropriated. The importance they have gained on social networking platforms and the abuses they may cause (promotion of hate speech, ineffective moderation, interference by a sovereign state in the public debate, etc.) have made state intervention vital. This involves transparency, i.e. the means to make the underlying logic intelligible, the main processing principles applied by the algorithms. This first level, which requires little intervention from the public authorities, allows a partial unveiling of the impenetrable workings of certain algorithms. It may also point to possible operating biases (whether due to developers' conscious or unconscious choices, to the computer programs themselves or to data bias), but above all it fuels the public debate on the social questions raised by the widespread use of algorithms.

In real terms, algorithmic transparency takes a variety of forms. For a private individual without any particular technical skills, it could mean publishing the key criteria that led to a result concerning him or her (information ranking, a recommendation, targeted advertising, etc.) or the reasons for a particular decision (moderation of a post, or a lack of response following a report). A more expert operator will be interested in more comprehensive measurements of algorithms' performance (false positive or negative rates in moderation) or in explanations of the processing architecture in the form of decision trees or other graphic representations revealing the data taken into account by the algorithm and its influence on the results. The academic world will surely be interested in the publication of reference datasets, making it possible to challenge platforms' moderation choices, without which it is impossible to reproduce the results of a learning algorithm.

Transparency cannot simply be declared. Checking the integrity of the algorithms used by companies is a particularly complex task. The regulator must have the resources to do this using statistical measures, the provision of API testing tools and third-party certification or compliance mechanisms (an illustrative sketch of such a measurement is given below).

Public action at two levels

Transparency will be effective only if it results from regular dialogue with operators and from a process of trial and error mimicking the development process of those algorithms. The key principles for action by the public authorities therefore clearly need to be defined by legislation in order to impose transparency on the algorithms, while giving flexibility to the regulator responsible for applying the law. The legislature must therefore define the legal principles to be followed by the regulatory authorities, leaving them to adapt to particular contexts. In practice, it will particularly be necessary to establish a proper equilibrium between the principle of transparency and the protection of business secrecy, and to define general obligations of intelligibility in respect of the relevant algorithms. It would not be advisable, however, to define particular metrics in legislation or to impose specific implementation procedures.
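The sketch below illustrates the kind of statistical measurement referred to above: given an independent reference dataset annotated by the regulator or a trusted third party, the platform's moderation decisions on the same items can be compared against it and a confidence interval attached to the observed error rates. All figures are invented placeholders and no real platform interface is assumed.

```python
# Sketch of the statistical verification evoked above: measuring a platform's
# moderation decisions against an independent reference dataset and attaching
# a confidence interval to the observed error rates. The reference data and
# the platform's decisions below are placeholders.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for an observed proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

# ground truth: 1 = content that breaches the rules, 0 = acceptable content
reference_labels   = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
# what the platform's moderation system decided on the same items
platform_decisions = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]

false_negatives = sum(1 for truth, decision in zip(reference_labels, platform_decisions)
                      if truth == 1 and decision == 0)
false_positives = sum(1 for truth, decision in zip(reference_labels, platform_decisions)
                      if truth == 0 and decision == 1)
n_illegal = sum(reference_labels)
n_legal = len(reference_labels) - n_illegal

fn_rate = false_negatives / n_illegal
fp_rate = false_positives / n_legal
print(f"false negative rate: {fn_rate:.2f}, 95% CI {wilson_interval(false_negatives, n_illegal)}")
print(f"false positive rate: {fp_rate:.2f}, 95% CI {wilson_interval(false_positives, n_legal)}")
```

Measurements of this kind give verifiable answers to the false positive/false negative and confidence-interval questions listed at the end of this part.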
Article 14 of the law relating to combating the manipulation of information offers an example: it is not understood by the sector, which finds it either excessively or insufficiently specific (what happens if the algorithm is customised? What happens if the algorithm is updated? What does "share of direct access" mean in the context of a ranking algorithm?). Since the law has empowered the regulator to extract information, the regulator is able to study the specific characteristics of each algorithm on a case-by-case basis in order to ensure effective transparency. It is this approach, by definition experimental and involving co-construction and trial and error, that will allow the regulator to develop its policy tools incrementally at the regulatory level. This could subsequently lead to the emergence of synergies allowing broader principles governing the transparency of algorithms to be defined.

Beyond the technical and legal issues surrounding algorithms and their transparency, the aim of regulation will be to bring the ethical issues and the moral and political choices raised by algorithms into the public debate, to clearly reveal the "algorithmic policy". The regulator's accumulated knowledge and the data it will collect will then be able to better inform societal and academic debates by responding to questions on the range of issues underlying the concept of transparency, for example regarding:

The data:
o What data is used to train the algorithms? How is it collected? Is it personal data?
o What data is used in the algorithm's input parameters?
o Does this data present biases?

The model:
o What processing flow is followed? What algorithmic components are used?
o What supervisory/monitoring mechanisms are used in algorithmic learning?
o What personalisation is carried out?
o Does the algorithm reproduce biases? Can it be misused? What are the potential abuses to avoid?

Inferences:
o What are the false positive/false negative rates?
o Which metrics can be used to report on the algorithm's performance?
o What confidence interval may be applied to the result?
o What procedures are used to correct errors?

Appendix 1
Mission letter

MINISTRY OF THE ECONOMY AND FINANCE
STATE SECRETARIAT FOR DIGITAL AFFAIRS
MINISTRY OF ACTION AND PUBLIC ACCOUNTS

Mr Frédéric Potier, Prefect on public service mission, DILCRAH
Mr Serge Abiteboul, Inria researcher, member of the Arcep College

Paris, 11 March 2019

Re.: Mission letter – Facebook mission

Dear Sirs,

Social networks now occupy an essential place in our society by offering their users powerful spaces and tools for exchanging ideas, discussing and sharing content. In this respect, they provide a fantastic opportunity to exercise freedom of expression and freedom of communication, the foundations of our democratic society. While they are symbols of progress and spaces for freedom, social networks are also forums for the dissemination of unlawful content, which can be seen by a potentially very large audience. As with other media, freedom of expression and freedom of communication must be reconciled with other principles that may limit their exercise, including respect for the dignity of the human person. The laws of the Republic obviously apply in the digital space, to users and platforms alike. However, the volume of content, the speed with which it spreads on social networks and its impact on society justify the complementary implementation of a systemic regulation of moderation systems on social networks.
Existing moderation tools, developed by social networks on their own initiative and adopted on a self-regulatory basis, while relevant in some respects, do not provide sufficient guarantees for the exercise of the fundamental rights of our fellow citizens. These private initiatives can no longer manage without support from the public authorities.

This was the conviction that led me to entrust you with a mission to investigate and make proposals concerning social networks' content moderation systems. The lessons of this mission will complete the conclusions of the mission report entrusted by the Government to Ms Laetitia Avia, Mr Karim Amellal and Mr Gil Taieb.1

It is in this context that Facebook, taking an experimental approach, has agreed to work with your fact-finding mission and to present its moderation system and its development prospects, with particular attention to combating the dissemination of content that incites hatred. On the basis of this unprecedented collaboration with a private operator, you will assess the self-regulation systems adopted by Facebook. You will particularly study the algorithmic processing developed and used by Facebook in this respect. You will propose recommendations, whose risks and opportunities you will first have analysed, and in particular the precautions to be considered in order to build a national regulatory framework which can be developed on a larger scale, particularly in Europe, in view of the global nature of social networks. This experiment should be seen as a first step in a process of reflection, enabling very specific consideration of the best ways to ensure that all social networks, not just Facebook, apply very high standards and quality requirements in moderating the content that they host.

To carry out this mission, you will consult all stakeholders and take into account any initiatives already under way in other countries. You will draw on a team of experts whose composition is set out in the appendix and on three rapporteurs made available for the duration of the mission. With your agreement, Mr Benoît Loutrel will be the general rapporteur. You will report periodically on the progress of your work to a steering committee made up of the offices of the relevant ministers and chaired by my representative. The lessons learned from this experiment will feed into the national and European regulatory work in this field.

I wish to have your report available by 30 June 2019 at the latest.

Mounir Mahjoubi

1 Report submitted to the Prime Minister on 20 September 2018, entitled "Strengthening the fight against racism and anti-Semitism on the internet".

Appendix 2
The members of the mission

Serge Abiteboul
A director of research in computer science at Inria and at the École Normale Supérieure in Paris, and a member of the Collège de l'Arcep, Serge Abiteboul is an expert in databases, information and knowledge. He was a professor (Collège de France, Stanford, Oxford University, etc.), a member of the CNNum, and a startupper. He is a member of France's Academy of Sciences. He is also a blogger and author.

Frédéric Potier
After a career in the Ministry of the Interior and in a ministerial cabinet, Frédéric Potier was appointed in May 2017 prefect on public service mission and interministerial delegate for the fight against racism, anti-Semitism and anti-LGBT hatred (DILCRAH). In this post, he guides the State's public policy against certain forms of discrimination and against hate speech and acts.
Côme Berbain
An engineer from the corps des Mines with a doctorate in cryptology, Côme Berbain is the State director for digital technology. His career has alternated between private entities (Orange, Trusted Logic) and public entities (Ministry of Defence, ANSSI) in the fields of digital transformation and cybersecurity. He was an adviser to the office of the Secretary of State in charge of digital affairs in 2017 and 2018.

Jean-Baptiste Gourdin
A graduate of the Paris Institute of Political Studies and of the ENA, Jean-Baptiste Gourdin is department head and Deputy Director-General of the Directorate General of Media and Cultural Industries (DGMIC). He is a magistrate at the Cour des comptes and was chief of staff to the President of the CSA and coordinator of the mission "Act 2 of the cultural exception".

Jacques Martinon
A court magistrate, Jacques Martinon started his career as a trial judge. In 2016, he joined the Department of Criminal Affairs and Pardons (DACG), where he is head of the Justice Ministry's mission to fight cybercrime. He is a contributor to the SGDSN's cyberdefence strategy magazine and has been a trainer for the INHESJ/IHEDN joint national session on "digital sovereignty and cybersecurity".

Gilles Schwoerer
Gilles Schwoerer is a gendarmerie officer who worked in a wide variety of State sectors (Army, Departmental Gendarmerie, specialised gendarmeries related to aeronautics) before joining the Centre for combating digital crime (C3N) within the Gendarmerie's judicial centre, where he is Deputy Chief.

Aude Signourel
After 7 years at the Ministry of Justice and 3 years in the cabinet of the prefect of Seine-Saint-Denis, Aude Signourel joined the Directorate of Civil Liberties and Legal Affairs at the Ministry of the Interior. Since 2017, she has been legal advisor to the Cybercrime sub-directorate of the Central Directorate of the Judicial Police, which hosts the PHAROS platform.

The mission rapporteurs

Sacha Desmaris
A law graduate of the University of Paris 2 Panthéon-Assas, Sacha Desmaris is now head of the Audiovisual Economics Department at the Directorate of Studies, Economic Affairs and Forecasting of the Higher Audiovisual Council (CSA), where she was a mission head from 2016 to 2019, after having worked for four years at the general secretariat of the M6 Group.

Pierre Dubreuil
An engineering graduate of Telecom ParisTech and of the École Normale Supérieure Paris-Saclay, Pierre Dubreuil is a specialist in machine learning and the co-founder of a startup; he is a mission head within the Telecommunications and Postal Regulatory Authority (Arcep).

Benoît Loutrel
An Inspector General of INSEE specialised in industrial economics and regulation, Benoît Loutrel was Director General of the Telecommunications and Postal Regulatory Authority (Arcep) from 2013 to 2016, after having worked there from 2004 to 2010. He was director of the "digital" programme within the Investments for the Future scheme from 2010 to 2013. He was director of public affairs for Google France for a few months in 2017.