Structure for the White Paper on artificial intelligence – a European approach

The development of artificial intelligence (AI) will have profound impacts on our societies. The purpose of this White Paper is to put forward proposals to develop a European approach to artificial intelligence, which will help prepare our societies for the challenges and opportunities that artificial intelligence is creating. These proposals cover the main building blocks of a European approach, including actions to support the development and uptake of artificial intelligence, actions to facilitate access to data, and the key pillars of a future regulatory framework for artificial intelligence. With this White Paper, the Commission launches a broad consultation process and invites all relevant stakeholders to comment on the proposals for this European approach.

There is currently no consensus at international level on the definition of the term "artificial intelligence". The term is used to describe a variety of technologies with certain common features. The High-Level Expert Group on Artificial Intelligence set up by the Commission described artificial intelligence systems as "software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal".

The objective of the European approach to artificial intelligence is to promote the development and uptake of artificial intelligence across Europe, while ensuring that the technology is developed and used in a way that respects European values and principles.
Given that other major economies, in particular the US and China, are supporting artificial intelligence, it is essential to ensure that European citizens and companies can both benefit from the technology and shape the way it develops. Beyond productivity and efficiency gains, artificial intelligence promises to enable humans to develop analytical capacities not yet reached, opening the way to new discoveries and helping to solve some of the world's biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats. However, Europe can only seize these opportunities if it can strengthen its leadership in developing artificial intelligence applications and increase the uptake of this technology.

Artificial intelligence has a transformational potential for industry. It helps make products, services and processes more efficient. It can also help factories to stay in or return to Europe. It improves products, services, processes and business models in all economic sectors. It can help companies identify which machines will need maintenance before they break down. It is at our fingertips when we translate texts online. [Footnote: For further detail, please see the April 2019 opinion of the High-Level Expert Group on Artificial Intelligence.] It is making life easier for the visually impaired by assisting them in perceiving objects in their daily lives. At home, a smart thermostat can reduce energy bills by up to 25% by analysing the habits of the people who live in the house and adjusting the temperature accordingly. Artificial intelligence also has potential to improve the delivery of public services by making them more efficient and accessible and to better allocate scarce resources and budgets.

Artificial intelligence is a strategic technology that can bring tremendous opportunities.
At the same time, it has distinct characteristics that raise specific challenges in terms of governance, and in relation to the safety and liability of devices and systems equipped with it. These characteristics include autonomy (e.g. performing tasks in complex environments without constant guidance), opacity, and the ability to improve performance by learning from experience. While the promise of artificial intelligence systems is that they will spot patterns in the data and make decisions faster than humans do, the risk is that they may make inappropriate decisions, and that the reasoning behind those decisions may not be known. This raises concerns related to liability, discrimination and transparency, which should be addressed in a regulatory framework.

This White Paper is structured as follows:

• Section 2 describes the existing policy framework for artificial intelligence at the EU level and beyond.
• Section 3 outlines in more detail the policy actions in support of the development and uptake of artificial intelligence across Europe, including on investment, skills, and small and medium-sized enterprises.
• Section 4 sets out ideas on how best to facilitate access to data, which is a prerequisite for developing the vast majority of today's artificial intelligence systems.
• Section 5 constitutes the main part of the White Paper, setting out the key elements of a future comprehensive European legislative framework for artificial intelligence, which respects European values and principles.
• Section 6 contains the conclusions, setting out the Commission's intention for the next steps and the relevant timeline for receiving the contributions of stakeholders.
The White Paper is accompanied by three other documents:

• the Report on the broader implications of artificial intelligence, Internet of Things and other digital technologies for the EU safety and liability framework;
• the proposal for a new Council Regulation on high performance computing; and
• the review of the Coordinated Plan on Artificial Intelligence (COM(2020) xxx).

2. POLICY FRAMEWORK

EU framework

This White Paper builds on the existing policy framework, including the Communication on 'Artificial Intelligence for Europe' and the Coordinated Plan on Artificial Intelligence developed together with Member States. The Communication focuses on three key pillars of the European artificial intelligence strategy: support for the technological and industrial capacity and the uptake of artificial intelligence across the economy; preparing for the socio-economic changes brought about by artificial intelligence; and ensuring an appropriate ethical and legal framework. It also launched the work of the European AI Alliance as a forum to bring together a broad range of stakeholders, as well as the High-Level Expert Group on Artificial Intelligence and the Expert Group on Liability and New Technologies. With the subsequent Communication of April 2019, the Commission welcomed the Ethics Guidelines for Trustworthy Artificial Intelligence prepared by the High-Level Expert Group. The guidelines list seven requirements that artificial intelligence systems should meet in order to be trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. This approach also includes tools to help translate the ethical principles into practice. Industry and other stakeholders have recently tested these tools.

International aspects

The work on artificial intelligence has influenced international discussions.
When developing its ethical guidelines, the High-Level Expert Group involved a number of non-EU organisations (from the US and Canada) as well as several governmental observers (from Japan and Singapore). In parallel, the EU was closely involved in developing the Organisation for Economic Co-operation and Development's ethical principles for artificial intelligence, which were subsequently endorsed by the G20. Given that China and the US remain the most important global players in artificial intelligence, the EU seeks to cooperate with them based on a strategic approach that protects its interests (mainstreaming European standards, accessing key resources including data, creating a level playing field). The Commission is convinced that international cooperation must be based on a like-minded approach to the EU's fundamental values, such as respect for human dignity, pluralism, non-discrimination and protection of privacy. Under the Partnership Instrument, the Commission will finance a €2.5 million project that will support cooperation with partners, in order to promote the artificial intelligence ethical guidelines and to adopt common principles and operational conclusions.

3. SUPPORTING THE DEVELOPMENT AND UPTAKE OF ARTIFICIAL INTELLIGENCE

Europe is well-placed to benefit from the potential of artificial intelligence, not only as a user but also as a producer of artificial intelligence. It has excellent research centres, which publish more scientific articles related to artificial intelligence than any other region in the world; a world-leading position in robotics; competitive business-to-business markets; as well as competitive manufacturing and services sectors, from automotive to healthcare, from energy to financial services to agriculture. It holds large amounts of public and industrial data and has well-recognised technology and industrial strengths in low-power consumption and in safe and secure digital systems that are essential for the further development of artificial intelligence.
One reason for Europe's strong position in terms of research is the EU funding programme, which has proven instrumental in federating efforts, avoiding duplication, and leveraging public and private investments in the Member States. Over the past two years, EU funding for activities related to artificial intelligence has gone up to €1.5 billion, i.e. an increase of 70% relative to the previous period. However, investment in research and innovation in Europe is still a fraction of the public and private investments in other regions of the world. Some €3.2 billion were invested in artificial intelligence in Europe in 2016, compared to around €12.1 billion in North America and €6.5 billion in Asia. To respond to the challenge, Europe needs to increase its investment levels significantly. The Coordinated Plan on Artificial Intelligence developed with Member States, Norway and Switzerland has proven invaluable in building stronger cooperation on artificial intelligence in Europe and in creating synergies for maximising investments into the artificial intelligence value chain.

[Figure: number of AI players by country and AI players relative to GDP.]

Europe should leverage its strengths to expand its position on markets along the value chain, from hardware manufacturing through software all the way to services. This is already happening to an extent: Europe produces more than a quarter of industrial and professional service robots (e.g. for precision farming, security, health, logistics), and plays an important role in the development and exploitation of platforms providing services to companies and organisations (applications to progress towards the "intelligent enterprise") and e-government. Europe has a weak position in consumer applications and online platforms, which results in a competitive disadvantage in data access. However, there are also opportunities.
Whereas around 80% of the current 40 zettabytes of data is stored in data centres, many of which are owned or controlled by non-European operators, the advent of the Internet of Things and edge computing could result in a radical change in the distribution of data. As a result, 80% of the 175 zettabytes of data that is expected to be available in 2025 should be stored locally at the edge (in networks, factories, hospitals, etc.). Similarly, Europe should expand into the area of specialised processors and augment its computing capabilities. Currently, this market is dominated by third countries, but this could change with the help of the European Processor Initiative, which addresses the development of low-power computing systems for both edge and the next generation of high-performance computing. Moreover, Europe is leading in neuromorphic solutions that are ideally suited to the automation of industrial processes (Industry 4.0) and transport modes. They offer improvements of several orders of magnitude in energy efficiency; the availability of testing and experimentation facilities will greatly help the application of neuromorphic solutions. In parallel, Europe will continue to lead progress in the algorithmic foundations of AI, building on its own scientific excellence. This will require building bridges between existing silos, such as machine learning and symbolic approaches, where Europe is historically very strong. Such efforts will support Europe's technological sovereignty in the long term. EU-level funding can ensure cross-fertilisation of European developments in artificial intelligence in areas where this will make a difference and where the efforts required go beyond what any single Member State can achieve.

Key proposals to address the above-mentioned issues include:
✓ Establishing a world-leading artificial intelligence computing and data infrastructure in Europe: a comprehensive data and computing infrastructure using as a basis High Performance Computing centres and edge computing capacities, through the EuroHPC Joint Undertaking. The deployment of a next-generation high-performance computing infrastructure will be complemented by a European federation of interoperable, flexible and scalable cloud and computing infrastructures and targeted cloud-based artificial intelligence services (to be funded through Horizon Europe and the Digital Europe Programme). Supporting the deployment of common European data spaces to facilitate pooling and sharing of data from across Europe is also crucial and will be addressed in a specific initiative.

✓ Federating knowledge and achieving excellence: drawing on the Commission's long-term efforts to strengthen the European scientific community and make Europe the "place to be", we will reinforce European excellence centres for artificial intelligence and facilitate their collaboration and networking through strengthened coordination. Currently, there is no single institution which can be recognised as a leader by the entire community in all the four major sub-disciplines where Europe can lead: foundational research in artificial intelligence algorithms, perception and interaction, robotics, and the next generation of chips for artificial intelligence. As a first step, the Commission helped foster consolidation in the individual sub-branches of artificial intelligence, addressing the fragmentation in these fields. In a second step, the Commission will strengthen coordination of the locally distributed networks. Moreover, it aims at establishing networks of leading universities to attract the best professors and scientists and offer world-leading master programmes in artificial intelligence (to be funded through Horizon Europe and the Digital Europe Programme).
✓ Supporting research and innovation to stay at the forefront and create new markets: a 'Leaders' Group' will be set up with C-level representatives of major stakeholders, to develop an industrial strategy and commit to its implementation. The Leaders' Group will also offer strategic guidance to a new public-private partnership on artificial intelligence, data and robotics involving all relevant stakeholders (funded through Horizon Europe), strengthening cooperation between academia and industry. Funding of large-scale testing facilities, including for neuromorphic computing, under the Digital Europe Programme will help bring innovation closer to the market.

✓ Fostering the uptake of artificial intelligence: improving the uptake of artificial intelligence is a key task of the Digital Innovation Hubs. These Hubs will be strengthened and supported through the Digital Europe Programme, which will also support the uptake in high-impact application sectors such as healthcare (e.g. artificial intelligence for health imaging, genomics, testing medicines and medical devices), mobility (cross-border corridors for connected and automated mobility) and environmental modelling and monitoring (e.g. a highly accurate prediction and crisis management capacity). Additionally, the artificial-intelligence-on-demand platform should become a reference point for knowledge related to artificial intelligence: algorithms, tools, infrastructure, equipment, and data resources.

✓ Ensuring access to finance for artificial intelligence innovators: a pilot scheme will be launched under InnovFin to provide equity financing for artificial intelligence and blockchain innovative developments, and will be scaled up through InvestEU in 2021.

4. ACCESS TO DATA

Ensuring access to data for EU businesses and the public sector is a prerequisite for developing artificial intelligence. This emerges from national artificial intelligence strategies developed across the EU.
Data is an important driver of innovation and creates new opportunities for growth, including for small and medium-sized enterprises. The optimal use of data can help us live healthier and longer lives that are more environmentally friendly.

• The EU can build on its comprehensive legal framework for data and its use in the economy, including the General Data Protection Regulation, the Regulation on the Free Flow of Non-Personal Data, and the Open Data Directive. The Annex to this White Paper gives an overview of the existing legislation on data access and use in the EU and an assessment of its relevance for the development of artificial intelligence.

• The Commission sees the development of common European data spaces as a key measure to address the problem of data access. These spaces will combine the technical infrastructure for data sharing with governance mechanisms. They will be organised by sector (for example agriculture) or problem area (for example climate change). This White Paper presents a series of further measures to ensure data availability in the common European data spaces. On some of these actions, work has already started. Others are to be addressed in the near future. In a separate policy document, the Commission will present its overall strategy on data, including additional measures related to data access and use that require further analysis and discussion.

[Figure: Projection of data and computing growth (logarithmic scale). Source: JRC based on Kambatla et al.]

• Based on the recently revised Open Data Directive, the Commission intends to adopt by early 2021 an implementing act on high-value public sector datasets. These datasets should be made available for free and in machine-readable format, well suited for artificial intelligence development. This concerns geospatial data, environmental and earth observation data, meteorological, mobility and business data, and statistics.
Further categories could be added by way of a delegated act.

• A clear link needs to be made between the data policies and the EU-level investments (please see Section 3). In particular, the Commission wants to support the development of common European data spaces under the Digital Europe Programme. This also includes support to national agencies for publishing high-value datasets.

Key questions to be further addressed:

• What are the main issues concerning data used for training artificial intelligence? Quality of data? Biased data? Interoperability? Access to existing data?
• Are there any existing initiatives in the private sector to improve pooling and sharing of data for the purpose of machine learning? If so, how can we help scale them up? Where they do not exist, why not?
• What is the existing policy framework to facilitate access to and use of data? Which problems could be better solved at national level and which at the EU level?

5. A DRAFT OUTLINE OF A REGULATORY FRAMEWORK FOR ARTIFICIAL INTELLIGENCE

The regulatory framework for artificial intelligence has to be consistent with the overall objectives of the European approach to artificial intelligence, i.e. to promote Europe's innovation capacity in this new and promising field, while simultaneously ensuring that this technology is developed and used in a way that respects European values and principles. As with any new technology, the use of artificial intelligence brings both new opportunities and new risks. In addition, artificial intelligence poses distinctive challenges from a regulatory point of view, as products and services based on artificial intelligence combine data dependency (data generation, processing and analysis) with almost omnipresent connectivity within new technological ecosystems, such as the Internet of Things and cloud computing. Artificial intelligence is already subject to an extensive body of EU legislation, including on fundamental rights (e.g. data protection, non-discrimination, gender equality, asylum, copyright),
consumer law, and product safety and liability. However, given the fast development of artificial intelligence technology, this legislation might not fully cover all of the specific risks that artificial intelligence brings, possibly revealing certain regulatory gaps or weaknesses that were not apparent before. This also includes a lack of effective regulatory tools to ensure that artificial intelligence complies with existing requirements. A balanced and values-based regulatory framework will not only support the widespread adoption of this technology, but will also help European companies to benefit fully from a friction-less single market to scale up their operations across Europe. It must carefully complement and build upon the existing EU and national legal frameworks, to provide policy continuity and ensure legal certainty. Such a proportionate approach, focused on addressing well-defined risks and gaps, will help to avoid unnecessary additional regulatory and administrative burdens and ensure that European innovation continues to thrive.

A. PROBLEM DEFINITION

In spite of the opportunities that artificial intelligence can provide, it can also lead to harm. A potential harm brought by artificial intelligence might be both material (loss of life, safety and health of individuals, damage to property, etc.) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, etc.), and can relate to a wide array of risks.

Risks for fundamental rights, including discrimination, privacy and data protection

Bias and discrimination are inherent elements of any societal or economic activity. Human decision-making is also prone to mistakes and biases. However, the same level of bias, when present in an artificial intelligence system, could affect and discriminate against many people without the social control mechanisms that govern human behaviour. In addition to discrimination, artificial intelligence may lead to breaches of other fundamental rights, including freedom of expression,
freedom of assembly, human dignity, private life or the right to a fair trial and remedy. [Footnote: Council of Europe research shows that a large number of fundamental rights could be impacted by the use of artificial intelligence.] These risks might be a result of flawed design of artificial intelligence systems (e.g. the system is programmed to discard female job applications) or the input of biased data (e.g. the system is trained using only data from white males). They can also occur when the system 'learns' during the use phase, for example when an artificial intelligence system 'learns' that students with the best academic results share the same postal codes, which happen to be prevalently white in population. The risks will in such cases not stem from a flaw in the original design of the system but from the practical impacts of the correlations or patterns that the system identifies in a large dataset.

An employer was advertising for a job opening in a male-dominated industry via a social media platform. The platform's ad algorithm pushed the jobs to only men to maximise returns on the number and quality of applicants. Source: Noam Scheiber, "Facebook Accused of Allowing Bias Against Women in Job Ads", The New York Times, September 2018.

Artificial intelligence might also give rise to risks for privacy and the protection of personal data. For example, private and public actors can use artificial intelligence to identify people who want to remain anonymous. Employers can use artificial intelligence to observe the working patterns of their employees. Companies can track the daily habits of people and listen in to private communication. Artificial intelligence technologies can be used for mass surveillance of the general population by state authorities. By analysing large amounts of non-personal data and identifying links among them, artificial intelligence can also be used to retrace and de-anonymise personal data about certain people.
Artificial intelligence programmes for facial analysis display gender and racial bias, demonstrating low errors for determining the gender of lighter-skinned men but high errors in determining gender for darker-skinned women. Source: Larry Hardesty, "Study Finds Gender and Skin-Type Bias in Commercial Artificial-Intelligence Systems", MIT News, February 2018.

Safety and liability risks

Artificial intelligence technologies may present new safety risks for users when they are embedded in products and services. For example, due to a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage. As with the risks to fundamental rights, such risks can be a result of flaws in the design of the artificial intelligence technology, problems with the availability and quality of data, or problems stemming from machine learning. In the case of connected objects, the loss of connectivity may lead to safety risks. While some of these risks are not limited to products and services relying on artificial intelligence, the presence of artificial intelligence may increase or aggravate such safety risks. If these risks materialise, the characteristics of artificial intelligence make it more difficult to attribute liability. This in turn makes it difficult for victims of damages to seek remedies under the current EU and national liability legislation.

[Footnote: The General Data Protection Regulation and the ePrivacy Directive (new ePrivacy Regulation under negotiation) broadly address these risks, but there might be a need to examine whether artificial intelligence systems pose additional risks. The evaluation of the General Data Protection Regulation will be of relevance in this context.]

[Footnote: The implications of artificial intelligence, Internet of Things and other digital technologies for safety and liability legislation are analysed in the Commission Report accompanying this White Paper.]
In the fatal accident of an Uber autonomous car in Arizona in 2018, the US National Transportation Safety Board observed that the software installed in Uber's vehicles that helps it detect and classify other objects did not include a consideration for jaywalking pedestrians. As a result, the system failed to recognise the woman who was walking her bike across the road as a person. Source: US National Transportation Safety Board report.

The specific characteristics of artificial intelligence technologies, including complexity, autonomy and opacity (the 'black box' effect), may hamper the enforcement of existing EU law. Enforcement authorities might lack the means to verify how a given automated decision was taken, or whether existing rules were respected. Individuals and undertakings may face difficulties with access to justice through private enforcement. Developers and users of artificial intelligence do not necessarily keep information that makes it possible to trace back problematic decisions that artificial intelligence systems make. Enforcement authorities and victims of possible damage may therefore find it difficult to scrutinise these decisions. Victims of damage may not have effective access to justice and may be less protected compared to when damage is caused by traditional technologies. These various risks of harm occurring will increase as the field of applications for artificial intelligence widens and its use becomes more widespread.

In Spain, the complexity of the process used by the public authorities to decide on a discount on energy bills for at-risk individuals and families, combined with malfunctioning software and a lack of information about the nature of rejections, resulted in only … million people out of 5.5 million potential beneficiaries profiting from the so-called Bono Social. The former government estimated 2.5 million people would receive the subsidy.
• Member States are already exploring options for national legislation to address the challenges of artificial intelligence. This may risk fragmenting the single market. A number of Member States (e.g. Estonia, Germany, Italy, Latvia and Sweden) have highlighted the need for regulatory action in their national strategies on artificial intelligence. Divergent national rules may create obstacles for companies that want to sell and operate artificial intelligence systems in the single market. Ensuring a common European approach would enable European companies to benefit from smooth access to the single market and support their competitiveness in global markets.

B. EXISTING EU LEGISLATIVE FRAMEWORK FOR ARTIFICIAL INTELLIGENCE

• The development and use of artificial intelligence is in principle fully covered by a comprehensive body of EU legislation and further by national legislation. As regards the protection of fundamental rights, the EU legislative framework consists of the Charter of Fundamental Rights and sectoral legislation, including the Race Equality Directive, the Employment Equality Directive and the Framework Decision on combating racism and xenophobia. There is also a body of legislation concerning personal data protection and privacy, notably the General Data Protection Regulation.

• The EU also has a legal framework for product safety and liability that consists of the General Product Safety Directive and a number of sector-specific rules covering different categories of products, from machinery to toys and medical devices. This is complemented by the Product Liability Directive, which provides the rules for compensation for damage suffered by a consumer as a result of defective products. While EU legislation in principle applies to artificial intelligence systems, the question arises whether it adequately addresses the risks that artificial intelligence systems pose to fundamental rights. In consultation with Member States, businesses and other stakeholders,
the Commission identified the following weaknesses of the current legislative framework:

• Limitations of scope as regards fundamental rights: for example, the Charter of Fundamental Rights does not apply to situations involving only private sector parties. Similarly, the EU legislation on fundamental rights covers only certain situations, for example access to employment, social protection, education, and public services such as housing. It does not apply horizontally and does not address all possible grounds of discrimination which the Charter sets out.

• Limitations of scope with respect to products: the EU product safety legislation only applies to the placing of products on the market. Therefore, the safety requirements do not apply to services based on artificial intelligence (e.g. health services, financial services, transport services).

• Uncertainty as regards the division of responsibilities between different economic operators in the supply chain: certain economic actors who develop and integrate artificial intelligence into products are not covered by the EU legislation on product safety. The rules do not apply to the developer of artificial intelligence unless (s)he is at the same time the producer of the product.

• Changing functionality of products: the integration of software, including artificial intelligence, into products can modify the functioning of products during their lifecycle. This is particularly true for products that require frequent software updates or which rely on machine learning. These features can give rise to new safety risks that were not present at the time when the product was placed on the market.

• Emergence of new risks: the use of artificial intelligence in products and services can give rise to new safety risks. These may be linked to cyber threats, personal safety risks, risks that result from loss of connectivity, etc.
These risks may be present at the time the products are placed on the market or arise as a result of software updates or machine learning when the product is being used.

• Difficulties linked to enforcement: given the opacity of artificial intelligence ('black-box' characteristics), it may be difficult for authorities to enforce EU legislation, whether on fundamental rights or on safety and liability. The lack of transparency of automated decision-making makes it difficult to prove possible discrimination. It will also make it difficult to attribute liability and to prove causality between a damage and a defect in the design of the artificial intelligence, which in turn will make it difficult to have access to remedies.

Given the issues identified above, the Commission considers it necessary to review and, where necessary, complement the legislative framework applicable to artificial intelligence to make it fit for the current level of technological development and to take fully into account the human and ethical implications.

C. LEGAL DEFINITION OF ARTIFICIAL INTELLIGENCE

• A key issue for the future regulatory framework is the definition of the term "artificial intelligence". From the legal point of view, artificial intelligence is best defined by looking at its functions. A functional definition of artificial intelligence should look at the characteristics that differentiate artificial intelligence from more general terms, such as software. While the term 'software' is not defined in EU law, Directive 2009/24/EC provides a definition of 'computer programme' in its recitals. This is defined as including programs in any form, including those which are incorporated into hardware. Therefore, artificial intelligence could be defined as software (integrated in hardware or self-standing) which provides for the following functions:

• Simulation of human intelligence processes, such as learning,
problem-solving, reasoning and self-correction;

• Performing certain specified complex tasks, such as visual perception, speech recognition, decision-making and translation, with a degree of autonomy, including through self-learning processes;

• Involving the acquisition, processing and rational or reasoned analysis of data, typically in large quantities.

• While other, more technical approaches to the definition are possible, the Commission considers that these approaches would be less suitable in view of the fast pace of technological developments. The definition of artificial intelligence must be sufficiently flexible to accommodate technical progress while providing the necessary legal certainty.

D. ADDRESSEES

• Many economic actors are involved in the lifecycle of an artificial intelligence system. These include the developer of the algorithm, the producer, distributor or importer of a product based on artificial intelligence, the supplier of services based on artificial intelligence, and the operator or user of a product based on artificial intelligence.

• The main principle guiding the attribution of roles and responsibilities in the future regulatory framework should be that the responsibility lies with the actor(s) who is/are best placed to address it. Therefore, while developers of artificial intelligence are best placed to address risks that arise from the development phase, their ability to control risks during the use phase may be more limited. This would also reflect the approach taken in EU safety legislation, which lays down obligations for the different economic operators involved in placing products on the market, and to a limited extent for consumers and professional users, taking into account their different roles and knowledge.

• Therefore, the future regulatory framework for artificial intelligence should set out obligations for both developers and users of artificial intelligence. It could also include obligations for other groups, such as suppliers of services (e.
g. third-party software updates). This approach will require that different requirements are assigned to different types of addressees, given the very different roles that these actors have in the lifecycle of products and services based on artificial intelligence. The obligations on developers of artificial intelligence will focus on the risks that can be addressed while artificial intelligence systems are being developed, while the obligations on users of artificial intelligence will target the risks arising when artificial intelligence systems are being used. This approach would ensure that risks are managed comprehensively while not going beyond what is feasible for any given economic actor.

Such alternative technical approaches to the definition would for instance focus on systems that are trained with machine learning techniques, covering inter alia deep learning and back-propagation, supervised learning, unsupervised learning, reinforcement learning, adversarial networks and symbolic reasoning.

E. POSSIBLE TYPES OF OBLIGATIONS

• When designing the future regulatory framework for artificial intelligence, it will be necessary to decide on the types of legal requirements that should be imposed on the developers and users of artificial intelligence. These requirements can have either a preventative ex ante character (e.g. process requirements, including transparency and accountability requirements, that shape the design of artificial intelligence systems) or an ex post character (e.g. requirements on redress and remedies). Preventative ex ante requirements aim to reduce risks created by artificial intelligence before products or services that rely on artificial intelligence are placed on the market or are provided. Ex post requirements address situations once the harm has materialised and would aim either to facilitate enforcement or to provide possibilities of redress or other types of remedy. While safety risks can largely be addressed through ex ante requirements,
addressing liability issues requires ex post requirements. Addressing risks to fundamental rights will probably require a combination of ex ante and ex post requirements.

Ex ante requirements could include:

• Accountability and transparency requirements for developers (also as part of the ex post mechanism for enforcement) to disclose the design parameters of the artificial intelligence system (e.g. metadata of datasets used for training, information on conducted audits, etc.);

• Transparency and information requirements for users towards individuals, including requirements for transparent and clear processes and outcomes for consumers;

• General design principles for developers to reduce the risks of the artificial intelligence system;

• Requirements regarding the quality and diversity of data used to train artificial intelligence systems;

• Obligations for developers to carry out an assessment of possible risks and to take steps to minimise them, as well as an obligation to keep records of these risks and of the steps taken to mitigate them;

• Requirements for human oversight or a possible review of the automated decision taken by artificial intelligence by a human (e.g. in case of denial of social benefits) as regards non-personal data (to complement the obligations for automated decision-making under the General Data Protection Regulation);

• Additional safety requirements for producers of products, notably concerning the risk of cyber threats as well as risks for privacy, data protection and personal security with implications for safety (e.g. obligations for the producer to ensure a certain level of protection against such safety risks);

• Requirements addressing the changes to the product during its life-cycle that could affect the safety of the product (e.g. machine learning, software updates).

Ex post requirements could include:

• Requirements on liability for harm/damage caused by a product or a service relying on artificial intelligence,
including the necessary procedural guarantees (possibly differentiating between high-risk and low-risk applications); and

• Requirements on enforcement and redress for individuals and undertakings, including access to existing alternative online dispute resolution systems.

It is important to note that these requirements focus on process (reducing risks ex ante and establishing liability and possible remedies ex post) rather than on achieving specific results, i.e. specifying that artificial intelligence shall not discriminate. It would be technically difficult to avoid all risks associated with artificial intelligence. In addition, imposing specific results would likely require establishing new substantive rights for individuals, e.g. non-discrimination by artificial intelligence. This could lead to regulatory differences between artificial intelligence and traditional products and services. Based on this, the Commission is of the view that the regulatory framework should be based on requirements for the process rather than requirements for specific results, and that the requirements would need to be both ex ante and ex post. The stakeholders' input would be particularly welcome on the list of requirements presented above.

F. POSSIBLE REGULATORY OPTIONS

Given the variety of risks covered, the Commission is looking at the following five regulatory options.

Option 1: Voluntary labelling

This option would consist of a legal instrument setting out a voluntary labelling framework for developers and users of artificial intelligence. They could choose to comply, on a voluntary basis, with requirements for ethical and trustworthy artificial intelligence. Those who complied would be allowed to use the label of 'ethical/trustworthy artificial intelligence'. While participation in the labelling scheme would be voluntary, once the developer or user opted to use the label, the requirements would be binding. This scheme would have to include measures to ensure enforcement.
It should be recognised that voluntary labelling may not be sufficient to address concerns linked to safety and liability, which are already covered by mandatory requirements in EU legislation. Similarly, a voluntary labelling framework would have limited impact on addressing risks linked to fundamental rights. A voluntary framework could nonetheless help to promote 'ethical and trustworthy' artificial intelligence: it would help Europe play an important role in the discussions on 'ethical and trustworthy' artificial intelligence at the international level, while limiting the cost implications for both European and foreign developers and users of artificial intelligence.

Option 2: Sectorial requirements for public administration and facial recognition

This option would focus on a specific area of public concern: the use of artificial intelligence by public authorities. This limited scope could reduce the regulatory and administrative burden and make it easier for developers and users of artificial intelligence systems to ascertain whether or not they fall within the scope of such a regulatory instrument. Although this approach would only address the use of artificial intelligence by public authorities, it could have an important signalling effect on the private sector.

• Specific obligations for the use of artificial intelligence by public administrations could follow the model set by the Canadian Directive on Automated Decision-Making. This approach would aim to ensure that public authorities deploy automated decision systems in a way that reduces risks to public institutions, and leads to more efficient, accurate, consistent and interpretable decisions. It could for instance set out requirements for impact assessments of the algorithms used, quality assurance, redress mechanisms and reporting.

• The requirements for public authorities could be coupled with specific rules on facial recognition systems, irrespective of whether they are used by public or private actors.
These rules could regulate in more detail the use of facial recognition technology (also known as biometric remote identification) in public spaces, complementing the provisions of the General Data Protection Regulation.

• The General Data Protection Regulation already stipulates that data subjects shall receive information about the existence of automated decision-making, including profiling, and meaningful information about the logic involved, as well as the significance and the consequences for the data subject. In addition, unless he or she has given explicit consent, the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects for him or her or significantly affects him or her. This right is subject to some exceptions, notably where automated processing is authorised by Union or Member State law (e.g. for border control management). In these cases, the data controller needs to take measures to safeguard the data subject's rights and freedoms and legitimate interests, and to carry out a data protection impact assessment. These provisions mean that citizens must already be informed of, and consent to, the use of artificial intelligence technology in situations where this can produce legal effects for them or affect them in a similar way.

• Building on these existing provisions, the future regulatory framework could go further and include a time-limited ban on the use of facial recognition technology in public spaces. This would mean that the use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. 3-5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed. This would safeguard the rights of individuals, in particular against any possible abuse of the technology. It would be necessary to foresee some exceptions,
notably for activities in the context of research and development and for security purposes (subject to a decision issued by a relevant court). By its nature, such a ban would be a far-reaching measure that might hamper the development and uptake of this technology. The Commission is therefore of the view that it would be preferable to focus at this stage on full implementation of the provisions in the General Data Protection Regulation. The Commission will consider whether to adopt guidance to facilitate this.

More details are available at https://www.tbs-sct.gc.ca.

Option 3: Mandatory risk-based requirements for high-risk applications

• This option would foresee legally binding requirements for developers and users of artificial intelligence, building on existing EU legislation. Given the need to ensure proportionality, the new requirements could apply only to high-risk applications of artificial intelligence. This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake. This strictly targeted approach would not add any new additional administrative burden on applications that are deemed 'low-risk'.

• A differentiated risk-based approach would allow for better proportionality of the regulatory intervention, but it also requires clear criteria to differentiate between 'low-risk' and 'high-risk' systems. This is necessary to ensure smooth implementation by all relevant economic actors as well as national competent authorities.

• The criteria to determine the level of risk could include the following:

a) Defining high-risk sectors (e.g. healthcare, transport), possibly in combination with an indicative or exhaustive list with the possibility to amend such list;

b) Defining high-risk applications (e.g. predictive policing),
possibly in combination with an indicative or exhaustive list with the possibility to amend such list;

c) Determining the level of risk through a risk assessment carried out by the developer and/or user of artificial intelligence;

d) Other types of criteria taking into account the context:

• whether individuals or legal entities cannot avoid being affected by the output of an artificial intelligence system, or risk suffering serious negative consequences as a result of the decision to 'opt out' (e.g. healthcare applications);

• how important the output of the artificial intelligence system is for an individual or legal entity (e.g. social security benefits);

• whether the output of the artificial intelligence system with a significant effect for an individual or legal entity is irreversible (e.g. collision avoidance in self-driving vehicles);

• whether the individuals or legal entities affected by the output of the artificial intelligence system are in a specific area with a high risk of discrimination (e.g. recruitment proceedings).

Having considered the different options, the Commission is of the opinion that the definition of 'high-risk' applications should rely on a cumulative application of two criteria:

• an exhaustive list of sectors (e.g. transport, police, judiciary) that would be specified in an annex and subject to amendments by means of delegated acts if necessary, and a more abstract definition of 'high-risk' applications along the lines of "'high-risk applications' means applications of artificial intelligence which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity";

• Such a combined application of the two criteria would ensure a narrow scope of application while providing the necessary level of legal certainty for relevant economic operators. Only
those applications meeting both of these criteria would be subject to the mandatory requirements. For 'low-risk' applications, the existing provisions of EU legislation would apply. That includes, for example, the provisions of the General Data Protection Regulation on the information the data subject must receive about the use of automated processing, including profiling, and the obligation to carry out a data protection impact assessment.

Option 4: Safety and liability

The EU acquis includes an extensive body of product safety and liability legislation. While this legal framework has proven its effectiveness, it would be appropriate to consider targeted amendments of the EU safety and liability legislation (including the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive and the Product Liability Directive) to address the specific risks of artificial intelligence. The Report on the broader implications of artificial intelligence, the Internet of Things and other digital technologies for the EU safety and liability framework, which accompanies this White Paper, provides an overview of EU legislation and identifies the shortcomings with respect to the specific risks posed by artificial intelligence and other digital technologies. The aim of the targeted adjustments of EU legislation would be to address those shortcomings. Specific risks which are currently not addressed or not addressed adequately include the risks of cyber threats and risks to personal security, to privacy and to personal data protection. New requirements should address these issues and the risks that are related to software updates and machine learning when products are being used. In addition, adjustments may be needed to clarify the responsibility of developers of artificial intelligence and to distinguish it from the responsibilities of the producer of the products using the artificial intelligence.
The scope of the legislation should also be reviewed to determine whether artificial intelligence systems, which are currently not covered by the definition of products, should be covered. Similar changes will also be required to the provisions concerning the liability for damages caused by defective products. Changes to the Product Liability Directive would also aim to ease the burden of proof for consumers, ensuring easier access to justice. To assess the impacts of these targeted changes, the Commission will launch work on the impact assessment(s). The changes could take the form of specific amendments to individual pieces of EU legislation or a new horizontal piece of legislation that would include the relevant requirements for artificial intelligence. This option could be combined with any of the other three options set out above. This combined approach would ensure that all relevant risks posed by artificial intelligence systems are addressed while taking into account the specificities of the existing legal framework.

Option 5: Governance

To ensure that any future rules on artificial intelligence bring about the anticipated benefits for consumers and businesses, an effective system of enforcement will be an essential component of the future regulatory framework. This will require a strong system of public oversight. This system should, as much as possible, build on the existing network of authorities. It should consist of national authorities that will be entrusted with the implementation and enforcement of the future regulatory framework. In addition, it will be necessary to foresee a mechanism to foster cooperation among national authorities across the EU and facilitate the exchange of information, knowledge and best practice.

• There are already a number of different authorities involved in implementing and enforcing EU legislation, including in the areas of fundamental rights, data protection and safety. For example, under the General Data Protection Regulation,
each Member State had to appoint one or more supervisory authorities to monitor the application of the Regulation. The Regulation also foresees a European Data Protection Board with a number of tasks, including advising the Commission on issues linked to data protection and preparing guidelines, recommendations and best practices. EU safety legislation, including the General Product Safety Directive and the new Regulation on market surveillance and compliance of products, also requires Member States to nominate authorities to monitor the compliance of products with the safety requirements. Both pieces of legislation foresee specific cooperation mechanisms: a Consumer Safety Network and a Union Product Compliance Network.

• Given the specificity and complexity of the regulatory challenges posed by artificial intelligence, it would nonetheless be appropriate for Member States to appoint authorities responsible for monitoring the overall application and enforcement of the future regulatory framework for artificial intelligence. Member States will be free to decide that these tasks should be entrusted to existing authorities in order to minimise any additional administrative burden. These authorities could be responsible not only for monitoring the application of the new legislation addressing specifically artificial intelligence but also for providing guidance on horizontal questions of relevance for the overall EU regulatory framework for artificial intelligence. The Commission will also set up an appropriate mechanism to promote cooperation between the relevant national authorities.

• [The Commission is of the view that Option 3 set out above, combined with Options 4 and 5, seems to be the most promising way to address the risks specific to artificial intelligence. Therefore, the Commission may consider a combination of a horizontal instrument setting out transparency and accountability requirements and covering also the governance framework,
complemented by targeted amendments of existing EU safety and liability legislation. The horizontal instrument would be relevant both for enforcing EU fundamental rights legislation and existing EU safety and liability legislation, and possibly also national legislation.]

6. CONCLUSIONS

[To be developed, also referring to the broad reflection after the consultation phase]

The Commission invites comments on the proposals set out in the White Paper. They may be sent by …, either by e-mail to … or by post to ….

It is standard practice for the Commission to publish submissions received in response to a public consultation. However, it is possible to request that submissions, or parts thereof, remain confidential. Should this be the case, please indicate clearly on the front page of your submission that it should not be made public and also send a non-confidential version of your submission to the Commission for publication.