United States Government Accountability Office

Testimony Before the Subcommittees on Research and Technology and Energy, Committee on Science, Space, and Technology, House of Representatives

For Release on Delivery Expected at 10:30 a.m. ET Tuesday, June 26, 2018

ARTIFICIAL INTELLIGENCE
Emerging Opportunities, Challenges, and Implications for Policy and Research

Statement of Timothy M. Persons, Chief Scientist, Applied Research and Methods

GAO-18-644T

Chairwoman Comstock, Chairman Weber, and Ranking Members Lipinski and Veasey:

Thank you for the opportunity to discuss our work on artificial intelligence (AI). My testimony today summarizes our March 2018 technology assessment entitled Artificial Intelligence: Emerging Opportunities, Challenges, and Implications (GAO-18-142SP, Washington, D.C.: March 28, 2018). According to experts, AI holds substantial promise not only for improving human life and economic competitiveness in a variety of capacities, but also for helping to solve some of society's most pressing challenges. At the same time, AI poses new risks: it has the potential to displace workers in some sectors, requires new skills and adaptability to changing workforce needs, and could exacerbate socioeconomic inequality.

Our March 2018 report and my statement today address the following topics:

• How has AI evolved over time?
• According to experts, what are the opportunities and future promise, as well as the principal challenges and risks, of AI?
• According to experts, what are the policy implications and research priorities resulting from advances in AI?

For our March 2018 report, the Comptroller General of the United States convened a Forum on Artificial Intelligence, a meeting of 21 expert participants held on July 6 and 7, 2017, with the assistance of the National Academy of Sciences. Forum participants came from academia, business, government, and nonprofit organizations; a complete list appears in appendix II of GAO-18-142SP. The work for the report also included a review of relevant literature and consultation with additional subject-matter experts. Additional information about our scope and methodology can be found in our report. We performed the work on which this testimony is based in accordance with all sections of GAO's Quality Assurance Framework that are relevant to technology assessments.

GAO currently has work underway on how automation is affecting labor markets, which we expect to publish in early 2019. Because of the strategic importance of AI in the health care sector, GAO is planning a forum to develop a technology assessment of that area as well.

The Evolution and Characteristics of AI

Several Definitions and Taxonomies of AI Exist

The field of artificial intelligence can be traced back to a 1956 workshop organized by John McCarthy and held at Dartmouth College. The workshop's goal was to explore how machines could be used to simulate human intelligence. Numerous factors, primarily the trends underlying big data (i.e., increased data availability, storage, and processing power), have contributed to rapid innovation and accomplishments in AI in recent years (for more on these trends, see GAO, Highlights of a Forum: Data and Analytics Innovation: Emerging Opportunities and Challenges, GAO-16-659SP, Washington, D.C.: Sept. 20, 2016). As we noted in our March 2018 technology assessment, there is no single universally accepted definition of AI, but rather differing definitions and taxonomies.
In addition to defining AI overall, researchers have distinguished between narrow and general AI. Narrow AI refers to applications that provide domain-specific expertise or task completion, whereas general AI refers to an AI application that exhibits intelligence comparable to a human's, or beyond, across the range of contexts in which humans interact. While there has been considerable progress in developing AI that outperforms humans in specific domains, some observers believe that general AI is unlikely to be achieved for decades to come.

AI Has Been Conceptualized as Having Three Waves of Development

In our March 2018 work, we noted that rather than focusing on a specific definition, AI can be understood in terms of the waves in which the technology has developed. Launchbury (2016) provides a framework that conceptualizes AI as having three waves based on differences in capabilities with respect to perceiving, learning, abstracting, and reasoning (John Launchbury, A DARPA Perspective on Artificial Intelligence, 2016).

• The first wave of AI is represented by expert knowledge or criteria developed in law or other authoritative sources and encoded into a computer algorithm, referred to as an expert system. Examples of expert systems include programs that schedule logistics or prepare taxes.
• Second-wave AI is based on machine learning, or statistical learning, and includes natural-language processing (e.g., voice recognition) and computer-vision technologies, among others. In contrast to first-wave systems, second-wave systems are designed to perceive and learn. Examples include voice-activated digital assistants, applications that assist health care workers in selecting treatment options or making diagnoses, and self-driving (automated) vehicles.
• Third-wave AI technologies combine the strengths of first- and second-wave AI and are also capable of contextual sophistication, abstraction, and explanation. An example of third-wave AI is a ship that can operate at sea without human intervention for months at a time, sensing other ships, navigating sea lanes, and carrying out necessary tasks.

As described by Launchbury, we are just at the beginning of the third wave of AI, and further research is needed before third-wave technologies become prevalent. An important part of third-wave AI will be developing systems that are not only capable of adapting to new situations but also able to explain to users the reasoning behind their decisions.
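To make the distinction between the first two waves concrete, the minimal sketch below may help; it is illustrative only and not drawn from our assessment, and the eligibility rule, figures, and threshold are hypothetical. It contrasts a hand-encoded criterion, in the spirit of a first-wave expert system, with a toy second-wave learner that estimates a comparable cutoff from labeled examples.

```python
# Illustrative sketch only: contrasts a first-wave "expert system" rule,
# encoded by hand from an authoritative source, with a toy second-wave
# approach that learns its decision rule from labeled data.
# The eligibility rule, figures, and threshold are hypothetical.

def first_wave_credit_eligible(income: float) -> bool:
    """First wave: the criterion is written directly into the program."""
    return income < 40_000  # cutoff specified by a (hypothetical) expert rule

def fit_second_wave_cutoff(examples: list[tuple[float, bool]]) -> float:
    """Second wave (toy version): estimate the cutoff from labeled examples
    rather than encoding it by hand."""
    eligible = [income for income, label in examples if label]
    ineligible = [income for income, label in examples if not label]
    # Place the learned threshold midway between the two classes.
    return (max(eligible) + min(ineligible)) / 2

if __name__ == "__main__":
    training_data = [(18_000, True), (35_000, True), (52_000, False), (90_000, False)]
    learned_cutoff = fit_second_wave_cutoff(training_data)
    print(first_wave_credit_eligible(30_000))  # True, by the encoded rule
    print(30_000 < learned_cutoff)             # True, by the learned rule
```

The point of the contrast is that the first function changes only when an expert re-encodes the rule, whereas the second adapts as the labeled examples change, which is why second-wave systems are described as designed to perceive and learn.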
Forum Participants Identified Several Benefits of Artificial Intelligence and Challenges to Its Development

The increased adoption of artificial intelligence will bring with it several benefits, as well as a number of challenges. According to participants at the forum we convened for our March 2018 technology assessment, the benefits and challenges will need to be carefully considered alongside one another. Figure 1 summarizes selected questions, benefits, and challenges regarding the use of AI in four high-consequence sectors. Participants also stressed that there may be benefits related to AI that cannot yet be predicted or may even be hard to imagine.

Figure 1: Selected Questions Regarding the Use of Artificial Intelligence (AI) in Four High-Consequence Sectors

Benefits Identified by Forum Participants

Improved economic outcomes and increased levels of productivity. It may be difficult to accurately predict AI's impact on the economy, according to one forum participant. In previous periods, large investments in automation have been highly correlated with improvements in productivity and economic outcomes, which, according to this participant, has led some to believe that transformations resulting from AI could have the same outcome. The same participant noted, however, that no one collects the data needed to measure the impact that AI or other types of advanced automation may have on the economy. According to another participant, whatever effect AI has on productivity in particular and the economy in general, the changes will occur quickly and be difficult to predict.

Improved or augmented human decision making. AI can be used to gather an enormous amount of data and information from multiple locations, characterize the normal operation of a system, and detect abnormalities much faster than humans can. In addition, AI could be used to create data-informed policy that may help prevent inappropriate or harmful human bias, whether from political pressure or other factors, from creating undesirable results, according to one participant. However, as another participant at the forum noted, AI is no guarantee of freedom from bias. This participant stressed that if the data being used by AI are biased, the results will be biased as well. AI can help prevent inappropriate or harmful human bias, according to this same participant, if it is carefully used, if the assumptions of the models are thoughtfully considered, and, most importantly, if the outputs of the model are constantly and closely verified (a minimal form of such verification is sketched at the end of this subsection).

Insights into complex and pressing problems. Some of the participants at our forum believed that AI has the potential to provide insights into, and even help solve, some of the world's most complex and pressing problems. For example, one participant stated that as the number of elderly Americans continues to grow, AI could be used to provide medication management, mobility support, housework, meal preparation, and rehabilitation services to a growing number of people who need assistance with day-to-day activities. Other complex and pressing problems may eventually be addressed through the adoption of AI as well. According to one participant, AI could eventually be used to assure regulatory compliance in the financial sector without unnecessary burden on those being regulated.
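The verification that participants called for can take many forms; the minimal sketch below shows one of the simplest, comparing a model's selection rates across groups. It is not drawn from our assessment, the groups, predictions, and tolerance are hypothetical, and real reviews would use far richer statistics.

```python
# Illustrative sketch only: a minimal audit that verifies model outputs by
# comparing selection rates across groups. The groups, predictions, and
# tolerance are hypothetical; real reviews would use richer statistics.
from collections import defaultdict

def selection_rates(records: list[tuple[str, int]]) -> dict[str, float]:
    """records: (group, model_decision) pairs, where decision 1 = selected."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        selected[group] += decision
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparity(rates: dict[str, float], tolerance: float = 0.2) -> bool:
    """Flag the model for review if selection rates differ by more than the tolerance."""
    return max(rates.values()) - min(rates.values()) > tolerance

if __name__ == "__main__":
    outputs = [("group_a", 1), ("group_a", 1), ("group_a", 0),
               ("group_b", 0), ("group_b", 0), ("group_b", 1)]
    rates = selection_rates(outputs)
    print(rates)                  # group_a ~0.67, group_b ~0.33
    print(flag_disparity(rates))  # True: the gap exceeds the 0.2 tolerance
```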
Challenges Identified by Forum Participants

Barriers to collecting and sharing data. While not all applications of AI require massive amounts of data, certain applications that use machine-learning algorithms do (see Scott W. Bauguess, "The Role of Big Data, Machine Learning, and AI in Assessing Risks: A Regulatory Perspective," U.S. Securities and Exchange Commission, keynote address to OpRisk North America 2017, New York, New York, June 21, 2017). This can be a problem in sectors where data are not easily aggregated or interpreted or are not readily available. Such is the case in criminal justice, where the ways data are collected and organized vary from jurisdiction to jurisdiction. It is also true for many vulnerable populations and developing countries, where data have not yet been collected.

Lack of access to adequate computing resources and requisite human capital. Forum participants told us that AI researchers and developers need access to data storage and processing capacity, both of which are expensive and sometimes difficult to obtain at the necessary scale. Some forum participants also shared concerns that the accelerated pace of change associated with AI is straining the education and workforce systems' capacity to train and hire individuals with the appropriate skill sets, leaving many companies struggling to find workers with relevant knowledge, skills, and training.

Adequacy of current laws and regulations. The widespread adoption of AI may, according to some forum participants, have implications for the adequacy of current laws and regulations. For example, one participant noted that current patent and copyright laws provide only limited protection for software and business methods and questioned whether these laws will protect the products created by AI. At the same time, one forum participant raised concerns about ways in which AI could be used to violate civil rights. This participant cautioned, for example, that if law enforcement considers race, class, or gender in AI that is used to assess risk, a defendant's equal protection rights under the 14th Amendment may be violated, as well as the defendant's due process rights under the 5th and 14th Amendments.

Ethical framework for, explainability of, and acceptance of AI. The adoption of AI also introduces ethical implications. According to a forum participant, there is a need for a system of computational ethics to help AI choose options that reflect agreed-upon values. Moreover, some of the participants at the forum noted that before humans will understand, appropriately trust, and be able to effectively manage AI, an AI application or system needs to explain why it took certain actions and why it valued certain variables more than others.
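The kind of explanation participants described can be illustrated with the minimal sketch below, which reports each variable's contribution to a single decision made by a simple linear scoring model. It is not drawn from our assessment; the model, feature names, weights, and threshold are hypothetical, and it stands in for far more sophisticated explainability techniques.

```python
# Illustrative sketch only: for a simple linear scoring model, report each
# variable's contribution to one decision, a bare-bones version of the
# "explain why it valued certain variables" behavior participants described.
# The model, feature names, weights, and threshold are hypothetical.

WEIGHTS = {"prior_incidents": 0.8, "years_of_experience": -0.3, "training_hours": -0.1}
THRESHOLD = 0.5

def score_and_explain(features: dict[str, float]) -> tuple[bool, dict[str, float]]:
    """Return the decision and each feature's contribution to the score."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    return total > THRESHOLD, contributions

if __name__ == "__main__":
    decision, explanation = score_and_explain(
        {"prior_incidents": 2, "years_of_experience": 4, "training_hours": 10}
    )
    print(decision)  # False: 1.6 - 1.2 - 1.0 = -0.6, below the 0.5 threshold
    # List the most influential variables first.
    for name, value in sorted(explanation.items(), key=lambda kv: abs(kv[1]), reverse=True):
        print(f"{name}: {value:+.1f}")
```

Even this toy example shows the shape of what participants asked for: alongside the decision itself, the user sees which variables drove it and in which direction.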
Forum Participants Identified Several Cross-Cutting Policy Considerations Related to AI and Several Areas Where More Research Is Needed

After discussing the benefits and challenges associated with AI, the participants at the forum we convened for our March 2018 technology assessment highlighted a number of policy considerations and areas for future research (see fig. 2).

Figure 2: Implications of Artificial Intelligence (AI) for Policy and Research

Incentivizing data sharing. Forum participants emphasized the need to establish a "safe space" to protect sensitive information (e.g., intellectual property and brand information) while sharing data. One participant cautioned that for such a safe space to succeed, it will need to start with a few manufacturers and clearly define the data that are needed and the specific scenarios in which the data will be used. Certain forum participants also expressed concerns that many potentially useful data are guarded by federal agencies that do not provide access to researchers. Participants noted successful data-sharing efforts through entities such as MITRE and the National Institute of Standards and Technology (NIST). In particular, some participants highlighted data-sharing efforts to improve safety outcomes. For instance, one participant mentioned that researchers at MITRE had credited data-sharing efforts in the aviation industry (employing a safe space) with reducing the number of accidents. Another participant emphasized the importance of sharing data to better understand safety outcomes associated with automated vehicles, stating, "[i]f we're going to trust that these vehicles can go out on the road, we need to verify that, in fact, out on the road, they are as safe as we think they are."

Forum participants highlighted other proposed data-sharing efforts, citing the benefits of assessing data from multiple sources to improve outcomes. According to one forum participant, the National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence is working collaboratively among federal departments and agencies to promote the sharing of government data to help develop innovative solutions for social good. This sharing may include creating training environments, or safe spaces, in which sensitive data are protected, among other things. Another participant noted that in the criminal-justice sector, the federal system could be used as a test bed for various reforms, including data-sharing reforms, because the federal system is unified. This participant argued that if the federal system could find a way to share data related to risk assessments and other areas, and could show that the data are being used in an evenhanded way, the reforms pioneered by the federal system would likely migrate down to the individual state systems. The same participant also stated that the Bureau of Justice Assistance and the Bureau of Justice Statistics may be best positioned to initiate any nationwide data standardization and collection projects.

Improving safety and security. Participants highlighted challenges and opportunities in protecting system applications, including those with AI features, from cyberattacks. One participant said that the costs of cybersecurity in all forms of network computing are not being shared appropriately and that security breaches are much costlier than the security measures needed to prevent them. This participant said that policymakers will need to consider creating a framework that ensures that costs and liabilities are appropriately shared between manufacturers and users. In addition, two participants said that policymakers should consider creating a new regulatory structure to better ensure the safety of automated vehicles.

Updating the regulatory approach. The widespread adoption of AI will have implications for regulators, and lawmakers will need to consider policy options to address these issues, according to multiple forum participants. One participant reinforced the need for regulators to be proactive, including through a commitment of resources, because change is occurring rapidly and in unanticipated ways. For example, one participant explained that, as a policy matter going forward, a new regulatory structure for automated vehicles needs to evolve and that, accordingly, the federal government should avoid setting standards prematurely. A related issue raised by a participant concerned how liability for automated vehicles would be regulated. Currently, according to this participant, the manufacturer of the automated vehicle bears all responsibility for crashes, even if these vehicles improve overall public safety.
Some of the participants at the forum also raised concerns about privacy, including ways in which AI could be used by law-enforcement agencies to violate civil liberties, and said that this is an area that needs policy solutions. In addition, one forum participant said that policymakers should consider allowing financial regulators to explore alternative regulatory approaches and reporting mechanisms, leveraging technology to improve regulation and reduce its burden. In this regard, one participant discussed the merits of "regtech," that is, linking regulation with technology. Another participant noted that other laws and regulations may need to be adapted to account for the fact that humans may not always be behind decisions made by automated systems. For example, this participant discussed laws in which intent plays a key role, as is the case in financial market manipulation. If someone programs AI to make money and it does so in a nefarious way, it is not clear how current laws could be used to prosecute the creator of the AI.

Assessing acceptable risks and ethical decision making. Policymakers need to decide how they are going to measure, or benchmark, the performance of AI and assess the trade-offs, according to one participant, who stressed that the "baseline" is current practice, not perfection (i.e., how humans are performing now, absent AI). As this participant emphasized, "[i]f we have to benchmark [AI] against perfection…the perfect will be the enemy of the good and we get nowhere." Several participants at the forum emphasized that such regulatory questions should be resolved by a variety of stakeholders, including economists, legal scholars, philosophers, and others involved in policy formulation and decision making, not solely scientists and statisticians.

Participants at our AI forum also highlighted several areas they believe deserve more research: new regulatory frameworks, data labeling, employment and education, and explainable AI and computational ethics.

Establishing regulatory sandboxes. In finance there is a worldwide movement to create so-called regulatory sandboxes, according to one participant, in which regulators can begin experimenting on a small scale and empirically testing new ideas. As this participant explained, regulatory sandboxes would provide a safe haven for assessing the results of alternative regulatory approaches.

Developing high-quality labeled data. One participant emphasized the importance of data collection and of obtaining high-quality labeled data, which includes improving the quality of the data as they are collected. Another participant we spoke with highlighted the merits of developing adequately labeled data sets. As data become more comprehensive and are organized, or labeled, in a manner that facilitates machine learning, AI tools can produce more accurate outcomes.

Understanding AI's effect on employment and reimagining training and education. Forum participants offered mixed views concerning AI's impact on employment, while acknowledging the uncertainties. For instance, some forum participants noted that job losses in some areas are likely, while also noting the potential for job gains in other areas. One participant advocated for research to better understand how jobs have been changing.
There is currently no comprehensive federal data source with information on the employment effects AI may have in manufacturing and other segments of the economy. Further, according to two participants, in the absence of a comprehensive data-collection effort, it is unclear which jobs will be created by AI, which jobs may be augmented, and which jobs are likely to be displaced. The widespread adoption of AI also brings with it a need to reevaluate and reimagine training and education, according to some of the participants.

Exploring computational ethics and explainable AI. According to one participant, we will have to design systems that will operate in environments where we cannot anticipate in advance all the things that could go wrong. Explainable AI and computational ethics are relevant wherever AI systems interact with the physical world. With respect to computational ethics, AI researchers have begun establishing rules of their own. For example, some groups of technologists have created sets of ethical considerations (see Hila Mehr, "Artificial Intelligence for Citizen Services and Government," Ash Center for Democratic Governance and Innovation, Harvard Kennedy School, August 2017). In addition, researchers from six institutions recently formed a group called PERVADE (Pervasive Data Ethics for Computational Research), whose mission is to develop a clearer ethical process for big-data research for use by both universities and private companies. However, as one participant noted, current and future developers of AI systems may operate by ethical standards or adhere to morals or values that are not compatible with the rest of society or not representative of those who will use the AI.

In conclusion, in our March 2018 technology assessment, we noted that AI technologies are already affecting a wide array of economic sectors. The assessment provides an overview of developments in the field of AI, focusing on the challenges, opportunities, and implications of these developments for policymaking and research; it helps clarify the prospects for the near-term future of AI and identifies areas where changes in policy and research may be needed.

Chairwoman Comstock, Chairman Weber, and Ranking Members Lipinski and Veasey, this concludes my statement. I would be pleased to respond to any questions you or other Members may have.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Timothy Persons at (202) 512-6522 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Stephen Sanford (Assistant Director), Virginia Chanley (Analyst-in-Charge), and David Chrisinger. Key contributors to the prior work on which this testimony is based are listed in that product.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO's Mission

The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.

Obtaining Copies of GAO Reports and Testimony

The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's website (https://www.gao.gov). Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to https://www.gao.gov and select "E-mail Updates."

Order by Phone

The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, https://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information.

Connect with GAO

Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at https://www.gao.gov.

To Report Fraud, Waste, and Abuse in Federal Programs

Website: https://www.gao.gov/fraudnet/fraudnet.htm. Automated answering system: (800) 424-5454 or (202) 512-7470.

Congressional Relations

Orice Williams Brown, Managing Director, WilliamsO@gao.gov, (202) 512-4400, U.S. Government Accountability Office, 441 G Street NW, Room 7125, Washington, DC 20548

Public Affairs

Chuck Young, Managing Director, youngc1@gao.gov, (202) 512-4800, U.S. Government Accountability Office, 441 G Street NW, Room 7149, Washington, DC 20548

Strategic Planning and External Liaison

James-Christian Blockwood, Managing Director, spel@gao.gov, (202) 512-4707, U.S. Government Accountability Office, 441 G Street NW, Room 7814, Washington, DC 20548

Please Print on Recycled Paper.