LITIGATING ALGORITHMS 2019 US REPORT: New Challenges to Government Use of Algorithmic Decision Systems

Rashida Richardson, AI Now Institute, New York University
Jason M. Schultz, NYU School of Law; AI Now Institute, New York University
Vincent M. Southerland, Center on Race, Inequality, and the Law, NYU School of Law; AI Now Institute, New York University

SEPTEMBER 2019

Cite as: Rashida Richardson, Jason M. Schultz, & Vincent M. Southerland, Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems (AI Now Institute, September 2019), https://ainowinstitute.org/litigatingalgorithms-2019-us.html.

CONTENTS

WORKSHOP SUMMARY
KEY RECOMMENDATIONS
SESSION 1: YOU WON! NOW WHAT?
    Recommendations
SESSION 2: CRIMINAL DEFENSE ACCESS TO LAW ENFORCEMENT ADS
    Recommendations
SESSION 3: PUBLIC BENEFITS AND COLLATERAL CONSEQUENCES
    Collateral Consequences and Algorithmic Systems
    Perspectives from the EU
SESSION 4: ILLINOIS'S BIOMETRIC PRIVACY APPROACH
    Recommendations
APPENDIX: CASES DISCUSSED BY SESSION
    Session 1
    Session 2
    Session 3
    Session 3.5
    Session 4

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

WORKSHOP SUMMARY

Algorithmic decision systems (ADS) are often touted for their putative benefits: mitigating human bias and error, and offering the promise of cost efficiency, accuracy, and reliability. Yet within health care, criminal justice, education, employment, and other areas, the implementation of these technologies has resulted in numerous problems. In our 2018 Litigating Algorithms Report, we documented outcomes and insights from the first wave of US lawsuits brought against government use of ADS, highlighting key legal and technical issues they raised and how courts were learning to address the substantive and procedural problems they create.

In June of 2019, with support from The John D. and Catherine T. MacArthur Foundation, AI Now and NYU Law's Center on Race, Inequality, and the Law held our second Litigating Algorithms Workshop.1 We revisited several of last year's cases, examining what progress, if any, had been made. We also examined a new wave of legal challenges that raise significant questions about (1) what access, if any, criminal defense attorneys should have to law enforcement ADS in order to challenge allegations leveled by the prosecution; (2) the collateral consequences of erroneous or vindictive uses of governmental ADS; and (3) the evolution of America's most powerful biometric privacy law and its potential impact on ADS accountability. As with the previous year's Litigating Algorithms Workshop, participants shared litigation strategies, raised provocative questions, and recounted key moments from both their victories and losses.
Workshop attendees came from various advocacy, policy, and research communities, including the ACLU, Center for Civil Justice, Center for Constitutional Rights, Center on Privacy and Technology at Georgetown Law, Citizen Lab, Digital Freedom Fund, Disability Rights Oregon, the Electronic Frontier Foundation, Equal Justice Under Law, Federal Defenders of New York, the Ford Foundation, LatinoJustice PRLDEF, Legal Aid of Arkansas, Legal Aid Society of New York, the MacArthur Foundation, NAACP Legal Defense and Educational Fund, National Association of Criminal Defense Lawyers, National Employment Law Project, National Health Law Program, New York County Defender Services, Philadelphia Legal Assistance, Princeton University, Social Science Research Council, Bronx Defenders, UDC Law School, Upturn, and Yale Law School.

1 The authors wish to thank Alejandro Calcaño Bertorelli, Roel Dobbe, Danisha Edwards, Genevieve Fried, Casey Gollan, Eli Hadley, Sarah Hamilton-Jiang, Jeffrey Kim, Zachary Mason, and Varoon Mathur for their outstanding support and assistance with this project.

KEY RECOMMENDATIONS

1. Assess the success of litigation by measuring structural change within government agencies and their programs, rather than through isolated or narrow changes to specific ADS.
2. Consider litigation as a rallying point and megaphone to amplify the voices of those personally impacted by ADS.
3. Track "bad actor" ADS vendors who migrate their defective systems to new agencies or areas of focus.
4. Enact open-file discovery policies for criminal cases involving ADS.
5. Update criminal discovery rules to treat ADS as accessible, contestable, and admissible evidence.
6. Develop ADS trainings and toolkits for the criminal defense community.
7. Expand prosecutorial understanding of the Supreme Court's Brady Rule to include ADS evidence.
8. Follow the lead of Illinois by adopting statutes similar to its Biometric Information Privacy Act (BIPA), in light of its effectiveness as an algorithmic accountability framework.
9. BIPA-style statutes should ensure a private right of action, standing to sue for non-consensual or non-specific data collection or use, and statutory fines for each violation.
10. BIPA-style laws should add strong prohibitions on government use of biometrics and impose liability on vendors who assist or provide the capacity for violations.

SESSION 1: YOU WON! NOW WHAT?

Primary Presenters
Ritchie Eppink, Legal Director, ACLU of Idaho
Kevin De Liban, Economic Justice Practice Group Leader, Legal Aid of Arkansas
Gordon Magella, Staff Attorney, Disability Rights Oregon
Elizabeth Edwards, National Health Law Program
Vincent M. Southerland, Executive Director, Center on Race, Inequality, and the Law, NYU School of Law; Area Lead, AI Now Institute
Audrey Amrein-Beardsley, Professor, Arizona State University

MODERATOR: Jason M. Schultz, Professor of Clinical Law, NYU School of Law; Area Lead, AI Now Institute

Access Key Litigation Documents Here

Session Summary

In this session, we revisited several case studies examined in last year's report involving litigation over disability rights and Medicaid benefits, public school teacher employment, and juvenile criminal risk assessment.
Although victory had been achieved in each case, we wanted to explore some lingering questions:

• For cases in which specific ADS were shut down, did this achieve a better or worse outcome for the affected communities and populations?
• What does victory look like in the long term, and how do we measure progress?
• What is the future likely to hold?

K.W. ex rel. D.W. v. Armstrong

Our first presenter was Ritchie Eppink. He and his team litigated against the State of Idaho over its Medicaid program for adults with intellectual and developmental disabilities. Previously, Idaho had used an in-house formula to determine the "dollar value" of the disability services available to each qualifying individual. Eight years ago, a significant number of people's "dollar-figure numbers" dropped. They contacted the ACLU; when pressed, the State told the ACLU that an ADS formula—which it considered a "trade secret"—had caused the numbers to drop.

In subsequent litigation, the Court ordered the State to disclose its formula. During the discovery process, the ACLU worked on figuring out how the State built the ADS formula, and won the merits issues in the case on summary judgment. The Court (1) found that the State's formula was unconstitutionally arbitrary; (2) ordered the State to fix the formula so that it allocated funds fairly to recipients; (3) ordered the State to test the formula regularly; and (4) ordered the State to develop a system ensuring that those impacted by the formula had sufficient support and assistance with the appeals process (a "suitable representative").

The case was ultimately settled. In the agreement, a road map emerged for how the State would fix the formula and provide a suitable representative for individual recipients. Specifically, during the period in which the State developed its new formula and processes, benefit recipients would receive the dollar amount at the highest level provided by the existing tool. The State also agreed to pay plaintiffs' counsel to train attorneys across the state on how to handle appeals, as well as to pay the attorneys handling those appeals. A key aspect of the victory was ensuring that the interim status quo for recipients left no gap in coverage or due process rights. The settlement also held the State equally responsible for future implementations of algorithms or formulas.

The next key phase was fixing the formula. From this, two challenges emerged. First, it was clear the State was unlikely to fix it alone. The previous in-house formula had also been shown to be constitutionally deficient, and the likelihood of a new in-house formula improving the situation was remote. Instead, the State looked to hire an outside firm with experience using validated and tested formulas for benefits determinations. Requiring the State to pay for this level of expertise brought forth the true cost of accountability, and highlighted the fact that ADS implementations should not be left to amateur personnel. The second (and perhaps most important) factor was the settlement agreement's requirement that the State continuously engage in dialogue with the affected population of benefits recipients. Specifically, the State was required to provide information and updates on progress, solicit feedback, and incorporate it. These processes have been ongoing for nearly two and a half years.
Perhaps unsurprisingly, the impact of the continuous community-engagement process proved far more profound than the impact of fixes to the formula itself. As the State began to engage with individuals within the program, departmental staff saw problems not only with the ADS, but with many aspects of the program overall. This resulted in a report with several recommendations for reform, only a few of which directly implicated the ADS themselves. These included changes to the services provided, rates charged, and other fundamental structural aspects of the State's Medicaid program. As a result, the State approached class counsel to request an additional 5 years under the settlement timeline to implement some of the changes. Class counsel offered to accept the extended time frame if the State would immediately agree to additional concessions, such as the guarantee that new program participants would receive the same levels of service and support for appeals as existing participants. The State rejected that proposal, leaving the parties to continue litigating the exact scope and time frame for fixing the formula under the Court's order.

Eppink said that in the years since the litigation began, he has reflected on various aspects of the case with his staff, including whether the best strategy was to challenge individual and procedural aspects of the program (for example, how transparent and robust the formula should be) or whether they should have challenged the entire practice of using automated decision-making for Medicaid benefits programs. The struggle with this question arose because the automated decisions had both substantial positive and negative impacts on different populations. In some instances, the ADS overcame the bias of human decision-making, and in others reinforced it. Sometimes they increased recipient benefits, and sometimes they dramatically cut them. All of this made it difficult to isolate the role of ADS from the culture and personnel of the State department administering them.

Ark. Dep't of Human Servs. v. Ledgerwood

Next we heard an update from Kevin De Liban, whose case was quite similar to Eppink's but smaller in scope because it was not a class action. It focused on community-based services for low-income Arkansans with physical disabilities. As noted in the summary from last year, prior to 2016, the Arkansas program had a nurse assess each individual beneficiary and recommend a certain number of hours of care (up to a maximum of 8 hours per day). In 2016, without any notice, the State introduced an algorithm that drastically reduced care; now the best-case scenario for most recipients was 5 hours a day. This abrupt change led to serious problems. Recipients were no longer receiving the care they desperately needed, sometimes leaving them with bedsores or lying in their own waste. Legal Aid challenged the algorithm, and won after nearly 3 years of litigation in both federal and state courts on different causes of action. Ultimately, the algorithm was invalidated.

In October 2018, Arkansas announced that it would need a new assessment tool and planned to return to the old algorithm for two months as an emergency stopgap. This provoked a fight over how to design the new ADS. Again, the state agency began developing the system without any public transparency.
In fact, the State was so concerned about bad publicity and backlash that it shifted away from using the word "algorithm." Instead, its language referred to the system having "tiering logic" or "eligibility criteria." Even with this rhetorical shift, the State withheld its new "tiering logic" from the required notice of its administrative rulemaking process.

Yet even as the state agency resisted pressure from concerned communities, the State Legislature responded to those concerns. Lawmakers pressed the agency for assurance that the new system would not have the same consequences as the previous system the courts had struck down. When the agency could not promise better outcomes, the Legislature approved the system anyway, but promised it would receive heightened scrutiny.

Undeterred, the state agency launched the new system with some improvements. New assessments were put back into the hands of nurses, who were allowed modest discretion about the number of hours of patient care. The state agency also "grandfathered" in clients who had higher-severity disability levels to lessen the chances that their care would be cut under the new system. But new problems emerged with the administration of the assessment—again highlighting that accountability issues with the administration of the benefits program cannot be isolated to the ADS themselves, but instead must include the culture and personnel of the agencies implementing them. Under the latest system, about 30% of the people on the program were determined ineligible upon their latest assessment. These terminations happened even though the eligibility criteria had not changed—only the assessment tool used to measure them had—and the clients' conditions had not improved.

A promising development has been the State Legislature's engagement. In the wake of new mass terminations, the Legislature has been more proactive than it was during the 2016-2018 algorithm-related cuts. In response to widespread complaints, the Legislature has ordered the new system and vendor contracts to be reviewed. In the two legislative hearings held thus far, several legislators have asked that the vendor contracts be cancelled and that a new process be implemented to determine eligibility. These hearings will continue, and a group led by people with disabilities has moved to incorporate, so that people directly impacted can lead the discussion.

Other lessons were learned. First, because the lawsuit was not a class action and could not provide relief to every injured benefit recipient, the advocacy strategy included substantial media and public education components. That strategy spread information widely, providing the public with a detailed understanding of the issue that could then turn into political pressure. Another lesson was to be wary of overreliance on appeals processes as safeguards. While the right to appeal any state determination is an important one, substantial evidence showed that even when proper appeals were filed, the State would fail to preserve the number of care hours previously provided pending the outcome of the appeal, a requirement of due process protections. Legal Aid is suing to challenge the failures of the appeals process. Meanwhile, the state agency has been using the appeals process to resist oversight and reform, claiming that standard appeals are enough to correct any errors.

C.S. v. Saiki

We also heard from Gordon Magella of Disability Rights Oregon (DRO).
In April 2017, DRO filed a lawsuit, similar to that of the ACLU of Idaho, over sudden cuts to Oregonians' disability benefits made with no notice or explanation. In the investigation and litigation process, DRO discovered that the reduction was due to the State hard-coding a 30% across-the-board reduction of hours into its algorithmic assessment tool. Faced with the Idaho precedent and no legal justification for the reduction, the State quickly accepted a preliminary injunction that restored all recipients' hours to their prior levels, and agreed to use the previous version of the assessment tool going forward.

Yet much like the cases in Idaho and Arkansas, the injunction in Oregon failed to resolve some larger issues. There was some evidence that the State was aware of the assessment tool's flaws, but implemented it nonetheless in response to political pressure to cut costs. Thus came an opportunity to develop a better tool, which has undergone two validity phases with two experienced contractors. Unlike the old tool, which mandated a specific number of hours of care, the new tool reportedly has more flexibility to work with case managers in deciding the number of hours and service groups for a given individual. The consequences of the new tool for recipients' hours and care are unknown, however, as it is still in development.

Magella noted that DRO and its clients were fortunate that the state agency takes these issues seriously, and that Oregon has an active advocate community and a progressive legislature. Although budget issues are challenging, and administrative agencies are hesitant to take risks, things have gone well overall since the suit was filed and the preliminary injunction granted. It remains to be seen what service levels individuals will receive under the new tool, and whether the State and the disability community can reach a consensus on the best approach. Also, beyond the due process issues with the tool's ability to give notice and explanations of its decisions, Magella noted that ensuring recipients receive their legally entitled levels of care remains an issue in the case. In the future, it may be necessary to litigate individual claims under Title II of the ADA and Section 504 of the Rehabilitation Act regarding the right of individuals with disabilities to live in the most integrated setting appropriate to their needs (often called Olmstead claims).

Other Cases throughout the United States

Elizabeth Edwards from the National Health Law Program added that these trends are consistent with other cases across the United States, including in North Carolina, West Virginia, and Florida, where plaintiffs have focused more on due process and notice issues related to the use of ADS tools, and less on the substantive question of whether ADS tools could (or should) be used fairly to decide benefits. In some of the recent cases, judges seemed disinclined to examine the tools themselves or the technology issues involved, and instead focused primarily on the due process issues. However, the due process issues largely revolve around the notices and information available, and do not address the related issue of incorrect decisions that discourage individuals from appeals and keep their benefits and care hours low.
Edwards noted that case managers, state employees, and others have come to rely so heavily on algorithmic tools that a tool's output is often given greater weight than the individual's actual needs.

DC Juvenile Court Risk Assessment Case

Next, Vincent Southerland updated attendees about a case the Public Defender Service (PDS) of Washington, DC litigated last year. As our 2018 report highlighted, PDS lawyers challenged the use of the Structured Assessment of Violence and Risk in Youth (SAVRY) risk assessment tool in a criminal juvenile sentencing proceeding. The defendant in the case was a young person who pleaded guilty to the charge of robbery due, in part, to the promise that he would be given probation. He engaged in perfect conduct throughout the presentencing period. However, prior to sentencing, the SAVRY assessment results came back reporting that he was deemed to be "high risk" for violence and should be incarcerated. Probation was no longer an option.

The defense lawyers decided to challenge the SAVRY assessment under the Daubert line of cases, which require a certain level of scientific robustness for evidence presented in court, including proof of foundational validity and reliable application. While investigating SAVRY, defense counsel discovered significant racial disparities in the risk factors used, such as "community disorganization" (whether a neighborhood is "high crime"); "parental criminality"; and a history of violent or nonviolent offending, calculated by the number of police contacts. Many of these factors track how heavily a community is targeted by law enforcement, and operate to presume that children from those communities rank higher on the scale of violence. Defense counsel also found problems with the SAVRY validation studies: there were only two, one of them an unpublished master's thesis and the other more than two decades old.

In winning their argument, defense counsel convinced the judge to disallow the use of SAVRY in the specific case before him. However, because he was not aware of any other successful challenge to SAVRY, the judge allowed only an as-applied challenge in this particular case, and would not extend his ruling to the use of SAVRY generally in all DC juvenile cases. That meant the victory was fleeting—and, unfortunately, the DC court continues to use the SAVRY test. Also, because DC's juvenile court has a regular judicial rotation, the judge who made this limited ruling has now rotated off the court. Current judges have rejected the precedent of the prior ruling and have made a practice of ordering the SAVRY assessment over the objections of defense counsel, stating that since they have access to the same underlying information as the SAVRY evaluator, they will allow SAVRY to be admitted and give it "appropriate weight" (an approach similar to the Wisconsin Supreme Court's decision in the Loomis case). Future defense counsel may press harder, but given the current composition of the court, last year's victory may end up limited to a historic note.

Houston Federation of Teachers v. Houston Independent School District

Finally, we heard from Audrey Amrein-Beardsley, who served as one of the primary expert witnesses for the Houston Federation of Teachers in their lawsuit against the Houston Independent School District. Presented last year, the case concerned the rights of public-school teachers to challenge the use of algorithmic assessment tools in employment evaluations.
Amrein-Beardsley summarized the case before providing a broader overview of the history behind these assessment tools and algorithmic systems, and the challenges that lie ahead for public-school teachers. After the George W. Bush administration implemented the No Child Left Behind Act, various companies, including SAS Institute Inc., began lobbying the federal government to use algorithms to assess teacher performance based on student test scores. This yielded, among others, SAS' Education Value-Added Assessment System (EVAAS), which purportedly "holds teachers accountable" for their causal impact on student test scores over time. SAS' marketing pushed the logic that if teachers' EVAAS scores go up, there is growth, and therefore the teacher is considered value-added; if scores go down, there is "decay," and therefore the teacher is deemed value-detracted.

During the Obama administration, the US Department of Education instituted the "Race to the Top" program, which provided grants to schools based on achieving certain metrics. It used Value-Added Model (VAM) systems and test-based accountability for reforming schools. States that adopted VAMs and enforced VAM approaches through merit pay, teacher termination, and rewarding (and revoking) tenure received more grant money. The states that proved most extreme in their approach received the most money.

This led to 15 lawsuits, primarily across the Sun Belt, in Tennessee, and in New York, with Amrein-Beardsley serving as an expert witness in nearly half a dozen of them. Issues addressed in these cases included:

• Reliability: Is the VAM measurement consistent over time?
• Validity: Does the VAM measurement reflect reality or truth?
• Bias: Are teachers' VAM scores inaccurate or invalid due to the types of students (e.g., racial minority, low income, English language learners, special education) they teach?
• Fairness: Given that only certain subjects (such as math and reading) are tested, can teachers who cover other subjects be assessed (or exempted from assessment) in a fair or "uniform" way?
• Transparency: Is the VAM model, which is based on complicated statistics and computer algorithms, sufficiently transparent and easy to understand to permit teachers to use their value-added data to improve their quantified levels of effectiveness?

In the Houston, Texas, case, the transparency of the EVAAS was central. The assumption underlying the system was that teachers could understand the data, use it to improve instruction, and replicate their own scores. But there was never any testing or verification as to fairness or accuracy. Transparency is even more complicated because SAS claimed its software implementation of EVAAS is a trade secret. As noted in last year's report, the teachers used this fact to win the case on procedural due process grounds, showing that SAS's secrecy essentially denied them the right to act on their data, use it in formative ways, or even understand it.

According to Amrein-Beardsley, the good news is that, when challenged, most VAM assessments have been defeated. And in December 2015, the federal government passed the Every Student Succeeds Act, which removed incentives to use VAMs and prohibited the federal government from forcing school districts to adopt them. Nonetheless, SAS has taken the VAM internationally, and is working with the World Bank to market it to countries across the Global South.
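To make the "growth versus decay" logic described above concrete, the following is a deliberately minimal sketch of the value-added intuition that VAM systems build on. It is emphatically not SAS's EVAAS, whose actual model is proprietary and far more statistically elaborate; every teacher, student, and score here is hypothetical.

```python
# Minimal sketch of the value-added intuition behind VAM-style systems.
# Real systems (e.g., EVAAS) are proprietary, longitudinal, and far more
# complex; all names and numbers below are invented for illustration.

from statistics import mean

# (teacher, prior_year_score, current_year_score) for a handful of students
scores = [
    ("Teacher A", 60, 68), ("Teacher A", 72, 79), ("Teacher A", 55, 64),
    ("Teacher B", 61, 63), ("Teacher B", 70, 71), ("Teacher B", 58, 61),
]

# Baseline expectation: every student is predicted to gain the average
# gain observed across all students (a stand-in for the statistical
# model a real VAM would fit).
avg_gain = mean(curr - prior for _, prior, curr in scores)

def value_added(teacher: str) -> float:
    """Mean of (actual - predicted) current-year scores for one teacher."""
    residuals = [
        curr - (prior + avg_gain)
        for t, prior, curr in scores
        if t == teacher
    ]
    return mean(residuals)

for t in ("Teacher A", "Teacher B"):
    # Positive = "growth"/value-added; negative = "decay"/value-detracted.
    print(t, round(value_added(t), 2))
```

Even this toy version hints at the reliability and bias questions listed above: the "value-added" number is simply a mean of residuals, so a teacher's rating can swing sharply with a small class or an unusual mix of students, none of which the score itself reveals.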
Discussion

After these initial presentations, workshop participants raised a number of key points about the lessons learned from these cases. Some asked whether theories under the Administrative Procedure Act (or state equivalents) could be an avenue for bringing additional challenges to inadequate notice and comment processes involving ADS. Presenters noted that some cases had analogized to these theories and, when successful, did help generate large opportunities for public commentary.

One attendee asked how states were approaching the maintenance of ADS, pointing out that even if problems were fixed during or after deployment, others might arise later. Ongoing testing and assessment are essential but often not built into budgets, which allows these systems to be approved, falsely, as "cost-saving." Amrein-Beardsley built on this point, noting that any time government agencies reference "internal" audits, tests, or assessments, advocates should immediately regard them as red flags, and push for external testing and validation of internal metrics.

Another participant asked more fundamentally whether algorithms have ever been good for poor people, and whether there are any concrete positive examples worth noting. In response, one participant suggested looking to Pennsylvania's Clean Slate legislation, which automatically seals the records of people with misdemeanors who incur no further convictions for 10 years, or who have been arrested but not convicted. After a large-scale effort by civil society and others to help implement the bill, more than 30 million records were sealed, allowing these people to pursue economic opportunities without the stigma of a criminal conviction.

Session 1 Recommendations

• Assess the success of litigation by measuring structural change within government agencies and their programs, rather than by isolated or narrow changes to specific ADS. Defining success in cases involving ADS is challenging. The issues being litigated are often entangled with the structural and systemic problems of government agencies. Litigation, at best, has been effective in pausing demonstrably bad ideas. Litigation can continue as an effective mitigation mechanism, provided the right stakeholders are engaged and each performs their functions effectively—but a successful ADS advocacy strategy must include coalition work with community organizing, media campaigns, and advocacy across all stakeholders, including all branches of government. Success also requires a true accounting of the costs of ongoing professional ADS validation and testing to ensure compliance with regulations and settlements. Stakeholders should include such accounting in their criteria for milestones and remedies.

• Consider litigation as a rallying point and megaphone to amplify the voices of those personally impacted by ADS. When plaintiffs who have been harmed by ADS tell their stories, the public conversation focuses on specific and concrete impacts and provides evidence and event-driven narratives.

• Track "bad actor" ADS vendors who migrate their defective systems to new agencies or areas of focus. After suffering criticism and losses in a specific area of government service, such as education or criminal justice, some ADS vendors have migrated the marketing and sales of their systems to other areas of focus rather than addressing the fundamental concerns they have created.
One example is SAS, which, after essentially abandoning automated teacher evaluation services in the US, has now moved its focus internationally, to jurisdictions that have less expertise and fewer legal accountability avenues. Researchers and advocates monitoring these issues are encouraged to identify specific companies and document their histories, practices, and footprints to prevent them from evading appropriate scrutiny, oversight, and enforcement.

SESSION 2: CRIMINAL DEFENSE ACCESS TO LAW ENFORCEMENT ADS

Primary Presenters
Somil Trivedi, Senior Staff Attorney, ACLU Criminal Law Reform Project
Kevin Vogeltanz, Founder and Managing Member, Law Office of Kevin S. Vogeltanz
Andrew Ferguson, Professor of Law, UDC Law School
Cynthia Conti-Cook, Staff Attorney, Legal Aid Society

MODERATOR: Vincent M. Southerland, Executive Director, Center on Race, Inequality, and the Law at NYU School of Law; Area Lead, AI Now Institute

Access Key Litigation Documents Here

Session Summary

This session explored the intersection between prosecutors' obligations pursuant to the Supreme Court's decision in Brady v. Maryland and the use of algorithmic tools by law enforcement, including prosecutors and police officers. Brady imposes an affirmative constitutional duty on prosecutors to disclose to the defense exculpatory evidence material to the guilt, innocence, or punishment of the accused.2 Practically speaking, that means prosecutors must disclose any evidence to the defense that could cast doubt on the guilt of the accused or the punishment to be imposed. The advent of ADS in the criminal legal system has led to the production of new forms of evidence, much of which could qualify as Brady evidence. Against that backdrop, presenters described two cases that centered on Brady evidence produced by the use of algorithmic tools, explored broader concerns raised by the advent of data-driven prosecution and Brady, and presented a searchable police misconduct database that public defenders developed to counter efforts to relieve prosecutors of their Brady obligations.

2 373 U.S. 83 (1963).

Louisiana v. Hickerson

Kevin Vogeltanz presented the first case. It involved Kentrell Hickerson, who, following a trial, was convicted of what amounted to criminal conspiracy and other related charges, and was sentenced to 100 years in prison.

The central question in Mr. Hickerson's case was whether he was a member of a gang that had been responsible for several crimes. At the time of Mr. Hickerson's prosecution, law enforcement in the city of New Orleans used a risk-assessment database called Gotham, created by the company Palantir. The ostensible purpose of the program was to determine who among the city's population was likely to become a perpetrator or victim of gun violence. That information was to be used in a violence-intervention program, "NOLA For Life," in which identified individuals were warned by law enforcement of the potential consequences of their lifestyle, and were offered social services and other supports.3 The database created social-networking graphs based on aggregated information about the city's population, including individuals' ties and connections to other individuals.
Given the centrality of the question of Mr. Hickerson's relationships to other suspected gang members, the social-networking graphs the Gotham program produced could have proven dispositive on that key point. Despite the relevance of Gotham to Mr. Hickerson's case, he and his lawyers learned of its existence and use only after the media reported that Palantir had been operating in the city for 6 years with little public knowledge.4 In a motion for a new trial, Mr. Hickerson advanced the claim that he was entitled to the Gotham-produced materials pursuant to Brady, as they might have raised reasonable doubts with the jury. Mr. Hickerson's new trial motion was denied by the district court judge based on the prosecution's claim—contested by Mr. Hickerson—that Gotham played no role in Mr. Hickerson's case.

3 Emily Lane, Mayor, Police Chief to Face Subpoenas from Convicted Gang Member Over Palantir Claim, NOLA.com, Apr. 3, 2018, https://www.nola.com/news/crime_police/article_fa5949c4-a300-509d-90e8-2d7814f505f6.html. This program has many similarities to the Chicago Strategic Subjects List, which has been subject to extensive criticism.
4 Ali Winston, Palantir Has Secretly Been Using New Orleans to Test its Predictive Policing Technology, The Verge, Feb. 27, 2018, https://www.theverge.com/2018/2/27/17054740/palantir-predictive-policing-tool-new-orleans-nopd.

Lynch v. Florida

Somil Trivedi presented the Lynch case, which centered on the arrest, prosecution, and conviction of Willie Allen Lynch, who was accused of selling 50 dollars' worth of crack cocaine to undercover officers. One key aspect of the case was that the undercover officers could not identify the individual who sold drugs to them except by his nickname, "Midnight." One of the officers surreptitiously took a cell phone picture of the individual, which they forwarded to a crime analyst, along with the suspect's nickname and the location of the drug sale. The officers left the scene without making an arrest.

The crime analyst was unable to use the nickname or location to identify anyone through law enforcement databases. She then turned to the cell phone photo and uploaded it into a facial-recognition program called the Face Analysis Comparison Examination System (FACES), which draws from a database of more than 33 million driver's license and law enforcement photos.5 That search produced four possible suspects, along with Mr. Lynch. The quality of these matches was gauged by a star rating of unknown reliability. FACES assigned only one star to Mr. Lynch, and no stars to the other suspects. The analyst then selected Mr. Lynch from among the suspects returned by the software, and sent his identification information back to the officers, who promptly arrested him.

5 Aaron Mak, Facing Facts, Slate, Jan. 25, 2019, https://slate.com/technology/2019/01/facial-recognition-arrest-transparency-willie-allen-lynch.html.

At trial, Mr. Lynch's sole defense was misidentification—claiming that he was not the person who sold drugs to undercover officers. He was convicted and sentenced to 8 years in prison. On appeal, Mr. Lynch argued, pursuant to Brady, that prior to his trial he should have been given the photographs of the other individuals who matched as potential suspects through the FACES program. He learned of the use of FACES only during a pretrial deposition of the crime analyst who made the alleged match.
The Florida First District Court of Appeal affirmed Mr. Lynch's conviction, denying his appeal. The Court ruled that because Mr. Lynch could not demonstrate that his trial outcome would have been different if the other FACES results had been disclosed to the defense, he could not prevail. The Court pointed out that Mr. Lynch could not show that the other photos returned by the software resembled him and would have supported his misidentification defense.6 The ACLU, the Electronic Frontier Foundation, the Georgetown Center on Privacy and Technology, and the Innocence Project filed an amicus curiae brief urging the Florida Supreme Court to hear Mr. Lynch's appeal. In July 2019, the Florida Supreme Court denied discretionary review of the case.

6 Lynch v. State, No. 1D16-3290 (Fla. Dist. Ct. App. Dec. 27, 2018).

Intelligence-Driven Prosecution

Professor Andrew Ferguson presented on the emergence of intelligence-driven prosecution, which has been defined by prosecutors as focusing the collective resources of a prosecutor's office on reducing crime—violent crime in particular—through data collection and analysis, information sharing, and close coordination with law enforcement and community partners. In practice, this means amassing data from a range of sources, with varying degrees of reliability, on individuals identified as so-called "drivers" of crime. Prosecutors use this information to determine how individuals should be treated pretrial and at sentencing, and to demonstrate relationships and connections between individuals.

Three conclusions flow from the use of intelligence-driven prosecution. First, to the extent that prosecutors' offices have created these systems, they contain significant Brady material, touching on everything from the credibility of witnesses and potential biases of law enforcement, to how law enforcement identifies and treats suspects and targets of their investigations. Second, the wealth of information produced by intelligence-driven prosecution means that prosecutors may unwittingly have Brady evidence in their possession. Third, in light of the possibility that prosecutors are unknowingly in possession of Brady evidence, the shift to intelligence-driven prosecution requires that Brady be considered more expansively and with those technological advances in mind.

CAPstat: Using Technology to Uncover Brady Evidence

Cynthia Conti-Cook then presented on the Cop Accountability Project (CAPstat) of the Legal Aid Society of New York's Special Litigation Unit. The Project is a publicly accessible database that compiles complaints regarding officer misconduct from sources such as administrative proceedings, lawsuits, and media sources.7 Members of the public can search the database by officer and precinct to obtain information regarding patterns of misconduct by officers; relationships among officers who may have engaged in misconduct together; the use of force by police officers; and punishments imposed on officers for misconduct. In the courts, state laws generally shield an officer's history of misconduct from the accused, defense counsel, and the general public, thus leaving a broad source of potential Brady material inaccessible. The CAPstat database fills that gap.

7 CAPstat, https://www.capstat.nyc/ (last visited Aug. 6, 2019).
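The report does not describe CAPstat's underlying data model, so the following is only a hypothetical sketch of the kind of officer-level lookup such a misconduct database supports; every record, name, and field below is invented for illustration.

```python
# Hypothetical sketch of a CAPstat-style lookup: complaint records from
# multiple sources, grouped by officer to surface patterns of misconduct.
# The real CAPstat schema is not described in the report; all data here
# is invented.

from collections import defaultdict

complaints = [
    {"officer": "Officer 1", "precinct": 40, "source": "lawsuit",
     "allegation": "excessive force"},
    {"officer": "Officer 1", "precinct": 40, "source": "administrative proceeding",
     "allegation": "false statement"},
    {"officer": "Officer 2", "precinct": 40, "source": "news report",
     "allegation": "unlawful search"},
]

def by_officer(records):
    """Index complaint records by officer name."""
    index = defaultdict(list)
    for record in records:
        index[record["officer"]].append(record)
    return index

# Example query: every allegation on file for each officer in precinct 40.
for officer, recs in by_officer(complaints).items():
    allegations = "; ".join(r["allegation"] for r in recs)
    print(f"{officer} ({len(recs)} records): {allegations}")
```

The design point is modest but important: once records from lawsuits, administrative proceedings, and media reports share one index, patterns that no single source reveals (repeat allegations, clusters of officers) become queryable, which is precisely the Brady-relevant material state laws otherwise shield.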
Overarching Lessons Learned

This session focused on the intersection of Brady and algorithmic tools, highlighting critical points that warrant serious attention from criminal justice advocates and those engaged in the design, implementation, and oversight of algorithmic tools. Of utmost importance is how the interplay between Brady and algorithmic tools reveals the power dynamics and differences between law enforcement and people accused of crimes. Technological tools increase the already significant power imbalance between the State and the accused. Fundamentally, Brady, and discovery more generally, is meant to help equalize that imbalance. When operating as designed, Brady shifts some power from law enforcement to the accused by forcing the State to show its hand, and by arming the accused with evidence that may help defeat the criminal allegations they face. However, Brady can only do so when criminal justice actors understand how these tools operate, comprehend the nature of the data they produce, and then diligently fulfill their resultant Brady obligations. The case studies explored in this session made clear the difficulty of meeting those conditions, especially when the existence and inner workings of those systems are kept secret.

Knowledge Is Power

This session exposed significant knowledge gaps among criminal justice stakeholders about the ubiquitous nature of the technological tools law enforcement agencies use to advance their investigative work, as well as the breadth of the data those tools produce that can be material to a person's guilt, innocence, or punishment. This is particularly problematic for those tasked with defending the accused. Many defense attorneys do not know which algorithmic tools law enforcement is using or how they are being deployed. They often learn of their existence only through media reports, pretrial motion practice, discovery requests, or other procedural channels. Neither the defense, nor even law enforcement officials themselves, seem fully aware of the data these algorithmic tools produce, and what that means for their Brady obligations. This knowledge gap—whether born of willful ignorance on the part of the prosecution, or because law enforcement actively obscures investigative techniques—has significant implications.

The prosecution has an affirmative, constitutional duty to disclose Brady evidence. If prosecutors are unaware that such material exists, it is exceedingly difficult to ensure that they can fulfill that duty on their own. In other instances, the prosecution may know such evidence exists, yet not view it as Brady material. Several common justifications for nondisclosure were raised during the session. Among them are that defense counsel has access to the underlying reports and sources used by ADS to produce their analysis; that the presence of trade secrecy protections limits disclosure; and that the taint of a Brady violation can be cleansed at some other point in the process, such that the evidence would not have changed the outcome of a case.

Technical Concerns with Algorithmic Tools

Ensuring compliance with Brady is difficult when considered in light of another concern: problems with the technology itself due to biased data, technical flaws, inconsistent oversight, and other features that render the tools unreliable.
Presenters noted that the data law enforcement collects and amasses as inputs for these algorithmic tools is often not trustworthy. It is not only affected by the biases and prejudices of individuals and institutional structures, but is also susceptible to the types of errors commonly found in data collected from nontraditional and nonstandardized sources, such as nicknames, crime locations, and social media connections.8 The potential for those data points to serve as the impetus for an investigative effort highlights their centrality to the Brady analysis.

8 See Rashida Richardson, Jason M. Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 192 (2019).

Responses to Brady Resistance

Criminal defense lawyers and advocates have worked to meet the challenges of Brady enforcement, but given how high the hurdles can be, they have started to build their own Brady-oriented tools and records in an attempt to essentially fulfill the State's Brady obligation for it. CAPstat is one example: a publicly accessible database that collects complaints and lawsuits related to police misconduct. CAPstat was modeled on, and inspired by, the Invisible Institute's Citizens Police Data Project—a database that tracks public-police encounters to hold law enforcement accountable. The discussion of CAPstat underscored the difficulties of maintaining such a database. It is both time- and resource-intensive, requiring constant monitoring to ensure accuracy and effectiveness. Yet such databases are critical to filling the void left by prosecutors' continued noncompliance with Brady.

This part of the session provided some insight into how communities and criminal justice advocates might work together to hold law enforcement more accountable. One presenter noted that by developing tools used to surveil communities, law enforcement has inadvertently created an infrastructure to surveil itself. The gaze of technological and algorithmic tools can shift from traditional targets of investigation to institutional stakeholders. That shifts the balance of power in ways that are consistent with the spirit of Brady and discovery more generally.

Session 2 Recommendations

• Enact open-file discovery policies for criminal cases involving ADS. In a world where algorithmic tools are ubiquitous, opening prosecutorial files to defense counsel and the accused eliminates reliance on the prosecutor's good-faith judgment about what may constitute Brady evidence.

• Update criminal discovery rules to treat ADS as accessible, contestable, and admissible evidence. Courtroom discovery rules should be rewritten to account for the use of algorithmic tools in the criminal legal system, ensuring that defense counsel and the accused have access to ADS information, similar to the way they have access to information about analog police tools.

• Expand prosecutorial understanding of the Supreme Court's Brady Rule to include ADS evidence. A technology- and data-driven approach to prosecution demands that Brady be reconceptualized and understood for its broad applicability.
The tools that support intelligence-driven prosecution generate critically important data: investigative methods and sources of investigatory leads; potential suspects; alternate theories of the prosecution's case; impeachment material about particular witnesses; maps of connections between potential suspects and those who may be accused of crimes; past (failed) investigations of the accused and their associates; the means of arriving at the identification of a suspect; and law enforcement field reports. All represent the kinds of material that defense counsel and the accused should have access to, and which the prosecution should fully disclose as part of its Brady obligation. Each may be exculpatory evidence material to the guilt, innocence, or punishment of the accused, and therefore falls squarely within Brady. This includes prosecutors providing the defense with a list of the algorithmic tools law enforcement has used to conduct investigations of suspects in a case; the purpose for which each tool was designed; the data on which those tools rely; and other technical specifications, all of which should be documented.

• Develop ADS trainings and toolkits for the criminal defense community. The criminal defense community must be allowed meaningful access to any ADS used in the criminal justice system, including training on how to understand its operation and outputs. This also necessitates developing a defense counsel discovery toolkit to identify requests that catalog all potential discovery material in cases where the prosecution process has involved data-driven law enforcement practices.

SESSION 3: PUBLIC BENEFITS AND COLLATERAL CONSEQUENCES

Primary Presenters
Jackie Doig, Attorney at Law, Center for Civil Justice
Jennifer L. Lord, Partner, Pitt McGehee Palmers & Rivers

MODERATOR: Rashida Richardson, Director of Policy Research, AI Now Institute

Access Key Litigation Documents Here

Session Summary

In 2011, Republican Rick Snyder became the Governor of Michigan with a Republican-controlled legislature. After years of working in the technology sector, Governor Snyder teamed up with the Michigan State Legislature to create the "Michigan's reinvention" budget plan, which sought to end the State's deficit. It is within this political climate that the facts of Barry v. Lyon and Bauserman v. Unemployment Insurance Agency arose.

Barry v. Lyon

Jackie Doig discussed Barry v. Lyon, a case involving the Michigan Department of Health and Human Services (MDHHS) and its use of a matching algorithm to implement the State's "fugitive-felon" policy, an attempt to automatically disqualify individuals from food assistance based on outstanding felony warrants. Despite Snyder's austerity policy proposals and rhetoric, initial fiscal analysis revealed it would cost $345,000 to create the ADS, with virtually no state savings. Instead, the primary beneficiary of the expected termination of benefits would be the federal government. Moreover, Freedom of Information Act (FOIA) requests later revealed that MDHHS planned to use the matching algorithm as part of a media campaign to vilify people with outstanding felony warrants and individuals who rely on government benefits.
Between December 2012 and January 2015, the new algorithmic system improperly matched more than 19,000 Michigan residents, and automatically disqualified each of them from food-assistance benefits with a vague notice: "You or a member of your group is not eligible for assistance due to a criminal justice disqualification...Please contact your local law enforcement to resolve."

In the summer of 2013, a class-action lawsuit was filed on behalf of anyone who had received a disqualification notice, along with a subclass of individuals disqualified from food assistance with no determination that they were actually fleeing or actively sought by law enforcement. The complaint alleged that Michigan's automatic disqualification policy violated the federal Supplemental Nutrition Assistance Program (SNAP) statute, the Supremacy Clause, and constitutional and statutory due process requirements. Ultimately, the Sixth Circuit upheld the federal district court's ruling, enjoining the State's inadequate notices and any disqualification based on computer matching without an individualized determination. Following negotiations between the parties and the United States Department of Agriculture (USDA), the district court also required reinstating benefits to those who had been unlawfully disqualified, resulting in a lump-sum payment of $3,120 to each class member (or the actual amount of food assistance they were denied, for the few class members who opted out of the lump sum).

Bauserman v. Unemployment Insurance Agency

Jennifer Lord presented Bauserman v. Unemployment Insurance Agency, a case involving the Michigan Unemployment Insurance Agency's (UIA) failed use of the Michigan Integrated Data Automated System (MiDAS) to automate the adjudication of, and imposition of penalties for, alleged benefits fraud. To build the system, the State had turned to third-party vendors, asking them to design it to automatically treat any data discrepancies or inconsistencies in an individual's record as evidence of illegal conduct. Between October 2013 and August 2015, the system falsely identified more than 40,000 Michigan residents as suspected of fraud. Those individuals were sent an online questionnaire with pre-loaded answers, some of which triggered an automatic default finding against them. Automatic determinations of fraud also occurred if recipients failed to respond to the questionnaire within 10 days, or if the MiDAS system automatically deemed their responses unsatisfactory. The consequences were severe: seizure of tax refunds, garnishment of wages, and imposition of civil penalties—four times the amount people were accused of owing. And although individuals had 30 days to appeal, that process was also flawed.

In September 2015, a class-action lawsuit was filed in state court alleging due process violations. The case was dismissed for failure to bring the action sooner, but the decision was appealed to the Michigan Supreme Court, which unanimously reversed the lower court's dismissal and remanded the case to proceed to trial. In the meantime, UIA continues to use MiDAS, and claims that adjudications are no longer fully automated. It is unclear what (if any) changes were made, and whether there is any meaningful human review or oversight.
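Both systems failed in the same basic way: a crude data match (in Barry) or a bare data discrepancy (in Bauserman) was treated as a final determination, with no individualized review before benefits were cut or penalties imposed. The sketch below illustrates that shared failure mode in miniature. It is not the actual Michigan code, which was never made public in this form; every record, field, and rule here is invented.

```python
# Hypothetical sketch of the shared failure mode in the two Michigan
# systems: a crude data match (Barry) or any data discrepancy (Bauserman)
# is auto-treated as a final determination, with no individualized review.
# All names, records, and figures are invented for illustration.

warrants = [("SMITH", "1975-03-02")]            # outstanding-felony-warrant list
recipients = [
    {"name": "SMITH", "dob": "1975-03-02"},     # a different Smith entirely
    {"name": "JONES", "dob": "1980-07-11"},
]

def fugitive_felon_match(person) -> bool:
    # Name plus date of birth is nowhere near enough to establish identity,
    # let alone that someone is actually fleeing law enforcement -- which is
    # why the Sixth Circuit required an individualized determination before
    # any disqualification based on computer matching.
    return (person["name"], person["dob"]) in warrants

claims = [{"claimant": "JONES", "reported_wages": 400, "employer_reported": 412}]

def midas_style_fraud_flag(claim) -> int:
    # Any discrepancy at all is auto-treated as evidence of fraud; the
    # "penalty" is four times the disputed amount, per the report.
    if claim["reported_wages"] != claim["employer_reported"]:
        return 4 * abs(claim["reported_wages"] - claim["employer_reported"])
    return 0

for p in recipients:
    print(p["name"], "auto-disqualified:", fugitive_felon_match(p))
for c in claims:
    print(c["claimant"], "auto-assessed penalty: $", midas_style_fraud_flag(c))
```

Nothing in either function asks whether the match identifies the right person or whether the discrepancy has an innocent explanation; the human judgment that due process requires simply has no place to attach.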
This session provided a unique opportunity to examine the use of two different ADS with various commonalities: they were implemented in the same state, under the same political leadership, during similar time frames, and the lawsuits challenging the systems were both class actions. These connections allowed presenters to highlight some common themes regarding the political rhetoric, motivations, and practices that led to the failed implementation of both ADS, and the government practices that exacerbated their consequences.

Negligent notice

One common theme was inadequate notice. In Barry v. Lyon, class members were told they did not qualify for SNAP benefits due to "a criminal justice disqualification" and that they should "[c]ontact law enforcement for more information." No additional information was provided about the charges or how to contest the disqualification. Nor was local law enforcement instructed on how to advise individuals in any meaningful or helpful way. Later, the notices were revised to provide even less information. Not only did that make it harder to identify class members, but it discouraged people from applying for government benefits they needed and for which they qualified. Subsequently, documents obtained through FOIA requests revealed that the State had actually directed MDHHS staff not to tell residents why or how their benefits were cut.

In Bauserman v. Unemployment Insurance Agency, we saw similar issues of deficient notice, with evidence of individuals failing to receive notification of the false accusations by letter or email, and of the UIA ignoring months of initial complaints from people filing unemployment claims.

Government rhetoric and practices intended to vilify and degrade poor and marginalized communities

One additional aspect of these cases was the way in which the political rhetoric of the State mapped onto the logic of the ADS. With the ADS in both cases, the government engineered the processes, decisions, and results to frame plaintiffs as criminals who deserved automated and efficient punishment. Even after strong evidence of complaints and errors, government officials stood by their ADS as superior in their determinations. For example, in Barry, FOIA requests helped reveal that the Department of Human Services (DHS) planned to claim in the media that it was arresting thousands of individuals as felons and "bad people." This action was intended to defeat a pending federal regulation prohibiting the State from using these types of ADS.

The MiDAS system, contested in Bauserman, is part of a national trend of targeting poor people under the auspices of prosecuting "waste, fraud, and abuse" in government systems. These campaigns have increased the use of automated systems that allegedly achieve cost savings, but they have yet to clearly prove that success. As the MiDAS case demonstrates, not only were these systems flawed; they were also costly. The system hurt two categories of people. The first casualties were roughly 400 state employees who reviewed applications and discrepancies.9 They were laid off and replaced by MiDAS. Next were the thousands of falsely accused individuals—many of whom filed for bankruptcy to discharge exorbitant debt from fines and penalties until the State Attorney General intervened to challenge those claims.
Austerity rhetoric that cost the state millions of dollars and imposed significant societal costs

In Barry, economic efficiency was used to justify Michigan's use of ADS. However, the federal government recouped most of the cost savings from unclaimed benefits, while the State absorbed all the costs of building the system and defending it in court. In Bauserman, the ADS falsely flagged more than 40,000 people for unemployment fraud, and the effects were compounded by the fact that Michigan imposes the highest penalty rate in the country. Governor Snyder claimed that the State refunded $20.8 million to individuals falsely flagged for unemployment fraud, but the Michigan Auditor General's records revealed that the State took more than $100 million from residents and has paid back only $16 million to date.10 Both cases resulted in residents being discouraged from applying for government benefits, and many Bauserman plaintiffs are still suffering collateral consequences from bankruptcy records.

Uncertain and inadequate government remedy

After Barry, the State negotiated with the Obama administration to streamline the payment of back benefits to those harmed; it continues to work on fixing its notices and on developing a policy that does not rely on future ADS use. Bauserman is still pending, but the State claims that the system is no longer performing a robo-adjudication function and that there is now human oversight. Since then, parts of Michigan have experienced a political "blue wave" that has led to pushback against these Snyder-era policies. Still, problems persist.

Siloed government agencies

Another clear lesson was the danger of siloed ADS deployments. Both Michigan systems were rolled out in similar time frames, but by different agencies. Government officials with oversight capacity were disconnected from the processes, and no single office or individual was tasked with ensuring accountability for the interactions between the two systems. These examples also demonstrate the risks of centralizing data sharing, utilization, and outcomes while simultaneously decentralizing ADS management and accountability.

10 See Key Litigation Documents, https://drive.google.com/drive/folders/1qvvxFIVCzxwlnTmEV17kdYBqPi25vKZ1?usp=sharing

Collateral Consequences and Algorithmic Systems

The term "collateral consequences" describes the civil, legal, and regulatory sanctions and restrictions that result from a criminal conviction.11 Distinct from direct criminal consequences, such as incarceration or fines and fees, collateral consequences can significantly limit or prohibit an individual's access to education, employment, food, housing, and other opportunities even after their case is resolved, and can affect the rights and opportunities of the individual's family and community.12 The Michigan cases illustrate how government ADS in high-stakes social domains can amplify the effects and reach of collateral consequences and blur the lines between civil and criminal policies. For example, Barry illustrates how the failed algorithmic implementation of a criminal justice policy prohibited individuals from accessing public benefits that were rightfully theirs.
Bauserman similarly demonstrates how the failed implementation of an algorithmic fraud-detection system prevented individuals from accessing unemployment benefits they were owed and subjected many innocent people to unwarranted criminal convictions and penalties that brought yet more collateral consequences. Many individuals ended up pleading guilty to fraudulent activity they did not commit. Because some fraud convictions are considered "crimes of moral turpitude," individuals with such convictions can be barred from positions of trust, such as financial advisor or teacher, and immigrants can be subject to deportation.13 Michigan also admitted that at least 1,100 bankruptcies were traceable to these false fraud accusations. Since a bankruptcy remains on an individual's credit report for seven or more years, it can significantly affect everyday activities such as renting an apartment, seeking a job, or applying for credit. A bankruptcy record can limit an individual's access to these opportunities and subject them to higher fees or interest rates, straining their finances even further.

These cases also illustrate the harsh consequences of failed attempts to automate proper legal notice when benefits are contested. In both cases, numerous individuals failed to appeal their disqualification because they were confused by the vague notice. In Barry, some people mistakenly assumed that old misdemeanor convictions, which were not subject to the fugitive-felon policy, were the reason for the disqualification, so they did not appeal. In both cases presented at this session, the plaintiffs experienced not only immediate but also sustained harms from these system failures. Many Michigan residents have been discouraged from applying for public assistance as a result. In fact, people are still having their tax refunds seized today, despite the Michigan Attorney General's certification that seizures are no longer occurring.

11 See, e.g., The Personal Responsibility and Work Opportunity Reconciliation Act of 1996, Pub. L. No. 104-193, 110 Stat. 2105 (1996) (denying government aid, including federally subsidized housing and food stamps, to individuals convicted of drug offenses).

12 See, e.g., Ifeoma Ajunwa, The Modern Day Scarlet Letter, 83 Fordham L. Rev. 2999, 3021-22 (2015) (highlighting how implementation of the Adoption and Safe Families Act has resulted in fewer children of incarcerated parents being reunited with their biological families, which can have long-term negative effects on the family and their social network).

13 The employment consequences of fraud convictions vary by state but can include mandatory termination from certain positions or industries, or revocation of professional licenses. Employers also have great discretion in refusing to interview or hire individuals based on convictions, even in states or municipalities with ban-the-box laws or provisions that protect people with convictions from discrimination.

PERSPECTIVES FROM THE EU

Primary Presenter
Anton Ekker, Attorney at Law

MODERATOR: Rashida Richardson, Director of Policy Research, AI Now Institute

Access Key Litigation Documents Here

Session Summary

The Netherlands' Ministry of Social Affairs and Employment implemented the Systeem Risico Inventarisatie (SyRI), a risk-profiling system marketed as a tool to prevent social security, employment, and tax fraud.
The SyRI system was used by several municipal governments, but it targeted "high-crime" areas, which were also historically lower-income communities. In 2018, SyRI flagged over 1,000 individuals or households as a "fraud risk," subjecting those individuals to increased government surveillance, the risk of denial of social benefits, or fines. When the public became aware of the system, there was a backlash, with residents noting parallels to Nazi Germany during WWII. The Rotterdam City Council also objected, and the SyRI system is currently on hiatus.

Anton Ekker is a lawyer representing two organizational plaintiffs (a coalition of privacy organizations and a labor union) and two individual plaintiffs challenging the use of the SyRI system. Their legal challenge alleges due process, bias/discrimination, and privacy violations, as well as several EU-specific claims (violation of the presumption of innocence, failure to meet the requirements of Article 8 of the ECHR, and GDPR Article 29). The case is pending, awaiting a hearing at the Court of The Hague in October 2019.

Ekker provided an overview of the case and responded to the public-benefits cases presented in the third workshop session. He noted that his case shared similarities with the Michigan cases insofar as the Dutch government, like Michigan's, failed to articulate a reasonable rationale for implementing faulty systems, failed to oversee the systems once implemented, and resisted mitigating their harmful consequences.14

14 It appears the only running SyRI project, in Rotterdam, was cancelled on July 3, 2019, due to, among other reasons, privacy concerns with regard to the General Data Protection Regulation. See The Public Interest Litigation Project, Profiling and SyRI, https://pilpnjcm.nl/en/dossiers/profiling-and-syri/ (last visited September 14, 2019).

SESSION 4: ILLINOIS'S BIOMETRIC PRIVACY APPROACH

Primary Presenter
David M. Oppenheim, Attorney, Bock, Hatch, Lewis & Oppenheim

MODERATOR: Jason M. Schultz, Professor of Clinical Law, NYU School of Law; Area Lead, AI Now Institute

Access Key Litigation Documents Here

Session Summary

In our final workshop session, we examined a leading case involving the Illinois Biometric Information Privacy Act (BIPA), Rosenbach v. Six Flags. David Oppenheim, lead counsel for Rosenbach, described the case's origins and the key moments leading to the Illinois Supreme Court's influential ruling in his favor. Passed in 2008, BIPA imposes numerous restrictions on how private entities collect, retain, disclose, and destroy biometric identifiers, including retina or iris scans, fingerprints, voiceprints, and scans of hand or face geometry. Rather than rely on public enforcement, BIPA provides a private cause of action that allows individuals to sue when their biometric information has been used unlawfully.

The Rosenbach case involved a fingerprinting system implemented by Six Flags Entertainment at its Gurnee, Illinois amusement park. In 2014, as part of a "fraud-detection" system to prevent patrons from sharing season-long admission passes, Six Flags began collecting season pass holders' fingerprints and matching them at the gate to verify entry. In the summer of 2014, Stacy Rosenbach's 14-year-old son, Alexander, visited Six Flags on a school field trip. Before the trip, she bought him a season pass online and learned that Alexander would have to complete the registration process at the park.
When he arrived, the theme park digitally recorded and stored his thumbprints for future verification. Yet Six Flags never obtained permission from Rosenbach or her son, nor did it provide any written notification that it had collected and stored the thumbprints. Rosenbach sued Six Flags, claiming a violation of BIPA. In response, Six Flags moved to dismiss the case, claiming that even if it had failed to provide adequate notice or obtain consent, there was no tangible harm to Rosenbach and her son, and thus no standing to sue. The trial court denied the motion, but the appellate court reversed, holding that "technical" violations of BIPA without additional harm could not be litigated. Rosenbach appealed to the Illinois Supreme Court and won. The ruling held that even a technical violation of BIPA was sufficient for a lawsuit to proceed, especially because a core aspect of the law was prohibiting and punishing the unlawful collection of information. This point was essential because in many cases individuals would learn only of the collection, without having any direct information about downstream harms. The Court stated that BIPA is intended to be a preventative measure, an incentive to protect data. Requiring plaintiffs to prove tangible harms arising from misuse of their data would make BIPA essentially useless in most cases, which would frustrate the intent of the Illinois legislature in providing a private cause of action in the first place.

The case is now back at the trial court, in the discovery phase, and more evidence is likely to come to light. One of the key subjects for discovery is whether Six Flags shared the fingerprints with any of its other theme parks across the United States. Because each unauthorized distribution of biometric information can result in additional penalties, the case is likely to have significant implications for large platforms that share data widely (a rough sketch of how per-violation exposure compounds appears below).15

BIPA provides an interesting model for algorithmic accountability legislation. Most approaches to algorithmic accountability assume that government investigation and enforcement are the most effective mechanisms. But Rosenbach demonstrates that private causes of action can also prove useful. If individuals have standing to sue over prohibited methods of data collection, manipulation, or application, governments and companies using ADS in these ways will be forced to build in accountability on the front end, and to design their systems to support informed written consent for each use. We are seeing more of these approaches, such as the European Union's General Data Protection Regulation (GDPR), the California Consumer Privacy Act, and the Algorithmic Accountability Act of 2019, which was recently introduced in Congress. That said, workshop participants generally agreed that informed consent protects individuals only in limited contexts, such as an optional trip to a theme park, and that for many aspects of modern digital life, governments and companies can force people to agree to whatever terms are deemed necessary to collect and use personal information.16 Attendees expressed concern that although BIPA prohibits commercial use of biometric information, it does little to prevent harmful government uses.
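The significance of per-violation penalties is easiest to see with rough numbers. In the sketch below, the $1,000 and $5,000 liquidated-damages tiers are BIPA's own (740 ILCS 14/20); the visitor counts, and the assumption that each collection and each onward disclosure accrues as a separate violation, are hypothetical, since how violations accrue remained contested at the time of the workshop.

```python
# Back-of-the-envelope BIPA exposure. Damages tiers are statutory
# (740 ILCS 14/20); all counts and accrual assumptions are hypothetical.
NEGLIGENT_DAMAGES = 1_000   # per negligent violation
RECKLESS_DAMAGES = 5_000    # per intentional or reckless violation

def exposure(people: int, violations_each: int, per_violation: int) -> int:
    """Total liquidated damages if every act counted as its own violation."""
    return people * violations_each * per_violation

# One unconsented fingerprint collection per pass holder:
print(exposure(100_000, 1, NEGLIGENT_DAMAGES))  # 100000000 ($100M)

# If sharing the prints with two sister parks each counted as well:
print(exposure(100_000, 3, NEGLIGENT_DAMAGES))  # 300000000 ($300M)
```

Even at the negligence tier, per-violation accrual is what gives a private right of action its deterrent force against entities that replicate biometric data across systems.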
Further, the companies that make the technology are not held liable; only the companies that implement it are.17 Another outstanding question is whether BIPA covers employers that collect biometric information on their employees.

15 This was also implicated by the recent Patel v. Facebook decision in the Ninth Circuit. See Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019), available at https://www.documentcloud.org/documents/6248797-Patel-Facebook-Opinion.html.

16 Dinah Wisenberg Brin, New Illinois Bill Sets Rules for Using AI with Video Interviews, SHRM, July 1, 2019, https://www.shrm.org/resourcesandtools/legal-and-compliance/state-and-local-updates/pages/illinois-ai-video-interviews.aspx

17 See Chris Burt, "Vendors Not Liable for Employers' Biometric Procedures as BIPA Details Challenged," Biometric Update, September 11, 2019, https://www.biometricupdate.com/201909/vendors-not-liable-for-employers-biometric-procedures-as-bipa-details-challenged

Session 4 Recommendations

• Follow the lead of Illinois by adopting statutes similar to its Biometric Information Privacy Act (BIPA), in light of its effectiveness as an algorithmic accountability framework.
• BIPA-style statutes should ensure a private right of action, standing to sue for non-consensual or non-specific data collection or use, and statutory fines for each violation.
• BIPA-style laws should add strong prohibitions on government use of biometrics and impose liability on vendors who assist or provide the capacity for violations.

These improvements would help to reduce the power and information asymmetries plaguing current privacy and data protection laws.

APPENDIX: CASES DISCUSSED BY SESSION

Session 1: You Won! Now What?

K.W. ex rel. D.W. v. Armstrong, 298 F.R.D. 479 (D. Idaho 2014)

Case Summary: Idaho's state Medicaid program began using a new automated decision-making system to determine Medicaid payments for adults with intellectual and developmental disabilities. As a result, many participants saw their payments drop drastically, leading to horrific living conditions for participants who were no longer receiving enough hours of in-home care and services. Plaintiffs filed a class-action suit, and the Court preliminarily ordered the State to disclose its formula, fix the formula so that participants received the proper amount of funds, and develop and implement procedural protections for those who had already been impacted. The case was subsequently settled. As part of the settlement, the State would develop a new formula; in the meantime, participants would receive payments at the highest level the existing tool calculated as an option.

Procedural Posture: ONGOING. The Court granted Plaintiffs preliminary injunctive relief; the case was subsequently settled. Litigation is ongoing around implementation and next steps.

Ark. Dep't of Human Servs. v. Ledgerwood, 530 S.W.3d 336 (Ark. 2017)

Case Summary: In 2016, without notice, the State introduced an algorithm that drastically reduced Medicaid attendant-care hours for many low-income adult Medicaid participants living with disabilities. As a result of losing attendant-care hours, many participants experienced horrific living conditions. Prior to the introduction of the algorithm, expert nurses had assessed participants' attendant-care needs on an individualized basis.
Plaintiffs sued, and the Court ordered an injunction on the basis that the program had been improperly promulgated. DHS then issued an emergency rule and continued using the same program for two months. DHS subsequently developed and began using a similarly nontransparent automated decision-making system, though one that returned to allowing expert nurses to conduct assessments and exercise discretion over the number of hours.

Procedural Posture: ONGOING. The Court granted Plaintiffs injunctive relief. Litigation is ongoing about the inadequacy of protections in the appeals process.

DC Juvenile Court Risk Assessment Case

Case Summary: The Public Defender Service (PDS) in DC challenged the application of the SAVRY, a risk assessment tool that purports to assess a young person's risk of future violence as part of the sentencing process. The young person raised a Daubert challenge in his case, showing that many of the factors the tool uses are racist or overlap with normal child brain development. The Court ruled that the SAVRY could not be used in this case, on an as-applied basis, due to tallying errors, but did not rule on the use of the SAVRY more broadly. The Court further ruled that many SAVRY factors could still be taken into account at sentencing.

Procedural Posture: CONCLUDED. The Court ruled in favor of Defendant on an as-applied basis due to as-applied errors; no ruling on the algorithm more broadly.

Houston Federation of Teachers v. Houston Independent School District, 251 F. Supp. 3d 1168 (S.D. Tex. 2017)

Case Summary: In 2010, the Houston Independent School District (HISD) began using the private company SAS's Educational Value-Added Assessment System (EVAAS), a system that promised to improve teaching quality by providing standardized assessments of teachers. The teachers' union, the Houston Federation of Teachers, along with other employees of HISD, sued HISD. The Court ruled in favor of the Plaintiffs on procedural due process grounds, noting that teachers have a property interest in their continued employment, and that SAS's secrecy about its algorithm prevented teachers from accessing, understanding, or acting on their own evaluations.

Procedural Posture: CONCLUDED. The Court ruled in favor of Plaintiffs on procedural due process grounds.

Session 2: Criminal Defense Access to Law Enforcement ADS

State v. Hickerson, 228 So. 3d 251 (La. Ct. App. 2018)

Case Summary: Kentrell Hickerson of New Orleans was convicted at trial of what amounted to criminal conspiracy and other charges, and was sentenced to 100 years in prison. At the time of Mr. Hickerson's prosecution, the New Orleans Police Department had been using Palantir's Gotham risk assessment tool, which built social-networking surveillance graphs that included information about city residents' ties to one another. Given the nature of Mr. Hickerson's conspiracy charges, he moved for a new trial on the basis that the Gotham graphs were Brady material. The Court remanded the case to the trial court, which denied Mr. Hickerson's motion on the basis of the prosecution's claim that the Gotham graphs were not involved in its prosecution of Mr. Hickerson.

Procedural Posture: CONCLUDED. The Court remanded the motion for a new trial to the trial court, which subsequently denied it.

Lynch v. State, 260 So. 3d 1166 (Fla. Dist. Ct. App. 2018)
Case Summary: Willie Allen Lynch was convicted of selling 50 dollars' worth of crack cocaine to an undercover officer. The officers could not identify the person who had sold them the crack cocaine, so they left the scene without making an arrest. A law enforcement analyst used a cell phone photograph the officers took during the sale and a facial recognition program called the Face Analysis Comparison Examination System (FACES) to produce five possible suspects, including Mr. Lynch. FACES uses a system of stars, with unknown internal reliability, to rank possible suspects; it produced one star for Mr. Lynch and no stars for the others. The analyst sent Mr. Lynch's information to the case investigators, declining to send information about any of the other possible suspects. Mr. Lynch put forth a misidentification defense at trial and argued on appeal that the photographs of the other possible suspects should have been produced to him as Brady material. The appellate court affirmed Mr. Lynch's conviction, holding that he could not demonstrate that the result of his trial would have been different had the requested material been produced, and thus he could not be granted a new trial.

Procedural Posture: CONCLUDED. The Court denied Mr. Lynch's motion for a new trial. In July 2019, Florida's Supreme Court denied discretionary review of the case.

Session 3: Public Benefits and Collateral Consequences

Barry v. Lyon, 834 F.3d 706 (6th Cir. 2016)

Case Summary: The Michigan Department of Health and Human Services (MDHHS) began using a matching algorithm that automatically disqualified individuals from food assistance if the system determined they had an outstanding felony warrant. More than 19,000 people, predominantly residents of Detroit and Flint, were improperly matched, automatically disqualified, and given only vague notice. Plaintiffs filed a class-action suit that included anyone who had been disqualified, as well as a subclass of people who had been disqualified with no determination that they were "fleeing," which was the supposed impetus of the felony disqualification program. The federal district court ruled that the automatic disqualification policy violated the federal Supplemental Nutrition Assistance Program (SNAP) statute, the Supremacy Clause, and constitutional and statutory due process, and required that people's benefits be reinstated. The Sixth Circuit upheld the district court's ruling.

Procedural Posture: CONCLUDED. The Sixth Circuit upheld the district court's ruling in favor of the Plaintiffs, ordering the class's benefits reinstated.

Bauserman v. Unemployment Ins. Agency, 503 Mich. 169 (2019)

Case Summary: The Michigan Unemployment Insurance Agency (UIA) began using a third-party-developed automated system, the Michigan Integrated Data Automated System (MiDAS), to adjudicate and impose penalties on people for benefits fraud. MiDAS automatically categorized any discrepancies in an individual's automated file as fraud, falsely accusing more than 40,000 people of suspected fraud. These individuals were sent prepopulated online questionnaires that triggered an automatic finding of fraud in many people's cases. Automatic determinations of fraud also occurred if recipients failed to respond within 10 days, or if MiDAS deemed their responses unsatisfactory.
As a result, Plaintiffs experienced devastating consequences, including tax-refund seizures, wage garnishment, and the imposition of civil penalties with no notice. The appeals process provided inadequate due process. Plaintiffs filed a class-action suit, which the trial court dismissed; Michigan's Supreme Court reversed and remanded the case for trial.

Procedural Posture: ONGOING. The Court reversed the trial court's dismissal of the case, which is set for trial.

Session 3.5: Perspectives from the EU

Nederlands Juristen Comité voor de Mensenrechten (NJCM) c.s. vs. de Staat der Nederlanden ("Ministerie van Sociale Zaken en Werkgelegenheid")
Translated: Netherlands Lawyers' Committee for Human Rights (NJCM) et al. vs. State of The Netherlands (Department of Social Affairs and Employment)

Case Summary: The Netherlands' Ministry of Social Affairs and Employment implemented the Systeem Risico Inventarisatie (SyRI), a risk-profiling system aimed at preventing social security, employment, and tax fraud. The SyRI system was used by several municipal governments to target poor and working-class communities under the guise of targeting "high-crime" areas. In 2018, SyRI flagged over 1,000 individuals or households as a "fraud risk," subjecting those individuals to increased government surveillance, risk of denial of social benefits, or fines. A coalition of privacy groups and labor unions, as well as two individuals, sued to challenge the use of SyRI, alleging due process violations and discrimination, among other claims.

Procedural Posture: ONGOING. Scheduled for a hearing at the Court of The Hague in October 2019.

Session 4: Illinois's Biometric Privacy Approach

Rosenbach v. Six Flags Entm't Corp., 2019 IL 123186 (2019)

Case Summary: Six Flags Entertainment implemented a biometric "fraud-detection" system in its amusement parks. In 2014, the Plaintiff's minor child visited a Six Flags amusement park, where the park digitally recorded and stored his fingerprints as part of its biometric data collection system, without obtaining permission from his parents or guardians and without any written notice. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, imposes restrictions on how private entities collect and retain biometric data and provides for a private cause of action. Plaintiff sued Six Flags Entertainment, claiming a violation of BIPA. Six Flags moved to dismiss on the basis that Plaintiff and her son suffered no tangible harm. The trial court denied Six Flags' motion, but the appellate court reversed. In January 2019, the Illinois Supreme Court held that, because the intent of BIPA was to protect data, the "technical" harm of collecting Plaintiff's son's data was sufficient to give Plaintiff standing and allow the case to move forward.

Procedural Posture: ONGOING. In the discovery phase.