February 24, 2015

Jannelle Kubinec
Director of National, State and Special Projects
WestEd
730 Harrison Street
San Francisco, California 94107

Via email only (jkubine@wested.org)

Re: SBE March 2015 Agenda Item – Evaluation Rubrics

Dear Ms. Kubinec:

We represent a coalition of civil rights, advocacy, community, parent, student, and other organizations that have worked diligently on the passage and implementation of the Local Control Funding Formula (LCFF). LCFF creates an historic opportunity to focus resources on helping California's neediest students overcome the barriers they face in closing the achievement gap and graduating college and career ready. It also promises a new level of transparency and local engagement for parents, students, and community members in the design of their local schools.

As you know, in an effort to give life to these objectives, we have commented jointly multiple times over the last year regarding the State Board of Education's implementation of LCFF. With these comments, we address the evaluation rubrics required under LCFF. Specifically, we respond to the first Conceptual Draft for the rubrics published by WestEd late last month and recommend a direction for moving forward.

We have enclosed a working draft of a Data Metric Analysis document that reflects proposed metrics and proposed targets for performance and improvement relative to those metrics. Additionally, we propose some structural changes to the Inquiry into Practice section of the Conceptual Draft and are working on an alternative set of prompts that we will share in the future. We hope that the State Board and the individuals responsible for developing the rubrics adopt these suggestions so that the rubrics can serve their intended purpose of driving continuous improvement while closing the achievement gaps experienced by our neediest students.
Recommended Design Principles for Evaluation Rubrics

As an initial matter, we remind you of some of the design principles for the evaluation rubrics from our January 9, 2015 letter to the State Board that are particularly relevant to the current Conceptual Draft, including:

• Establish Uniform Statewide Standards for Both "Performance" and "Expectation for Improvement": The State Board must include in the evaluation rubrics "standards for school district and individual schoolsite performance" and standards for district and schoolsite "expectation for improvement."[1] The rubrics should thus set performance targets and improvement targets that will serve as statewide standards for the metrics required by statute in the 8 state priority areas.[2]

[1] Cal. Educ. Code § 52064.5(c) (emphasis added).
[2] Id.

• Maintain an Equity Focus on Closing Achievement Gaps: Differing levels of expectation for improvement should be set for the unduplicated student groups and other numerically significant student subgroups to produce a closing of the achievement gap.

• Be Supportive of Student, Parent, and Stakeholder Engagement; Comprehensive, Yet Accessible; and Transparent: The rubrics should be transparent to the public, cover the full range of outcomes on which LEAs are focused without sacrificing accessibility, and support community engagement in the rubric review/LCAP revision process.

• Trigger Action in a Timely Manner with Clarity of Who Is Responsible for Action: When performance issues are identified, the rubrics should ensure a prompt, clear protocol of response that identifies which entity (LEA, COE, Collaborative, SPI, or other body) is available or required to act and what level of support, assistance, or intervention is appropriate by when.
• Be Inquiry-Prompting: To promote effective reflection and continuous improvement, the rubrics should facilitate discussions about why outcomes have or have not been met for all students and for specific subgroups or schools, why the LEA has made specific choices in its program and/or LCAP approach, and whether those choices should be re-evaluated based on the outcomes achieved.

Data Metric Analysis

The attached Excel file reflects our current working draft of a Data Metric Analysis for the evaluation rubrics. Just as WestEd, the Rubrics Design Group, and the State Board are working through a conceptual draft, so too is our coalition of grassroots community organizations and equity advocates. Though still a work in progress, we share this with you at this stage to help advance the current conceptual conversation.

Format – Data Metric Analysis. First, we offer an explanation of the Data Metric Analysis spreadsheet format. This format blends aspects of the LCFF data metric analysis (page 5 of the WestEd rubrics concept document), the sample metrics attachment (pages 11-13 of the WestEd rubrics concept document), the one-pager passed out at the info session (LCFF State Priorities: Data Metrics and Data Availability), and independent work of our coalition.

There are separate sheets within the Excel file for the three "buckets" of state priorities—Conditions of Learning, Pupil Outcomes, and Engagement. Within each bucket, the document lists the relevant state priorities, and within each priority it has a row for each proposed metric. As for the columns:

• The first two columns have already been described: they are the relevant state priority and proposed metrics, respectively.

• The next column indicates whether the metric is intended to be Required ("R") (all LEAs would track it) or Suggested ("S") (LEAs could opt to use the metric).
• The fourth column addresses the data source, with the options being: State data ("S") (meaning the state has in its possession uniform data from LEAs), local uniform ("U") (meaning the state does not compile it, but all districts collect and should have readily accessible the same data), and local data only ("L") (meaning either that there is no standardized way districts capture the data or that some, but not all, districts capture the data).

• The fifth and sixth columns are proposed state targets for performance and improvement, respectively.[3]

• The final column in the main chart is for comments, with further explanation or context for the proposed metric or how it would be derived.

Our approach assumes that the same underlying data metric will be used across the three components (LEA, Equity, School). This differs conceptually from the approach within the WestEd sample metrics (which, at times, included different metrics). We believe maintaining the same underlying data metric across the three components will improve accessibility.

Key Conceptual Features – Data Metric Analysis. The following are some of the key conceptual features in our proposed Data Metric Analysis:

• As noted, we believe the LCFF statute requires that the State Board establish uniform state performance and improvement standards. Anything less than uniform statewide standards would undercut, if not irreparably impair, meaningful accountability for ensuring equality of educational opportunity, improving student outcomes, and closing the achievement gap for all students. And anything less than uniform state standards would significantly impair public confidence in LCFF and the state's obligation to ensure a uniform system of public schools. (Local performance and improvement targets would be appropriate where only non-uniform local data exists to measure a metric under a state priority or where a local priority and its corresponding metric were at issue.)
• To establish state performance standards on certain basic metrics, such as fully credentialed teachers or sufficient instructional materials, we urge the Board to establish standards that expect 100% of LEAs and schools to meet the standard for all students, as is already required by law.

• To establish state performance standards on most of the remaining metrics, we recommend (at least as part of this first round of standard-setting) that the State Board determine the state standard based on an ambitious but achievable level of current LEA or school performance statewide. As such, we suggest the State Board analyze, for each identified metric, the current performance distribution of LEAs or schools across the state and set the expected performance target at a meaningful percentile level. As an example, we have suggested setting the 80th percentile of the statewide performance distribution as the target for most outcome measures. For A-G completion, this would mean a rate of about 50% A-G completion for graduates of a school at the 80th percentile, as set forth below:

[3] We endorse and have adopted WestEd's proposed framework of tracking the data across three "components"—LEA, Equity, and School. These two columns are fully filled in for the LEA component. There are two separate sections to the right of the main chart for the targets related to the Equity and School components. As a work in progress, we have proposed targets for each metric in the LEA component and have proposed targets for some metrics, but not others, in the Equity and School components.

A-G Metric (Percent of graduates completing A-G requirements). This chart shows the distribution of schools by the percent of their graduates that complete A-G requirements. (The analysis is limited to schools with at least 10 graduates, some 2,032 schools total.)
[Chart: Graduates Completing A-G Course Requirements (All Students) — histogram of schools by percent of graduates completing A-G course requirements, marking the average school (29%) and the 80th percentile (51.8%).]

Basic statistics (for all schools with at least 10 graduates):
  o School median – 27.2%
  o School average – 29.0%
  o 70th percentile – 42.0%
  o 75th percentile – 46.5%
  o 80th percentile – 51.8%

• The recommendation of using the 80th percentile as a consistent performance target represents an aspirational but—as evidenced by the 20% of LEAs or schools already achieving it—attainable goal. By way of comparison, when an 800 API was originally set as the goal for school performance, an 800 represented the 87th percentile of the performance distribution for elementary schools statewide. Moreover, the consequences for not meeting performance objectives reflected in the API were potentially quite severe, in contrast to the focus within the state's emerging accountability system on analyzing outcomes to inform focused technical assistance and support. We also would propose that this target be reassessed regularly (for example, every 3-5 years) and adjusted as necessary to support a system of continuous improvement and student growth.

• We expect that for each outcome measure, the percentile performance target would differ somewhat, as appropriate, based on what state data show relative to the unit of analysis. Thus, for example, LEA 80th percentile performance targets and schoolsite 80th percentile performance targets will differ somewhat by virtue of the fact that the LEA distribution will fall differently than that for individual schools. We further recommend that school-level performance targets be established separately according to grade spans, namely elementary, middle, K-8, or high schools.

• Performance targets for different student subgroups should not differ from the targets set for LEAs or individual schoolsites overall.
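The percentile-based target-setting described above reduces to a simple computation over the statewide distribution of school outcomes. A minimal sketch follows; the school rates below are hypothetical stand-ins (the actual analysis would run over the roughly 2,032 schools with at least 10 graduates):

```python
# Sketch: set a performance target at a chosen percentile of a
# statewide distribution of school outcome rates.
# The rates below are hypothetical, not real A-G completion data.
school_rates = [12.0, 18.5, 24.0, 27.2, 29.0, 33.5, 41.0, 46.5, 51.8, 68.0]

def percentile_target(rates, pct):
    """Value at the given percentile, interpolating between closest ranks."""
    ordered = sorted(rates)
    k = (len(ordered) - 1) * pct / 100.0   # fractional rank of the percentile
    lo = int(k)
    hi = min(lo + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (k - lo)

target = percentile_target(school_rates, 80)
print(f"80th percentile target: {target:.1f}% A-G completion")
```

With real statewide data, applying the same computation separately to the LEA-level and school-level distributions (and by grade span) would yield the distinct targets proposed here.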
Thus, in our A-G example above, all subgroups would have the same performance target of 51.8% of graduates completing A-G as the school overall.

• Because low-performing student subgroups will have a larger gap to close to reach the performance target than the average student or students from high-performing subgroups, those subgroups will have higher incremental annual improvement targets.

• Over a set period of time, gaps for overall LEA and school performance and subgroup gaps should be incrementally closed to 0. In thinking about standard expectations for improvement, we would recommend that the SBE look at historical trends for improvement and consider where there has been the most effective system change and how long it has taken to see significant improvement in student outcomes. Thus, the time period for closing the gaps could differ for different metrics if there is a reasonable basis for such treatment. We have suggested a typical term of 5-7 years for expecting an LEA and school to meet its performance target.[4]

• Intervention standards: We also believe the State Board should develop standards for when technical assistance or intervention should be provided and mandated by county offices, the California Collaborative for Educational Excellence, and, ultimately, the State Superintendent. We look forward to working with the State Board to develop general parameters for the field concerning when to expect technical assistance or intervention.

Reflection of Practice Narrative

We also are developing a proposed draft Reflection of Practice document, which would be a narrative component to be completed as part of the evaluation rubrics. Under our approach, the reflection of practice component of the rubrics would effectuate the principle that the rubrics should prompt inquiry into practice and support a process of continual improvement.
As we envision it, the prompts would be crafted to guide LEAs through a self-assessment of why the outcomes for the metrics identified in the Data Metric Analysis are what they are. Put differently, the prompts would focus the inquiry on whether the LEA's strategies for improving outcomes are working; if not, why not, and if so, why. As we develop the draft, however, we wanted to share that our proposed approach will differ from the WestEd conceptual draft in two material respects.

First, our proposal will focus the inquiry on the outcomes reflected in the data. The inquiry will be linked to, and in fact flow directly from, the Data Metric Analysis, prompting LEAs to review the actual outcomes relative to performance and improvement targets—for the LEA as a whole and with an equity focus on unduplicated and racial/ethnic subgroups and across school sites—and then to assess the key actions, services, and programs that the LEA has implemented and might implement in light of these outcomes. As such, our prompts will direct self-reflection into actual practice and, we believe, will promote meaningful reflection and continual improvement.

In contrast, the narrative prompts in the conceptual draft focus inquiry on the process the LEA used in designing the relevant plan(s). As the conceptual draft states, the narrative prompts are based on the theory of action that supports good planning. This approach tests the underlying logic model that the LEA used, rather than beginning the inquiry with the actual outcomes reflected in the Data Metric Analysis.

[4] An alternative set of evaluation rubrics and state standards will likely need to be established for schools that are serving highly mobile and/or at-risk students, in the same vein as the current Alternative Schools Accountability Model (ASAM).
We believe that the evaluation rubrics should evaluate outcomes and guide inquiry into how current practices drive the actual outcomes, both in terms of areas of strength and areas for growth.

Second, our proposal will not include a scoring protocol in conjunction with the Reflection of Practice, as WestEd's conceptual draft does. Evaluation rubrics must evaluate outcomes, but the Data Metric Analysis already serves this critical function by assessing progress in general and also relative to the performance and improvement targets. The value added by the narrative prompts, we believe, is to guide meaningful inquiry into practices, specifically focused on how the practices impact the outcomes. In fact, we believe that introducing a separate scoring component in conjunction with the narrative prompts would only serve to complicate the tool and confuse the inquiry, while improperly diluting the significance of the performance and improvement targets that must, by statute, be included in the rubrics. Accordingly, we believe that the narrative component of the rubrics should be focused solely on assisting LEAs in assessing why the outcomes are what they are, relative to their actual educational programs.

***

Thank you for the opportunity to comment. We look forward to continuing to work with WestEd and the State Board of Education to realize the full promise of LCFF.

Sincerely,

John Affeldt
Managing Attorney
Public Advocates Inc.

Martha Arevalo
Executive Director
CARECEN (Central American Resource Center)

Kevine Boggess
Director of Youth Organizing
Coleman Advocates for Children & Youth

Jan Gustason Corea
CEO
CABE

Oscar E.
Cruz
President & CEO
Families In Schools

Annie Fox
Education Policy Lead
PICO

Melia Franklin
Executive Director
Bay Area Parent Leadership Action Network

Xilonin Cruz Gonzalez
President
Californians Together

Taryn Ishida
Executive Director
Californians for Justice

Akua Jackson
Executive Director
Youth Together

Bill Lucia
President and CEO
EdVoice

Warren Quan
Executive Director
California State Conference of the NAACP

Luis Santana
Executive Director
Reading and Beyond

Dave Sapp
Director of Education Advocacy/Legal Counsel
ACLU of California

Gloria Scoggins
President
The BlackBoard of West Contra Costa

Ryan J. Smith
Executive Director
The Education Trust-West

David Valladolid
President & CEO
Parent Institute for Quality Education (PIQE)

Jackie Thu-Huong Wong
Director, FosterEd: California
National Center for Youth Law

cc: President Mike Kirst & Members, California State Board of Education
    Karen Stapf Walters, Executive Director, California State Board of Education
    Judy Cias, Chief Counsel, California State Board of Education
    Brooks Allen, Deputy Policy Director and Assistant Legal Counsel, California State Board of Education
    Cathy McBride, Governor's Office