Greenville County Schools - South Carolina


Greenville County Schools, South Carolina
The 2013 Broad Prize for Urban Education

CONTENTS
   2  Background Information
   3  Trends in Overall Reading, Mathematics, and Science Proficiency
READING
   4  Reading Performance and Improvement at the Proficient or Above Level
   5  Reading Performance and Improvement at the Advanced Level
   6  Reading Proficiency Gaps
   7  Standardized Residuals for Reading
MATHEMATICS
   8  Mathematics Performance and Improvement at the Proficient or Above Level
   9  Mathematics Performance and Improvement at the Advanced Level
  10  Mathematics Proficiency Gaps
  11  Standardized Residuals for Mathematics
SCIENCE
  12  Science Performance and Improvement at the Proficient or Above Level
  13  Science Performance and Improvement at the Advanced Level
  14  Science Proficiency Gaps
  15  Standardized Residuals for Science
  16  High School Graduation Rates
  17  College Readiness Data
  18  Methodology and Technical Notes

© 2013 THE ELI AND EDYTHE BROAD FOUNDATION · PREPARED BY RTI INTERNATIONAL

Background Information

                                           2008     2009     2010     2011
District characteristics
  Locale(1)                                  21       21       21       21
  Number of schools                          94       94       94       96
Student characteristics
  Enrollment                             69,444   70,441   70,969   71,930
  District size rank(2)                      53       50       49       47
  Percent low-income students(3)             39       42       46       46
  Percent non-White students                 39       39       39       37
  Percent of students by race/ethnicity
    African American                         27       26       26       24
    Asian(4)                                  2        3        3        2
    Hispanic                                 10       10       11       12
    White                                    60       60       60       59
    American Indian/Alaska Native             0        0        0        0
    Hawaiian Native/Pacific Islander(4)       —        0        0        0
    Two or more races(5)                      —        0        0        3
    Not reported                              1        1        1        0
  Percent English language learners           4        8        9        9
  Percent students with disabilities         15       15       14       14
District expenditures
  Total current expenditures per pupil   $8,174   $8,063   $7,875        —
  Instructional expenditures per pupil   $4,640   $4,698   $4,530        —
State expenditures
  Total current expenditures per pupil   $9,268   $9,352   $9,173        —
  Instructional expenditures per pupil   $5,258   $5,367   $5,252        —

SOURCE: Analysis of data from the U.S. Department of Education, National Center for Education Statistics, Common Core of Data (CCD) and from the U.S. Census Bureau.

— Not available. † Data were suppressed due to unreliability. See methodology section. 1 As defined by CCD, locale code 11 represents a large city, code 12 represents a midsize city, and code 21 represents a suburb of a large urban area. 2 District size rank is based on enrollment in local school districts in the 50 states and DC, and does not include other district types or territories. 3 Low-income students are defined as eligible for Free or Reduced-Price School Lunch (FRSL). 4 Prior to 2011, some states combined Asian and Hawaiian Native/Pacific Islander categories. 5 As of 2011, all states reported a “Two or more races” category; however, some states began reporting this category as early as 2009.

NOTES: CCD data for 2012 and 2011 expenditures data from the U.S. Census were not available at the time of this analysis.

Grades included in analysis

Subject       Level        Most recent test included in analysis           2009     2010     2011     2012
Reading       Elementary   Palmetto Assessment of State Standards (PASS)   3, 4, 5  3, 4, 5  3, 4, 5  3, 4, 5
              Middle       Palmetto Assessment of State Standards (PASS)   6, 7, 8  6, 7, 8  6, 7, 8  6, 7, 8
              High         High School Assessment Program (HSAP)           10       10       10       10
Mathematics   Elementary   Palmetto Assessment of State Standards (PASS)   3, 4, 5  3, 4, 5  3, 4, 5  3, 4, 5
              Middle       Palmetto Assessment of State Standards (PASS)   6, 7, 8  6, 7, 8  6, 7, 8  6, 7, 8
              High         High School Assessment Program (HSAP)           10       10       10       10
Science       Elementary   Palmetto Assessment of State Standards (PASS)   3, 4, 5  3, 4, 5  3, 4, 5  3, 4, 5
              Middle       Palmetto Assessment of State Standards (PASS)   6, 7, 8  6, 7, 8  6, 7, 8  6, 7, 8
              High         End-of-Course Examination Program (EOCEP)       10       10       10       10

SOURCE: State education agency.

— Not available. NOTES: Italics indicate tests were not comparable to other years. 2011 and 2012 data at the high school level are based on end-of-course exam results, which include some data for middle school students (less than 1 percent). In 2011, the state implemented new federal race/ethnicity data collection and reporting requirements; as a result, trends for racial/ethnic subgroups should be interpreted with caution. At the high school level in 2011, the state began a rolling implementation of the Biology EOCEP for accountability purposes, replacing the Physical Science EOCEP. The transition was completed in 2012; 2011 and 2012 results are comparable neither to each other nor to prior years and, thus, were excluded from trend analysis.

Description of district: 2008–2011
State test information: 2009–2012

Trends in Overall Reading, Mathematics, and Science Proficiency

Percentage of all students in the district and the state(1) scoring at or above proficient in reading, mathematics, and science in elementary, middle, and high school: 2009–2012

SOURCE: Analysis of state test data. 1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section.

NOTES: See tables on pages 4, 8, and 12 for details.

[Figure: line charts of district and state(1) proficiency rates (percent, 0–100) in reading, mathematics, and science at the elementary, middle, and high school levels, 2009–2012.]

Reading Performance and Improvement at the Proficient or Above Level

Percentage of students in the district and the state(1) scoring at or above proficient in reading: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                79  81  82  82   1  |  3  3
  African American   65  67  68  67   1  |  5  4
  Asian               †
  Hispanic           65  71  72  72   2  |  4  2
  White              86  87  89  89   1  |  3  3
  Low income         67  71  72  71   2  |  4  3
  Non-low income     89  90  91  92   1  |  3  3
State(1)
  All                78  78  78  78   0
  African American   66  67  67  65   0
  Asian               —
  Hispanic            †  72  73  72   0
  White              87  86  87  87   0
  Low income         69  70  71  70   0
  Non-low income     90  90  90  91   0

MIDDLE
District
  All                70  69  71  73   1  |  3  3
  African American   53  50  51  53   0  |  6  5
  Asian               †
  Hispanic           58  61  63  65   2  |  6  3
  White              79  78  80  82   1  |  3  3
  Low income         55  55  57  60   2  |  5  3
  Non-low income     81  82  83  86   1  |  3  4
State(1)
  All                69  68  69  70   0
  African American   55  54  53  55   0
  Asian               —
  Hispanic            †   —   —  65   —
  White              80  79  79  81   0
  Low income         58  57  57  60   1
  Non-low income     83  83  83  84   1

HIGH
District
  All                56  61  69  65   3  |  1  4
  African American   31  37  45  41   4  |  3  3
  Asian               †
  Hispanic           41  45  55  51   4  |  3  6
  White              69  73  81  76   3  |  2  5
  Low income         33  41  51  44   4  |  3  3
  Non-low income     68  74  80  78   4  |  1  5
State(1)
  All                49  54  60  57   3
  African American   31  37  42  39   3
  Asian               —
  Hispanic            —
  White              63  66  72  69   2
  Low income         32  39  45  42   4
  Non-low income     64  68  74  72   3

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state.
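The decile ranks above place the district among all districts in the state. The report’s exact ranking procedure lives in its methodology section (not reproduced here); as a rough illustration only, with made-up data, a decile rank can be assigned like this:

```python
# Illustrative decile ranking (not the report's exact method): districts are
# ranked on a measure; decile 1 is the top 10 percent, decile 10 the bottom.
rates = [82, 78, 91, 65, 70, 88, 74, 60, 95, 85]  # hypothetical 2012 rates

def decile_rank(value, values):
    # The share of districts scoring strictly higher determines the decile.
    n_above = sum(v > value for v in values)
    return n_above * 10 // len(values) + 1

print(decile_rank(95, rates))  # top district falls in decile 1
print(decile_rank(60, rates))  # bottom district falls in decile 10
```

With ten districts each decile holds exactly one district; in a full state, ties and the report’s own conventions may shift boundary cases.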

Reading Performance and Improvement at the Advanced Level

Percentage of students in the district and the state(1) scoring at the advanced(2) level in reading: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                42  45  46  50   3  |  3  3
  African American   21  25  24  28   2  |  4  4
  Asian               †
  Hispanic           26  30  33  35   3  |  4  3
  White              52  55  57  61   3  |  2  3
  Low income         25  29  31  34   3  |  4  3
  Non-low income     56  61  61  67   4  |  3  3
State(1)
  All                39  43  42  45   2
  African American   23  26  25  26   1
  Asian               —
  Hispanic            †  33  34  35   1
  White              52  55  54  58   2
  Low income         26  30  30  32   2
  Non-low income     57  61  60  65   2

MIDDLE
District
  All                30  37  40  41   4  |  3  2
  African American   14  18  17  19   2  |  5  6
  Asian               †
  Hispanic           18  23  26  28   3  |  6  3
  White              39  46  50  52   4  |  3  2
  Low income         15  20  23  25   3  |  4  2
  Non-low income     42  51  54  56   5  |  4  3
State(1)
  All                30  35  36  38   2
  African American   15  18  19  20   1
  Asian               —
  Hispanic            †   —   —  29   —
  White              41  47  49  50   3
  Low income         18  22  23  25   2
  Non-low income     45  52  54  56   3

HIGH
District
  All                26  31  39  32   2  |  1  3
  African American    8  11  16  11   2  |  3  3
  Asian               †
  Hispanic           14  18  25  19   2  |  3  7
  White              36  40  50  41   3  |  2  4
  Low income          9  13  21  14   2  |  3  3
  Non-low income     35  42  50  44   3  |  1  4
State(1)
  All                21  25  30  24   2
  African American    9  12  14  10   1
  Asian               —
  Hispanic            —
  White              30  35  42  34   2
  Low income          9  13  17  12   1
  Non-low income     31  37  44  37   3

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. 2 “Advanced” includes any levels above proficient. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state.

Reading Proficiency Gaps

Percentage-point gaps in reading proficiency rates between disadvantaged and advantaged groups: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change, reported for internal district gaps only. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
Internal district gap
  African American vs. White       -21  -21  -21  -22    0  |  7  5
  Hispanic vs. White               -21  -16  -16  -17    1  |  8  4
  Low income vs. non-low income    -22  -20  -19  -21    1  |  7  5
Internal district vs. internal state(1) gap
  African American vs. White        -1   -2   -1    0    0
  Hispanic vs. White                 †   -2   -2   -2    0
  Low income vs. non-low income     -2    0    0    0    1
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -22  -20  -19  -20    1
  Hispanic vs. White               -22  -15  -14  -15    2
  Low income vs. non-low income    -23  -19  -18  -19    1

MIDDLE
Internal district gap
  African American vs. White       -26  -28  -29  -29   -1  |  9  7
  Hispanic vs. White               -20  -17  -17  -17    1  |  6  4
  Low income vs. non-low income    -26  -27  -26  -26    0  |  9  5
Internal district vs. internal state(1) gap
  African American vs. White        -1   -2   -3   -3   -1
  Hispanic vs. White                 —    —    —   -1    —
  Low income vs. non-low income     -1   -1    0   -1    0
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -27  -28  -28  -28    0
  Hispanic vs. White               -21  -18  -16  -16    2
  Low income vs. non-low income    -28  -28  -26  -25    1

HIGH
Internal district gap
  African American vs. White       -39  -36  -36  -35    1  |  9  4
  Hispanic vs. White               -29  -28  -26  -25    1  |  7  6
  Low income vs. non-low income    -35  -33  -29  -34    1  |  9  5
Internal district vs. internal state(1) gap
  African American vs. White        -7   -6   -5   -5    1
  Hispanic vs. White                 —
  Low income vs. non-low income     -3   -4    0   -4    0
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -32  -30  -27  -28    2
  Hispanic vs. White               -22  -21  -18  -18    2
  Low income vs. non-low income    -31  -27  -23  -27    1

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test. 1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. NOTES: In the first four columns, negative numbers indicate an achievement gap, where the disadvantaged group performed lower than the advantaged group. (Positive numbers indicate the disadvantaged group performed higher than the advantaged group.) Negative average change values indicate the achievement gap widened; positive numbers indicate the achievement gap narrowed.

Average change values appear in color when the gap is closing; details on the definition of a gap closure and average change are found in the methodology section. Details on the calculation of decile ranks are also found in the methodology section. 2012 decile ranks appear in color when the 2012 gap is among the 30 percent (1–3) of districts with the smallest gaps in the state. Decile ranks of average change appear in color when the average change in gaps is in the top 30 percent (1–3) of the state and meets the conditions for a gap closure.
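The three gap measures are simple differences of proficiency rates. As a worked example using 2012 elementary reading rates that appear in this report (district African American 67, district White 89; state African American 65, state White 87):

```python
# 2012 elementary reading proficiency rates (percent) from this report.
district = {"african_american": 67, "white": 89}
state = {"african_american": 65, "white": 87}  # state values exclude the district

# Internal district gap: disadvantaged minus advantaged, within the district.
internal_district = district["african_american"] - district["white"]

# Internal district vs. internal state gap: the district's internal gap minus
# the state's internal gap (negative means the district's gap is wider).
internal_state = state["african_american"] - state["white"]
relative_gap = internal_district - internal_state

# External gap: district disadvantaged group vs. state advantaged group.
external = district["african_american"] - state["white"]

print(internal_district, relative_gap, external)  # -22 0 -20
```

These reproduce the 2012 African American vs. White entries in the elementary panel above (-22, 0, and -20).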

Standardized Residuals for Reading

Standardized residuals(1) for regressions of the percentage of students in the district scoring at or above proficient in reading, controlling for district poverty level: 2009–2012

SOURCE: Analysis of state test data.

1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level.

Residuals are expressed in standard units. Regressions were weighted by district size. NOTES: See below for details.

[Figure: bar chart of standardized residuals (scale −2.00 to 2.00) for elementary, middle, and high school reading, 2009–2012; values match the table below.]

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks(2) for 2012 and for average change.)

  Elementary                          -0.72  -0.38  -0.14  -0.15   0.20  |  6.00  3.00
  Middle                              -1.06  -0.90  -0.65  -0.43   0.22  |  8.00  3.00
  High                                -0.06   0.19   0.39   0.21   0.10  |  5.00  4.00
  Count of positive residuals in
    reading/total available             0/3    1/3    1/3    1/3    3/3  |  6.33  3.33
  Count of positive residuals in
    reading, mathematics, and
    science/total available             0/9    2/9    1/9    3/9    8/9  |  6.56  4.00

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability. See methodology section. 1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level. Residuals are expressed in standard units. Regressions were weighted by district size. 2 For the count of “positive residuals” rows, the decile rank is the average rank for the three education levels. NOTES: For details on the calculation of average change and decile ranks, see methodology section. Positive average change values and decile ranks in the top 30 percent (1–3) of the state appear in color.

Counts of residuals also appear in color when all available residuals are positive.
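The residuals on these pages come from regressions of district proficiency rates on district poverty, weighted by district size. A minimal sketch of that general technique, using made-up data (this is not RTI’s actual model, inputs, or standardization choices):

```python
import numpy as np

# Hypothetical district-level data: poverty rate (percent low income) and
# reading proficiency rate (percent proficient or above), one row per district.
poverty = np.array([20.0, 35.0, 46.0, 60.0, 75.0])
proficient = np.array([88.0, 80.0, 71.0, 62.0, 50.0])
enrollment = np.array([12000, 30000, 71930, 9000, 15000])  # regression weights

# Weighted least squares: fit proficient ~ poverty, weighting by district size.
w = enrollment / enrollment.sum()
X = np.column_stack([np.ones_like(poverty), poverty])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ proficient)

# Raw residuals: actual minus expected performance given poverty.
resid = proficient - X @ beta

# Standardize using the weighted residual standard deviation.
std = np.sqrt(np.average(resid**2, weights=w))
z = resid / std
# Positive z: the district performs better than expected for its poverty level.
```

A district like Greenville with high poverty but above-average proficiency would receive a positive standardized residual under this scheme.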

Mathematics Performance and Improvement at the Proficient or Above Level

Percentage of students in the district and the state(1) scoring at or above proficient in mathematics: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                76  76  79  81   2  |  3  5
  African American   58  59  62  63   2  |  5  5
  Asian               †
  Hispanic           67  70  74  74   2  |  4  4
  White              84  83  86  88   2  |  3  4
  Low income         64  65  69  70   2  |  4  5
  Non-low income     86  86  88  91   2  |  3  3
State(1)
  All                72  72  75  75   1
  African American   57  58  60  61   1
  Asian               —
  Hispanic           67  67  71  72   2
  White              83  83  84  85   1
  Low income         62  63  66  67   2
  Non-low income     86  86  87  88   1

MIDDLE
District
  All                66  67  72  72   3  |  4  3
  African American   46  46  53  51   2  |  8  5
  Asian               †
  Hispanic           58  59  68  66   3  |  8  4
  White              75  76  80  81   2  |  5  3
  Low income         50  52  60  59   3  |  7  3
  Non-low income     78  80  82  85   3  |  5  4
State(1)
  All                68  67  70  71   1
  African American   52  52  56  56   2
  Asian               —
  Hispanic            †   —   —  70   —
  White              79  78  80  81   1
  Low income         56  56  60  61   2
  Non-low income     81  81  84  85   1

HIGH
District
  All                53  54  57  57   2  |  3  4
  African American   25  27  30  32   2  |  6  3
  Asian               †
  Hispanic           45  45  49  45   0  |  6  8
  White              65  66  68  68   1  |  4  5
  Low income         32  34  39  38   2  |  6  4
  Non-low income     64  66  68  70   2  |  4  4
State(1)
  All                50  51  51  54   1
  African American   31  32  31  35   1
  Asian               —
  Hispanic            —
  White              64  64  66  66   1
  Low income         34  36  37  39   2
  Non-low income     64  65  66  68   1

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state.

Mathematics Performance and Improvement at the Advanced Level

Percentage of students in the district and the state(1) scoring at the advanced(2) level in mathematics: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                34  39  44  45   4  |  2  4
  African American   14  19  21  21   2  |  4  6
  Asian               †
  Hispanic           22  28  31  33   4  |  4  4
  White              43  48  53  55   4  |  3  4
  Low income         18  24  28  29   4  |  4  4
  Non-low income     47  54  59  61   5  |  2  4
State(1)
  All                30  35  39  38   3
  African American   14  18  21  20   2
  Asian               —
  Hispanic           22  27  33  32   3
  White              41  47  51  51   3
  Low income         18  23  27  26   3
  Non-low income     46  52  57  57   4

MIDDLE
District
  All                24  27  31  33   3  |  4  2
  African American    8  10  12  12   2  |  7  5
  Asian               †
  Hispanic           14  18  21  21   3  |  8  5
  White              32  34  40  42   4  |  4  2
  Low income         10  13  17  17   2  |  6  3
  Non-low income     34  39  44  47   4  |  4  2
State(1)
  All                26  27  30  31   2
  African American   11  12  14  15   1
  Asian               —
  Hispanic            †   —   —  27   —
  White              37  38  41  42   2
  Low income         14  16  18  19   2
  Non-low income     40  42  45  48   3

HIGH
District
  All                29  28  27  31   0  |  3  5
  African American    7   9   8  10   1  |  6  4
  Asian               †
  Hispanic           19  16  16  19   0  |  6  6
  White              39  37  36  40   0  |  3  6
  Low income         12  11  12  14   1  |  4  4
  Non-low income     39  38  37  42   1  |  4  6
State(1)
  All                26  24  23  27   0
  African American   10   9   8  11   0
  Asian               —
  Hispanic            —
  White              37  35  33  37   0
  Low income         12  12  11  14   0
  Non-low income     38  36  35  40   1

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. 2 “Advanced” includes any levels above proficient. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state.

Mathematics Proficiency Gaps

Percentage-point gaps in mathematics proficiency rates between disadvantaged and advantaged groups: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change, reported for internal district gaps only. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
Internal district gap
  African American vs. White       -25  -25  -24  -25    0  |  8  6
  Hispanic vs. White               -16  -14  -13  -14    1  |  7  6
  Low income vs. non-low income    -23  -21  -19  -21    1  |  7  7
Internal district vs. internal state(1) gap
  African American vs. White         0    0    0   -1    0
  Hispanic vs. White                -1    1    1   -1    0
  Low income vs. non-low income      2    2    2    1    0
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -25  -24  -23  -22    1
  Hispanic vs. White               -16  -13  -11  -11    2
  Low income vs. non-low income    -22  -21  -18  -18    2

MIDDLE
Internal district gap
  African American vs. White       -29  -29  -27  -30    0  |  9  7
  Hispanic vs. White               -17  -17  -11  -16    1  |  8  6
  Low income vs. non-low income    -27  -27  -22  -26    1  |  9  5
Internal district vs. internal state(1) gap
  African American vs. White        -2   -3   -2   -5   -1
  Hispanic vs. White                 —    —    —   -4    —
  Low income vs. non-low income     -2   -2    1   -3    0
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -33  -31  -27  -30    1
  Hispanic vs. White               -21  -19  -12  -15    2
  Low income vs. non-low income    -31  -29  -24  -26    2

HIGH
Internal district gap
  African American vs. White       -40  -39  -38  -36    1  |  9  4
  Hispanic vs. White               -20  -21  -19  -23   -1  |  9  7
  Low income vs. non-low income    -32  -32  -28  -33    0  |  9  5
Internal district vs. internal state(1) gap
  African American vs. White        -8   -6   -3   -5    1
  Hispanic vs. White                 —
  Low income vs. non-low income     -2   -3    1   -4    0
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -39  -37  -36  -34    2
  Hispanic vs. White               -19  -20  -17  -21    0
  Low income vs. non-low income    -32  -30  -27  -30    1

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test. 1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. NOTES: In the first four columns, negative numbers indicate an achievement gap, where the disadvantaged group performed lower than the advantaged group. (Positive numbers indicate the disadvantaged group performed higher than the advantaged group.) Negative average change values indicate the achievement gap widened; positive numbers indicate the achievement gap narrowed.

Average change values appear in color when the gap is closing; details on the definition of a gap closure and average change are found in the methodology section. Details on the calculation of decile ranks are also found in the methodology section. 2012 decile ranks appear in color when the 2012 gap is among the 30 percent (1–3) of districts with the smallest gaps in the state. Decile ranks of average change appear in color when the average change in gaps is in the top 30 percent (1–3) of the state and meets the conditions for a gap closure.

Standardized Residuals for Mathematics

Standardized residuals(1) for regressions of the percentage of students in the district scoring at or above proficient in mathematics, controlling for district poverty level: 2009–2012

SOURCE: Analysis of state test data.

1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level.

Residuals are expressed in standard units. Regressions were weighted by district size. NOTES: See below for details.

[Figure: bar chart of standardized residuals (scale −2.00 to 2.00) for elementary, middle, and high school mathematics, 2009–2012; values match the table below.]

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks(2) for 2012 and for average change.)

  Elementary                          -0.37  -0.26  -0.12   0.06   0.14  |  6.00  4.00
  Middle                              -1.25  -0.95  -0.62  -0.65   0.21  |  8.00  3.00
  High                                -0.63  -0.40  -0.29  -0.67   0.00  |  8.00  5.00
  Count of positive residuals in
    mathematics/total available         0/3    0/3    0/3    1/3    2/3  |  7.33  4.00
  Count of positive residuals in
    reading, mathematics, and
    science/total available             0/9    2/9    1/9    3/9    8/9  |  6.56  4.00

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability. See methodology section. 1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level. Residuals are expressed in standard units. Regressions were weighted by district size. 2 For the count of “positive residuals” rows, the decile rank is the average rank for the three education levels. NOTES: For details on the calculation of average change and decile ranks, see methodology section. Positive average change values and decile ranks in the top 30 percent (1–3) of the state appear in color.

Counts of residuals also appear in color when all available residuals are positive.

Science Performance and Improvement at the Proficient or Above Level

Percentage of students in the district and the state(1) scoring at or above proficient in science: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                71  71  71  75   1  |  3  6
  African American   52  50  50  55   1  |  5  6
  Asian               †
  Hispanic           59  59  60  64   2  |  5  4
  White              81  81  81  84   1  |  3  6
  Low income         58  57  58  63   2  |  4  5
  Non-low income     83  84  84  87   1  |  3  5
State(1)
  All                66  64  66  69   1
  African American   49  46  47  52   1
  Asian               —
  Hispanic            †  55  58  62   4
  White              79  78  81  82   1
  Low income         54  53  55  60   2
  Non-low income     83  82  83  85   1

MIDDLE
District
  All                68  69  71  75   2  |  4  4
  African American   51  50  50  55   1  |  6  8
  Asian               †
  Hispanic           59  60  62  67   3  |  6  6
  White              76  79  80  83   2  |  4  3
  Low income         54  56  58  62   3  |  6  5
  Non-low income     79  81  82  87   2  |  4  3
State(1)
  All                67  69  69  72   2
  African American   50  53  53  57   2
  Asian               —
  Hispanic            †   —   —  69   —
  White              79  80  81  83   1
  Low income         54  57  58  62   3
  Non-low income     82  83  84  86   1

HIGH (values not comparable across years; see NOTES)
District
  All                26  33  44  51   ‡  |  2  ‡
  African American    7  15  19  25   ‡  |  3  ‡
  Asian               ‡
  Hispanic           16  21  28  32   ‡  |  5  ‡
  White              35  41  55  63   ‡  |  3  ‡
  Low income         11  19  24  31   ‡  |  4  ‡
  Non-low income     33  42  55  63   ‡  |  2  ‡
State(1)
  All                20  24  34  41   —
  African American    7  10  15  21   —
  Asian               —
  Hispanic            —
  White              30  34  47  54   —
  Low income          9  12  19  26   —
  Non-low income     31  35  48  56   —

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state. A rolling implementation of a new science end-of-course exam affected the comparability of results at the high school level; data from 2011 and 2012 are considered comparable neither to each other nor to previous years.

2011 and 2012 data at the high school level are based on end-of-course exam results, which include some data for middle school students (less than 1 percent). Italicized values in the original report are not comparable to other years.

Science Performance and Improvement at the Advanced Level

Percentage of students in the district and the state(1) scoring at the advanced(2) level in science: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change; decile ranks are not computed for state rows. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
District
  All                19  20  21  21   1  |  3  5
  African American    6   7   6   7   0  |  4  5
  Asian               †
  Hispanic            9   9   9  11   1  |  4  5
  White              25  27  28  28   1  |  4  7
  Low income          8   9  10  10   1  |  4  5
  Non-low income     28  31  32  32   1  |  4  6
State(1)
  All                16  17  18  18   1
  African American    5   6   6   6   0
  Asian               —
  Hispanic            †  10  11  10   0
  White              24  26  28  26   1
  Low income          7   9  10  10   1
  Non-low income     27  29  31  30   1

MIDDLE
District
  All                20  25  27  31   4  |  3  3
  African American    7   9   9  12   1  |  6  7
  Asian               †
  Hispanic           12  15  17  20   3  |  7  6
  White              26  33  35  40   4  |  4  3
  Low income         10  13  15  18   3  |  5  4
  Non-low income     28  36  37  44   5  |  4  3
State(1)
  All                20  25  26  29   3
  African American    8  10  10  13   2
  Asian               —
  Hispanic            †   —   —  22   —
  White              29  35  37  40   3
  Low income         10  14  15  18   2
  Non-low income     32  39  41  45   4

HIGH (values not comparable across years; see NOTES)
District
  All                13  20  28  35   ‡  |  2  ‡
  African American    2   8   9  11   ‡  |  4  ‡
  Asian               ‡
  Hispanic            7  12  16  18   ‡  |  5  ‡
  White              19  25  37  46   ‡  |  2  ‡
  Low income          5   9  13  17   ‡  |  4  ‡
  Non-low income     18  26  38  45   ‡  |  2  ‡
State(1)
  All                10  14  20  27   —
  African American    3   4   6  10   —
  Asian               —
  Hispanic            —
  White              16  21  29  37   —
  Low income          3   6   9  14   —
  Non-low income     17  22  30  39   —

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section. ‡ Calculation could not be performed due to a change in the state test.

1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section. 2 “Advanced” includes any levels above proficient. NOTES: Details on the calculation of average change and decile ranks are found in the methodology section. Positive change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency is in the top 30 percent (1–3) of the state. A rolling implementation of a new science end-of-course exam affected the comparability of results at the high school level; data from 2011 and 2012 are considered comparable neither to each other nor to previous years.

2011 and 2012 data at the high school level are based on end-of-course exam results, which include some data for middle school students (less than 1 percent). Italicized values in the original report are not comparable to other years.

Science Proficiency Gaps

Percentage-point gaps in science proficiency rates between disadvantaged and advantaged groups: 2009–2012

(Row values: 2009, 2010, 2011, 2012, average change | decile ranks for 2012 and for average change, reported for internal district gaps only. A lone †, ‡, or — means the entire row was suppressed or unavailable.)

ELEMENTARY
Internal district gap
  African American vs. White       -29  -31  -32  -30    0  |  7  6
  Hispanic vs. White               -22  -22  -22  -20    1  |  6  5
  Low income vs. non-low income    -25  -27  -26  -25    0  |  8  7
Internal district vs. internal state(1) gap
  African American vs. White         2    1    2    1    0
  Hispanic vs. White                 †    1    1   -1   -1
  Low income vs. non-low income      3    3    2    1   -1
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -27  -29  -31  -27    0
  Hispanic vs. White               -20  -19  -21  -18    1
  Low income vs. non-low income    -25  -25  -25  -22    1

MIDDLE
Internal district gap
  African American vs. White       -26  -29  -30  -29   -1  |  9  9
  Hispanic vs. White               -17  -18  -18  -16    0  |  6  7
  Low income vs. non-low income    -25  -25  -24  -25    0  |  8  7
Internal district vs. internal state(1) gap
  African American vs. White         3   -1   -2   -2   -2
  Hispanic vs. White                 —    —    —   -2    —
  Low income vs. non-low income      2    1    1   -1   -1
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -28  -30  -30  -29    0
  Hispanic vs. White               -20  -20  -19  -16    1
  Low income vs. non-low income    -28  -27  -26  -24    1

HIGH (values not comparable across years; see NOTES)
Internal district gap
  African American vs. White       -28  -27  -36  -39    ‡  |  9  ‡
  Hispanic vs. White               -20  -20  -27  -31    ‡  |  8  ‡
  Low income vs. non-low income    -22  -23  -32  -32    ‡  |  9  ‡
Internal district vs. internal state(1) gap
  African American vs. White        -6   -2   -3   -6    —
  Hispanic vs. White                 —
  Low income vs. non-low income      0    0   -3   -2    —
External gap: district disadvantaged vs. state(1) advantaged
  African American vs. White       -23  -20  -29  -29    —
  Hispanic vs. White               -14  -13  -19  -21    —
  Low income vs. non-low income    -19  -17  -24  -25    —

SOURCE: Analysis of state test data.

— Not available.
† Data were suppressed due to unreliability or if the subgroup represented less than 5 percent of test takers at a level. See methodology section.
‡ Calculation could not be performed due to a change in the state test.
1 Unless otherwise indicated in the NOTES section below, state values exclude the district’s results; see methodology section.
NOTES: In the first four columns, negative numbers indicate an achievement gap, where the disadvantaged group performed lower than the advantaged group. (Positive numbers indicate the disadvantaged group performed higher than the advantaged group.) Negative average change values indicate the achievement gap widened; positive numbers indicate the achievement gap narrowed.

Average change values appear in color when the gap is closing; details on the definition of a gap closure and average change are found in the methodology section. Details on the calculation of decile ranks are also found in the methodology section. 2012 decile ranks appear in color when the 2012 gap is among the 30 percent (1–3) of districts with the smallest gaps in the state. Decile ranks of average change appear in color when the average change in gaps is in the top 30 percent (1–3) of the state and meets the conditions for a gap closure. A rolling implementation of a new science end-of-course exam affected the comparability of results at the high school level; data from 2011 and 2012 are considered comparable neither to each other nor to previous years.

2011 and 2012 data at the high school level are based on end-of-course exam results, which include some data for middle school students (less than 1 percent). Italicized values are not comparable to other years.

Standardized Residuals for Science
Standardized residuals1 for regressions of the percentage of students in the district scoring at or above proficient in science, controlling for district poverty level: 2009–2012
SOURCE: Analysis of state test data.

1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level.

Residuals are expressed in standard units. Regressions were weighted by district size. NOTES: See below for details.

[Chart: district standardized residuals in science at the elementary, middle, and high school levels, 2009–2012; values as in the table below]

(Columns: 2009, 2010, 2011, 2012, average change; decile ranks2 for 2012 and for average change)

Elementary                                               -0.24  -0.01  -0.26  -0.05   0.03  |  6.00  5.00
Middle                                                   -0.88  -0.77  -0.76  -0.49   0.12  |  7.00  4.00
High                                                     -0.28   0.29  -0.01   0.09   0.08  |  5.00  5.00
Count of positive residuals in science/total available     0/3    1/3    0/3    1/3    3/3  |  6.00  4.67
Count of positive residuals in reading, mathematics,
and science/total available                                0/9    2/9    1/9    3/9    8/9  |  6.56  4.00

SOURCE: Analysis of state test data.

— Not available. † Data were suppressed due to unreliability. See methodology section.
1 Positive residuals indicate higher-than-expected performance, and negative residuals indicate lower-than-expected performance, given the district’s poverty level. Residuals are expressed in standard units. Regressions were weighted by district size.
2 For the count of “positive residuals” rows, the decile rank is the average rank for the three education levels.
NOTES: For details on the calculation of average change and decile ranks, see methodology section. Positive average change values and decile ranks in the top 30 percent (1–3) of the state appear in color.

Counts of residuals also appear in color when all available residuals are positive.

High School Graduation Rates
Three estimated high school graduation rates: 2006–2009

[Chart: estimated high school graduation rates for the classes of 2006–2009 by subgroup, using the Averaged Freshman Graduation Rate, the Manhattan Institute method, and the Urban Institute method]

                                        2006   2007   2008   2009   Avg. change
Average of the three graduation rate measures
  All                                     —     57     —     66         4
  African American                        —     42     —     52         5
  Asian                                   †
  Hispanic                                —     48     —     63         8
  White                                   —     65     —     74         4
Averaged Freshman Graduation Rate
  All                                     —     61     —     69         4
  African American                        —     46     —     55         5
  Asian                                   †
  Hispanic                                —     45     —     74        14
  White                                   —     69     —     74         3
Urban Institute method1
  All                                     —     55     —     62         3
  African American                        —     40     —     47         3
  Asian                                   †
  Hispanic                                —     50     —     53         1
  White                                   —     61     —     70         5
Manhattan Institute method1
  All                                     —     56     —     68         6
  African American                        —     41     —     55         7
  Asian                                   †
  Hispanic                                †
  White                                   —     66     —     76         5

SOURCE: Analysis of data from the U.S.

Department of Education, National Center for Education Statistics, Common Core of Data (CCD). — Not available. † Data were suppressed if a subgroup represented less than 5 percent of the population or due to unreliability; rules vary by method. See methodology section.

1 The Urban Institute method is also known as Swanson’s cumulative promotion index (SCPI) and the Manhattan Institute method is also known as Greene’s graduation indicator (GGI). NOTES: Gaps in lines represent missing or suppressed data. Average of the three graduation rates is based on the average of any available values from the three individual methods. Positive change values appear in color. Details on the calculation of average change are found in the methodology section. Diploma counts for 2010 or later were not released in time for this year’s analysis. For districts that were eligible for The 2012 Broad Prize, results are generally the same as those reported last year.

Diploma counts in 2006 and 2008 were unavailable for this district.

College Readiness Data
Test scores and participation rates for college readiness examinations: 2009–2012

                                        2009    2010    2011    2012   Avg. change
SAT Reasoning Test1
 Mean total score (reading, mathematics, and writing)
  All                                  1,472   1,458   1,448   1,448      -8
  African American                     1,204   1,231   1,198   1,208      -2
  Asian                                1,577   1,612   1,617   1,594       6
  Hispanic                             1,348   1,352   1,395   1,363       9
  White                                1,561   1,534   1,527   1,530     -10
 Participation rate
  All                                     53      57      60      60       2
  African American                        42      49      51      50       3
  Asian                                    †
  Hispanic                                41      46      48      52       3
  White                                   56      59      62      61       2
ACT1
 Mean composite score (English, reading, mathematics, and science)
  All                                   21.4    21.4    21.3    21.5     0.0
  African American                      16.7    17.0    16.8    17.4     0.2
  Asian                                 23.0    24.6    25.7    23.5     0.3
  Hispanic                              19.6    19.6    20.2    19.6     0.1
  White                                 22.7    22.8    22.7    22.8     0.0
 Participation rate
  All                                     34      37      38      41       2
  African American                        24      32      33      31       2
  Asian                                    †
  Hispanic                                20      25      32      34       5
  White                                   37      38      38      41       1
Advanced Placement (AP) (all subjects)2
 Percent of tests taken with scores of 3 or above
  All                                     61      59      57      56      -2
  African American                        32      36      31      33       0
  Asian                                   72      72      66      63      -3
  Hispanic                                47      49      51      50       1
  White                                   63      61      59      58      -2
 Participation rate
  All                                     21      22      23      25       1
  African American                         7       7       6       8       0
  Asian                                    †
  Hispanic                                12      15      15      19       2
  White                                   26      27      29      30       1

SOURCE: Analysis of data from the Common Core of Data (CCD), ACT, and the College Board (copyright © 2009–2012 The College Board.

www.collegeboard.com). — Not available.

† Test scores were suppressed if fewer than 15 students took the test. Participation rates were suppressed due to unreliability or if the subgroup represented less than 5 percent of district enrollment in the relevant grades. Results for subgroups were suppressed when less than 90 percent of all test takers’ racial/ethnic identity was reported. See methodology section.
1 Describes the most recent test results for graduating seniors.
2 Describes test results for juniors and seniors taking any AP test in the given year.
NOTES: Subgroup participation rates may not reflect the “all students” rate due to some test takers not reporting their race/ethnicity.

Positive change values appear in color. Details on the calculation of average change are found in the methodology section. CCD data for 2012 were not available at the time of this analysis; participation rates for 2012 were estimated using 2011 enrollment data.

Methodology and Technical Notes

Understanding the Data Report
This data report contains all of the data collected and analyzed for this district for the purpose of selecting The Broad Prize finalists. Tables of summarized results for all the eligible districts and three former winners, which can be used for making comparisons of this district’s performance and improvement with other eligible districts, are also publicly available on the Broad Prize website at www.broadprize.org/resources/75_districts.html.

The Broad Prize finalists are determined by a panel of education experts from around the country, based on a review of the data and analyses for the 75 Broad Prize-eligible districts. Neither a strict formula nor a set of weighting factors is applied to the various analyses calculated. Broad Prize Review Board members consider all of the data and analyses available and, based on each member’s knowledge and expertise, select four finalists. The Review Board considers both performance as of the most recent year and improvement over the four most recent years on the various measures included in this report.

The rest of this section discusses the data collection and analysis procedures used to produce the data report. First, it describes the criteria and data sources for identifying the eligible districts. Second, it reviews each of the quantitative achievement measures that the Review Board used in March 2013 to identify the four finalists and the data on which the measures were based.

Eligible Districts
To be eligible for The Broad Prize, school districts must meet certain criteria set by The Broad Foundation that are related to district size, poverty, and urbanicity. Winners from the previous three years were ineligible.

The criteria for eligibility in 2013 were as follows:
• K–12 districts serving at least 42,500 students that have at least 40 percent of students eligible for free or reduced-price school lunch (FRSL), at least 40 percent of students from minority groups, and an urban designation (Locale Code 11, 12, or 21 in the CCD1) were identified. In states where more than 10 districts qualify under these criteria, only the 10 largest qualifying districts are eligible (70 districts met these criteria in 2013).
• In states with no districts meeting the above criteria, the next largest districts in the nation with at least 40 percent FRSL, at least 40 percent minority, and an urban designation were identified, in order to bring the total number of eligible districts to 75.

Only one district per state can qualify under these criteria (5 districts were included in 2013 based on these criteria).2
1 CCD locale code 11 represents a large city; code 12 represents a mid-size city; and code 21 represents a large suburb. Sable, J. (2008). Documentation to the NCES Common Core of Data Local Education Agency Universe Survey: School Year 2006–07 Version 1a (NCES 2009‑301). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.
2 These include Indianapolis, Indiana; Des Moines, Iowa; Norfolk, Virginia; St. Paul, Minnesota; and Newark, New Jersey.

For The 2013 Broad Prize, data on school district demographics obtained from the National Center for Education Statistics (NCES) Common Core of Data (CCD) for 2011 (the most recent year for which data were available) were used to determine the list of 75 eligible districts. The 75 eligible school districts were located in 31 states and the District of Columbia.3

Data Used for Measures of Student Achievement
Detailed data on various measures of student achievement were obtained for each district from federal and state records and other sources.

Wherever possible, data were collected by grade level; race/ethnicity (African American, Asian, Hispanic, and White); and income status (low income and non-low income). The achievement data examined included performance on state achievement tests, estimated graduation rates based on federal counts of high school enrollments and completions, and college readiness data obtained from the College Board and ACT.

Reading, Mathematics, and Science Proficiency as Determined by State Tests
Key indicators of student performance include scores on state-mandated achievement tests used for federal accountability and trends in these scores over time.

Proficiency data in reading, mathematics, and science were collected from each state for 2009 through 2012.4 These data were used to calculate the percentage of students in each district scoring at or above proficient levels on state-mandated tests in reading, mathematics, and science in each of grades 3 through 12 where available. Weighted by the number of test takers at each grade level, these data on student achievement were aggregated across elementary grades (3–5), middle grades (6–8), and high school grades (9–12). These state assessment data were analyzed (using methods described later) to calculate actual versus expected performance, to directly compare district performance with other districts in the same state, and to measure gaps and changes in gaps between low- and non-low-income students as well as between White and African American students and White and Hispanic students.
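The actual-versus-expected analysis mentioned above (reported as standardized residuals on pages 7, 11, and 15) can be sketched in a few lines of Python. This is an illustrative sketch, not RTI's actual code; the district data are invented, and the exact standardization step is an assumption, but the approach follows the description in this report: regress proficiency on poverty, weight by district size, and express the residuals in standard units.

```python
import numpy as np

def standardized_residuals(poverty, proficiency, enrollment):
    """Fit proficiency = a + b * poverty by weighted least squares
    (weighting districts by size) and return residuals in standard units."""
    x = np.asarray(poverty, dtype=float)
    y = np.asarray(proficiency, dtype=float)
    # np.polyfit's `w` multiplies each residual before squaring, so
    # sqrt(enrollment) yields a fit weighted by enrollment.
    w = np.sqrt(np.asarray(enrollment, dtype=float))
    slope, intercept = np.polyfit(x, y, 1, w=w)
    resid = y - (slope * x + intercept)
    # One simple standardization: divide by the residual standard deviation.
    return resid / resid.std(ddof=1)

# Five hypothetical districts: percent poverty, percent proficient, enrollment
poverty = [20, 30, 40, 50, 60]
proficient = [85, 78, 80, 65, 58]
enrollment = [50_000, 40_000, 70_000, 30_000, 20_000]
z = standardized_residuals(poverty, proficient, enrollment)
# The third district scores above its poverty-adjusted expectation (z > 0);
# the fifth scores below it (z < 0).
```

A positive value means the district outperformed what its poverty level alone would predict, which is the interpretation given in the footnotes to the residuals tables.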

Important Note Regarding State Test Data
Because states establish their own assessment and proficiency standards, districts’ performance on state tests cannot be directly compared across states. To provide context for these data, summary tables containing information about state or district performance on recent administrations of the National Assessment of Educational Progress (NAEP), the NAEP Trial Urban District Assessment (TUDA), and a Northwest Evaluation Association (NWEA) proficiency-standards mapping study (ongoing since 2006) were provided to the Review Board and are available online at www.broadprize.org/resources/75_districts.html but are not included in this data report.

3 States without eligible districts this year were Alaska, Arkansas, Connecticut, Delaware, Hawaii, Idaho, Iowa, Maine, Mississippi, Missouri, Montana, New Hampshire, North Dakota, Rhode Island, South Dakota, Utah, Vermont, West Virginia, and Wyoming. Hawaii was ineligible because it has a statewide school system.

4 The data were provided directly by state agencies or downloaded from their websites.

High School Graduation Rates
Another key measure of a district’s performance is the graduation rate. While using longitudinal student data generates the most accurate graduation rate, such data are not currently available from most states. In the absence of longitudinal data, cross-sectional data can be used to generate estimates of rates of on-time high school graduation.

There are several methods generally considered to be reliable estimators of graduation rates, three of which are used in this report and are described in the next section on methods of analysis. To generate estimates that are comparable across the 75 Broad Prize-eligible districts, RTI obtains diploma counts and enrollment data for the districts from the federal CCD. The data used to create graduation rate estimates include total and subgroup enrollments and completion counts for each district for the high school classes of 2006 through 2009 (the most recent years that were available at the time of analysis).

The different methods vary in terms of the specific years of enrollment data used in the calculations.

It should be noted that district diploma counts for 2010 were not released in time for The 2013 Broad Prize analysis. Therefore, the same range of years reported for The 2012 Broad Prize (2006 to 2009) was included for the Review Board’s review in 2013.

College Readiness Measures
Measures of students’ college readiness include participation rates and scores for the SAT and ACT as well as Advanced Placement (AP) participation and passing rates. These tests are designed to assess readiness for college-level work. Scale scores for each SAT subject (reading, writing, and mathematics) range from 200 to 800.

Scale scores for the composite ACT test (covering English, mathematics, reading, and science) range from 1 to 36. With schools’ permission, the College Board (which administers the SAT) and ACT provided mean test scores for each district for 2009 through 2012, along with the number of seniors who had taken the test (regardless of when they took the test during high school).5 Another measure of college readiness is the extent to which students take and pass AP examinations. These examinations provide a standardized measure of student performance in college-level courses taken while in high school.

AP grades are reported on a five-point scale:
5 = Extremely well qualified
4 = Well qualified
3 = Qualified (equivalent to passing)
2 = Possibly qualified
1 = No recommendation
Again, with permission from each district, the College Board provided data for the district for 2009 through 2012 on the number of AP examinations taken by juniors and seniors and the number of passing scores (3 or above). Exam passing rates were calculated using these data, with the number of exams taken used as the denominator.
5 When students had taken the test more than once, the most recent score was reported.

The College Board also provided the number of juniors and seniors who took the test. RTI staff used these numbers to calculate percentages of AP examinations with scores of 3 or above (equivalent to pass rates) for each district. The College Board and ACT do not calculate test participation rates. RTI staff calculated participation rates using enrollment data obtained from the federal CCD for 11th- and 12th-graders as appropriate, in combination with the number of students taking the tests from the relevant year as the numerator.

Because the most recent year of enrollment data available from the CCD was 2011, participation rates for 2012 were estimated using enrollment counts for 2011.
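The two rate calculations described above reduce to simple ratios. The counts in this Python sketch are hypothetical round numbers, chosen only so that the results mirror the district's reported 2012 values (a 56 percent AP pass rate and a 60 percent SAT participation rate):

```python
# AP pass rate: passing exams (score 3 or above) over exams taken
# (hypothetical counts).
ap_exams_taken = 4_800
ap_passing_scores = 2_688
ap_pass_rate = 100 * ap_passing_scores / ap_exams_taken

# Participation rate: test takers over CCD enrollment in the relevant grades.
# Per the note above, 2011 enrollment stands in for the missing 2012 counts.
sat_takers_2012 = 3_300
seniors_enrolled_2011 = 5_500
sat_participation_2012 = 100 * sat_takers_2012 / seniors_enrolled_2011
```

Note that the denominators differ by design: pass rates are per exam taken, while participation rates are per enrolled student, which is why the two measures can move independently.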

Data Analysis Methods
The Broad Prize data report presents data collected on district characteristics and background information on state tests. In addition, RTI staff analyzed the data described above on student achievement to develop measures of the following: district proficiency rates (at both the proficient or above level and the advanced level) compared with other districts in the state; achievement gaps; standardized residuals; graduation rates; and college readiness. Trend data are presented where available, as are performance and improvement measures. Each data report section is explained here, and the relevant report page numbers are indicated in parentheses.

Additional explanatory notes are included as footnotes in the data report itself.

General approaches applied to the analyses are explained directly below.

Calculating Performance and Improvement
Trend data are presented where available, as are performance and improvement measures. Performance indicators reflect the most recent year reported for any given measure. If the most recent year’s data for a performance measure were not available or were suppressed for a district, no performance indicator was reported.

Improvement indicators generally reflect average change over the four most recent years of available data for each measure. Average change was calculated as the slope of the best fit line among available data points, generally determined by regressing the relevant outcome measure on year.6 If only one data point was available, or if data were missing for both of the two most recent years, average change was not calculated. Standardized residuals were used in average change calculations, regardless of any changes in state tests from 2009 to 2012, as long as a test change was implemented statewide.

This practice was followed because of the relative nature of the measure. Standardized residuals indicate a district’s performance relative to that of districts in the state in a given year regardless of the particular test administered that year. Trends in student achievement rates, however, can be affected by year-to-year changes in state tests. Therefore proficiency rates, advanced pro- ficiency rates, or achievement gap measures in a given year were excluded from trend analyses when there were changes in state testing standards or policies.

6 When only two data points were available, the slope was equal to (X2 – X1)/(Year2 – Year1).
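The average-change rule above (slope of the best-fit line, dropping missing or non-comparable years, and requiring data in at least one of the two most recent years) might be implemented as follows. The helper name and example values are hypothetical; with exactly two points, the regression slope reduces to the two-point formula in footnote 6.

```python
import numpy as np

def average_change(values_by_year):
    """Slope of the best-fit line from regressing a measure on year.
    `None` marks a missing or non-comparable year (excluded from the fit)."""
    points = [(yr, v) for yr, v in values_by_year.items() if v is not None]
    years = {yr for yr, _ in points}
    # Not calculated from a single point, or when data are missing for
    # both of the two most recent years (2011 and 2012).
    if len(points) < 2 or not years & {2011, 2012}:
        return None
    x, y = zip(*points)
    return np.polyfit(x, y, 1)[0]  # leading coefficient = slope

# A proficiency series with 2010 excluded as non-comparable:
avg = average_change({2009: 71, 2010: None, 2011: 75, 2012: 77})
# avg is 2.0 percentage points per year
```

Excluding a non-comparable year simply removes that point from the regression rather than treating it as zero, which matches how italicized values are handled in the tables.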

In theory, districts with high initial performance levels might be expected to have lower rates of improvement. For example, if the residuals analysis reveals that a district performed consistently above expectations during all four years but did not improve, that district could still be considered consistently high-performing.

In addition, because states use different tests and different standards of proficiency, individual states may be subject to “floor” or “ceiling” effects. If proficiency levels are generally very high in a state (e.g., 90 percent), then high-performing districts within that state may not be able to show their relative achievement because their proficiency level cannot increase above 100 percent. Similarly, if state proficiency levels are very low, then the relative achievement of the higher performers may be understated because the lower performing districts within that state cannot fall below zero percent.

Calculation of Within-State Decile Rankings
Because testing standards differ from state to state, proficiency rates cannot be directly compared across states. Instead, several of the state test analyses in this report include within-state decile rankings of performance and improvement measures. These within-state decile rankings can be used to determine how an eligible district’s relative performance within its state compares with that of eligible districts in other states. The following tables provide an example of how decile rankings are generally determined for any given measure. Suppose that a district in State A had a 2012 elementary reading proficiency rate on the state’s reading assessment of 49 percent for all students.

To understand the relative standing of that performance level in the state, the proficiency rates of all districts in the state are ranked, from highest to lowest, and then divided into deciles (10 groupings). In a state with 300 school districts, there would be 30 districts in each decile; in a state with 30 school districts, there would be 3 districts in each decile. In this example, the district proficiency rate of 49 percent would fall in the 7th decile in the state (i.e., in the top 70 percent—or bottom 40 percent—compared with school districts in the state). The table below illustrates where the proficiency rate of 49 percent would fall in the distribution of proficiency rates for State A (colored orange).

STATE A PROFICIENCY RATES (2012)
Decile ranking:                                 1      2      3      4      5      6      7      8      9      10
Elementary reading proficiency rates (2012):  95–88  88–82  81–75  74–70  69–67  66–54  54–48  47–39  38–31  30–18

Because testing standards differ from state to state, a proficiency rate of 49 percent may have a very different standing in another state. Suppose that a district in State B also had a 2012 elementary reading proficiency rate of 49 percent for all students. As the table below illustrates, a proficiency rate of 49 percent would fall in the 3rd decile in State B (colored orange).

STATE B PROFICIENCY RATES (2012)
Decile ranking:                                 1      2      3      4      5      6      7      8      9      10
Elementary reading proficiency rates (2012):  65–59  58–52  51–48  47–44  43–41  40–37  36–32  31–26  26–18  17–12

Based on the examples above, even though both districts had the same absolute proficiency rate, the eligible district in State B was performing better than the one in State A based on its standing relative to the other districts in its state.
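The ranking procedure behind these examples can be sketched in a few lines of Python. The function and the district rates below are illustrative only; ties and unevenly divisible district counts are handled naively here, not necessarily as in the official methodology:

```python
import math

def decile_rank(district_value, state_values):
    """1 = top decile. Rank all districts in the state from highest to
    lowest, then split the ranking into 10 equal groups."""
    ranked = sorted(state_values, reverse=True)
    position = ranked.index(district_value) + 1   # 1-based rank from the top
    return math.ceil(10 * position / len(ranked))

# 20 hypothetical districts in a small state (2 districts per decile)
rates = [95, 91, 88, 84, 81, 77, 74, 71, 69, 67,
         66, 60, 54, 49, 47, 41, 38, 33, 30, 18]
decile_rank(49, rates)   # 7: a 49-percent district falls in the 7th decile
decile_rank(95, rates)   # 1: the top district falls in the 1st decile
```

As in the State A and State B tables, the same absolute rate can land in very different deciles depending on the distribution of the other districts in the state.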

Decile rankings were applied to several different types of measures, as explained below.

Background Information (page 2)
Description of District: 2008–11
This section presents background information on the district. Data in the table, generally obtained from the CCD, are included to provide contextual information about the district. The data shown in the table are not directly used in the analyses of student achievement conducted for The Broad Prize but are provided to give some context to the reader. Demographic percentages were calculated using enrollment counts. The non-White percentages were calculated as the sum of non-White enrollments divided by the total district enrollment.

Percent non-White may not equal 100 minus the percentage of Whites due to missing racial/ethnic data in some districts. The information in the table is organized as follows:
First column: Lists the district characteristics, student characteristics, and types of expenditures shown.
Remaining columns: List data for each year for which data were available (2008, 2009, 2010, and 2011).

State Test Information: 2009–12
Key indicators of student performance include scores on state-mandated achievement tests and trends in scores over time. The state test information shows the tests and grades that were included in The 2013 Broad Prize analysis.

The table notes indicate whether any tests were not comparable with those for other years and may provide additional information. In some cases, when test changes were made to some but not all grades within an education level (elementary, middle, and high school), some grade-level data were excluded from the education level results in order to maximize the number of years included in trend analyses. Non-comparable tests were not included in calculations of average change on pages 4–6, 8–10, and 12–14. Because of the relative nature of standardized residuals, however, data for all tests were included in calculations of average change on pages 7, 11, and 15.

Generally, test changes affect both the proficient or above level and the advanced level, but exceptions are noted in the footnotes on pages 5, 9, and 13, which provide detail on performance and improvement on state tests at the advanced proficiency level.

The information in the table is organized as follows:
First column: Lists the subject (reading, mathematics, and science) and level (elementary, middle, and high school).
Second column: Specifies the test.
Remaining columns: Specify the grades included in trend calculations for 2009, 2010, 2011, and 2012.

Trends in Proficiency Rates (page 3)
Test score data in reading, mathematics, and science were collected from each state for 2009 through 2012.

These data were used to calculate the percentage of students scoring at or above the proficient level on their state tests in reading, mathematics, and science in each grade. Weighted by the number of test takers at each grade level, these data on student achievement were aggregated across elementary grades (3–5), middle grades (6–8), and high school grades (9–12) where available.
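The grade-to-level aggregation just described is a weighted average of grade-level rates. A minimal Python sketch, with hypothetical grade-level counts:

```python
def level_rate(grade_results):
    """Aggregate grade-level proficiency into one education-level rate,
    weighting each grade by its number of test takers."""
    total_takers = sum(n for _, n in grade_results)
    return sum(pct * n for pct, n in grade_results) / total_takers

# Hypothetical elementary grades (3-5): (percent proficient, test takers)
elementary = [(72.0, 5_400), (70.0, 5_300), (68.0, 5_300)]
level_rate(elementary)   # 70.0125, slightly above the unweighted mean of 70.0
```

Weighting by test takers rather than averaging the three grade rates directly keeps a large grade cohort from counting the same as a small one.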

District and state trends in proficiency are shown for all students in reading, mathematics, and science (page 3). The state-level proficiency rates in these analyses generally excluded the district’s results. That is, unless otherwise indicated, this district’s proficiency rates were removed from state averages to produce rest-of-state proficiency rates for comparison purposes. This approach was particularly important in cases where very large eligible districts enrolled a significant proportion of the state population and would otherwise have been compared largely with their own data. In states with multiple eligible districts, the state proficiency rates will vary, because each district was compared separately with all other districts in the state except itself.

Non-comparable test data are not included in trend lines. Six different trend charts, with data for 2009, 2010, 2011, and 2012, are shown as follows:
Left side: District reading, mathematics, and science proficiency trend lines for all students at the elementary, middle, and high school levels.
Right side: State reading, mathematics, and science proficiency trend lines for all students at the elementary, middle, and high school levels.

Performance and Improvement at the Proficient or Above Level (pages 4, 8, and 12)
Percentages of students scoring at or above the proficient level on the state tests between 2009 and 2012 are shown for reading on page 4, for mathematics on page 8, and for science on page 12 for both the district and the state.

The tables also show calculations of improvement over time. Improvement or average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012. The slope was generally determined by regressing the available proficiency rates on year. If only one data point was available, or if data were missing for both 2011 and 2012, average change was not calculated.

Finally, the tables also show the decile rankings of both the 2012 proficiency rates and the average change in proficiency rates between 2009 and 2012, relative to all other districts in the state. Decile ranks of the percentage of students performing at the proficient or above level in 2012 and of the average change in proficiency rates between 2009 and 2012 were computed for each subgroup included in the table.

Data could be missing either because they were not available (indicated by “—”) or because they were suppressed.

Data were suppressed due to unreliability or if a subgroup represented less than 5 percent of the test takers in a subject at a level (elementary, middle, or high school). Data that were not comparable with other years, due, for example, to changes in the state test as described above, appear in italics and were excluded from average change calculations. (In some cases, too few years of data were comparable for average change to be calculated; the missing result in the average change column and the decile rank of the average change were indicated by “‡”.) Calculations were performed on unrounded numbers.

Positive average change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in proficiency was in the top 30 percent (decile ranks of 1–3) in the state.

Reading, Mathematics, and Science Proficiency Data Summaries: 2009–12 (pages 4, 8, and 12)
The information in the tables is organized as follows:
First column: Subgroups are specified for the district and rest of state for each of the three levels (elementary, middle, and high school).
Second column: Proficiency rates are specified for the 2009 academic year.
Third column: Proficiency rates are specified for the 2010 academic year.
Fourth column: Proficiency rates are specified for the 2011 academic year.
Fifth column: Proficiency rates are specified for the 2012 academic year.
Sixth column: The average change value is shown.

Average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012.

Seventh column: The decile rank of the 2012 proficiency rate is shown.
Eighth column: The decile rank of the average change in proficiency rates between 2009 and 2012 is shown.

Performance and Improvement at the Advanced Level (pages 5, 9, and 13)
Percentages of students scoring at the advanced level7 on the state tests between 2009 and 2012 are shown for reading on page 5, for mathematics on page 9, and for science on page 13 for both the district and the state. As indicated above, the state-level advanced proficiency rates in these analyses generally excluded the district’s results. That is, unless otherwise indicated, the district’s advanced proficiency rates were removed from state averages to produce rest-of-state advanced proficiency rates for comparison purposes.

7 The “advanced” level was defined as the combination of any performance levels above “proficient” on a state’s test.

The tables also show calculations of improvement over time. Improvement or average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012. The slope was generally determined by regressing the available advanced proficiency rates on year. If only one data point was available, or if data were missing for both 2011 and 2012, average change was not calculated.

Finally, the tables also show the decile rankings of both the 2012 advanced proficiency rates and the average change in advanced proficiency rates between 2009 and 2012, relative to all other districts in the state. Decile ranks of the percentage of students performing at the advanced level in 2012 and of the average change in advanced proficiency rates between 2009 and 2012 were computed for each of the subgroups included in the table. Data could be missing either because they were not available (indicated by “—”) or because they were suppressed. Data were suppressed due to unreliability or if a subgroup represented less than 5 percent of the test takers in a subject at a level (elementary, middle, or high school).

Data that were not comparable with those for other years, due, for example, to changes in the state test as described above, appear in italics and were excluded from average change calculations. (If only one data point was available, or if data were missing for both 2011 and 2012, average change was not calculated, and the missing result in the average change column and the decile rank of the average change were indicated by “‡”.) Calculations were performed on unrounded numbers. Positive average change values appear in color. Decile ranks appear in color when the district’s 2012 performance or average change in advanced proficiency is in the top 30 percent (decile ranks of 1–3) in the state.

Reading, Mathematics, and Science Advanced Proficiency Data Summaries: 2009–12 (pages 5, 9, and 13)
The information in the tables is organized as follows:
First column: Subgroups are specified for the district and rest of state for each of the three levels (elementary, middle, and high school).
Second column: Advanced proficiency rates are specified for the 2009 academic year.
Third column: Advanced proficiency rates are specified for the 2010 academic year.
Fourth column: Advanced proficiency rates are specified for the 2011 academic year.
Fifth column: Advanced proficiency rates are specified for the 2012 academic year.

Sixth column: The average change value is shown. Average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012.

Seventh column: The decile rank of the 2012 advanced proficiency rate is shown. Eighth column: The decile rank of the average change in advanced proficiency rates between 2009 and 2012 is shown.

Proficiency Gaps (pages 6, 10, and 14)
Measures of gap closures are shown for reading on page 6, for mathematics on page 10, and for science on page 14. Two types of comparisons were made when calculating achievement gaps:
Racial/Ethnic Gaps: These compared the performance of African American and Hispanic students with that of White students.

Income Gaps: These compared the performance of low-income students with that of non-low-income students.

Three types of gaps were measured:
Internal District Gap
This measure calculates the gap in performance between a district’s disadvantaged group and the district’s advantaged group. Some caution must be used in comparing internal gaps across districts because these comparisons may be distorted by the following factors:
• The relative absence of an advantaged group in some districts (e.g., few White or few non-low-income students). To address this issue, internal gaps were not calculated in districts where either of the groups being compared represented less than 5 percent of the district’s test-takers in a given subject and at a given level.

• Differences between districts in the composition of the “advantaged” or “disadvantaged” groups (e.g., high-income Whites in one district and moderate-income Whites in another).

• Higher than average performance or improvement by the advantaged group in some districts and lower than average performance or improvement by the advantaged group in others (which could cause districts with lower-performing advantaged students to appear to be doing a better job of “closing the gap”).
• Ceiling or floor effects, which can distort the comparison of gaps across states.

Gaps are represented by negative numbers, and the closing of such gaps is represented by positive numbers. For example, if a district’s African American students perform 30 percentage points below the district’s White students, this gap is represented by –30.

If the gap closes to –10 in subsequent years, then the gap closure measure is the later year’s gap minus the earlier year’s gap (–10 minus –30 equals +20), meaning that the gap between African American and White students has closed by 20 percentage points.
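The sign conventions above reduce to two subtractions. A minimal sketch follows; the function names and the proficiency rates are illustrative, chosen to reproduce the –30 to –10 example in the text.

```python
def gap(disadvantaged_rate, advantaged_rate):
    """Achievement gap: disadvantaged minus advantaged proficiency.
    A disadvantaged group trailing by 30 points yields -30."""
    return disadvantaged_rate - advantaged_rate

def gap_closure(earlier_gap, later_gap):
    """Gap closure: the later year's gap minus the earlier year's gap.
    Positive values mean the gap has narrowed."""
    return later_gap - earlier_gap

gap_2009 = gap(40.0, 70.0)        # -30
gap_2012 = gap(65.0, 75.0)        # -10
gap_closure(gap_2009, gap_2012)   # -10 - (-30) = +20
```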

Internal District versus Internal State Gap
This measure corresponds to the district’s internal gap minus the state’s internal gap. The district’s internal gap is defined as the performance of the district’s disadvantaged group minus the performance of the district’s advantaged group. The state’s internal gap is defined as the performance of the state’s disadvantaged group minus the performance of the state’s advantaged group. As described above, the state internal gaps against which district internal gaps were compared generally excluded the district’s results.

That is, unless otherwise indicated, district proficiency rates were removed from state averages to produce rest-of-state values for comparison purposes. Positive numbers indicate that the district outperformed the state on the measure. For example, if the district’s Hispanic students are performing 10 percentage points below the district’s White students, but the state’s Hispanic students are performing 15 percentage points below the state’s White students, then the internal district gap is 5 percentage points smaller than the internal state gap.

By similar reasoning, a positive change in this measure over time for Hispanic students would indicate that the district’s Hispanic students are improving faster relative to the district’s White students than the state’s Hispanic students are improving relative to the state’s White students.

External Gap: District Disadvantaged versus State Advantaged
This measure was used to compare the performance of the district’s disadvantaged group with that of the state’s advantaged group. Thus, if 30 percent of District A’s Hispanic students and 50 percent of the state’s White students are proficient on the state test, District A’s external gap for Hispanic students is 30 percent minus 50 percent (or –20 percentage points).

As described above, the state advantaged-group proficiency rates against which the district’s disadvantaged groups were compared generally excluded the district’s results. That is, unless otherwise indicated, the district’s proficiency rates were removed from state averages to produce rest-of-state values for comparison purposes. Note that comparing two districts’ external gaps in the same state is virtually the same as comparing the performance of their disadvantaged groups, except that the state advantaged proficiency rate against which each district’s disadvantaged group was compared was not exactly the same for each district.
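Both comparative gap measures follow directly from the definitions above. This sketch uses hypothetical proficiency rates chosen to reproduce the –10 vs. –15 internal comparison and the 30-vs.-50 external example in the text; the function names are illustrative.

```python
def internal_vs_state_gap(district_disadv, district_adv, state_disadv, state_adv):
    """Internal district gap minus internal (rest-of-state) gap.
    Positive values mean the district's internal gap is smaller."""
    return (district_disadv - district_adv) - (state_disadv - state_adv)

def external_gap(district_disadv, state_adv):
    """District disadvantaged group compared with the state's advantaged group."""
    return district_disadv - state_adv

# District Hispanic students 10 points behind district White students (-10),
# state Hispanic students 15 points behind state White students (-15):
internal_vs_state_gap(60.0, 70.0, 55.0, 70.0)  # +5
# 30 percent district Hispanic proficiency vs. 50 percent state White proficiency:
external_gap(30.0, 50.0)                       # -20
```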

External gap statistics are generally negative numbers, but improvements in external gaps (improvements in the performance of the district’s disadvantaged students relative to the state’s advantaged students) are shown as positive numbers. The tables also show calculations of improvement (i.e., narrowing of gaps) over time. Improvement or average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012. If only one data point was available, or if data were missing for both 2011 and 2012, average change was not calculated. Data could be missing either because they were not available (indicated by “—”) or because they were suppressed.

Data were suppressed if a subgroup represented less than 5 percent of the test-takers in a subject at a level (elementary, middle, or high school) or if the data were unreliable. Data that were not comparable with those for other years, due, for example, to changes in the state test as described above, appear in italics and were not included in calculations of average change (and the missing result is indicated by “‡”). Calculations were performed on unrounded numbers.

Definitions of Gap Closures
An internal district gap was considered to be closing if the district’s disadvantaged group proficiency was increasing, the district’s advantaged group proficiency was steady or increasing, and the disadvantaged group’s proficiency was increasing at a faster rate than the advantaged group’s.

An internal district vs. internal state gap was considered to be closing if the district’s disadvantaged group proficiency was increasing, the district’s advantaged group proficiency was either steady or increasing, and the internal district gap was closing at a faster rate than the state internal gap.

An external gap was considered to be closing if the district’s disadvantaged group proficiency was increasing at a faster rate than the state’s advantaged group proficiency. When a gap is considered to be closing, the average change value appears in color.

Definitions of Smallest and Fastest-Closing Gaps
To identify districts with the smallest gaps and those gaps that are narrowing at the fastest pace within a state, decile ranks based on gap magnitudes and average change in gap magnitudes for all districts in a state were computed.

Decile ranks could only be calculated for internal district gaps and ranged from 1 for the smallest or fastest-closing gaps to 10 for the largest or least-closing gaps in a state. When the decile rank of the 2012 gap is 1, 2, or 3, the gap is considered to be “small,” and the decile rank appears in color. When the average change value appears in color and the decile rank of the average change is 1, 2, or 3, the gap is considered to be among the “fastest-closing” in the state, and the decile rank appears in color.
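One way to compute within-state decile ranks consistent with the description above is sketched here. The report does not specify its binning or tie-breaking rule, so the ceiling-based assignment below is an assumption, as are the function and parameter names.

```python
import math

def decile_ranks(values, best="highest"):
    """Assign each district a decile rank from 1 (best) to 10 (worst).

    best="highest" treats larger values as better (proficiency rates,
    standardized residuals); best="smallest" treats values closest to zero
    as better (gap magnitudes, which are negative numbers).
    Tie handling follows sort order, a simplifying assumption.
    """
    n = len(values)
    if best == "smallest":
        order = sorted(range(n), key=lambda i: abs(values[i]))
    else:
        order = sorted(range(n), key=lambda i: -values[i])
    ranks = [0] * n
    for position, i in enumerate(order):
        ranks[i] = math.ceil(10 * (position + 1) / n)
    return ranks

# Ten districts: the highest proficiency rate lands in decile 1,
# the lowest in decile 10.
decile_ranks([55, 72, 61, 48, 90, 66, 58, 77, 69, 52])
```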

Important Note Regarding Achievement Gap Data
Caution must be used when looking at gap measures for districts across states because the three gap measures are not standardized and are even more vulnerable than standardized measures to ceiling and floor effects.

Reading, Mathematics, and Science Proficiency Gaps: 2009–12 (pages 6, 10, and 14)
The information in the tables is organized as follows:
First column: The internal district gap, internal district vs. internal state gap, and external gap are specified with regard to comparing the disadvantaged vs. advantaged groups (African American vs. White, Hispanic vs. White, and low-income vs. non-low-income students) at each of the three levels (elementary, middle, and high school).
Second column: Gaps are specified for the 2009 academic year.
Third column: Gaps are specified for the 2010 academic year.

Fourth column: Gaps are specified for the 2011 academic year.
Fifth column: Gaps are specified for the 2012 academic year.
Sixth column: The average change calculation is shown. Average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012.
Seventh column: The within-state decile rank of the 2012 internal district gap is shown. Decile ranks range from 1 for the smallest gaps in the state to 10 for the largest.

Eighth column: The within-state decile rank of the average change in the internal district gap from 2009 to 2012 is shown. Decile ranks range from 1 for the fastest-closing gaps in the state to 10 for those closing most slowly.

Standardized Residuals for Reading, Mathematics, and Science (pages 7, 11, and 15)
Standardized residuals in reading, mathematics, and science at the elementary, middle, and high school levels are shown on pages 7, 11, and 15. An ordinary least squares (OLS) regression analysis was conducted to determine the extent to which each Broad Prize-eligible district performed better or worse than other districts in its state given the district’s percentage of low-income students.

Specifically, the dependent variable in the regression analysis was the percentage of test takers in a district who were at the proficient or above level on the state test. The independent variable was the percentage of test takers at the relevant level in the district who were low income. The regressions were weighted by district size, as measured by enrollment. This approach gives greater weight in the regressions to larger districts and avoids possible undue influence of very small districts on the regression results. A separate regression was run for each year of data and each subject (reading, mathematics, and science) for each level (elementary, middle, and high school) within each state.

For each district, the expected or predicted proficiency level based on the regression was calculated. The difference between the district’s actual percentage of students who scored at or above the proficient level and the predicted or expected value is the residual. A positive residual indicates that the district is performing better than expected on the state test given the percentage of low-income students taking the test, while a negative residual indicates lower-than-expected performance. It should be emphasized that residuals are relative performance measures. A district’s performance was assessed relative to that of other districts in the state, not in absolute terms.

Some states changed tests over the period under review, and tests differed from state to state. Consequently, the interpretation of residuals varies. To allow for year-to-year comparisons, separate regressions for each year of data were run, and then the average change in each district’s residuals over the last four years was calculated. Because the residuals are a measure of relative performance in each year, the average change in a district’s residuals reflects the change in its relative standing. In addition, in order to have a measure with greater comparability, The Broad Prize methodology uses standardized residuals. A district’s standardized residual is calculated by dividing its residual by the standard deviation of all residuals from the state regression. Standardizing the residuals and measuring the average change in residuals over time mitigates the impact of test differences on the results.

As an example, a district in State A may have a residual in elementary reading of 5.7, meaning that its proficiency rate was 5.7 percentage points above the expected level given the district’s percentage of low-income students.

At the same time, a district in State B may also have the same residual value in elementary reading. The assessment of how well each district is performing, however, may not be the same even though both have the same residual. If the majority of districts in State A are within 6 percentage points of the expected performance level, while the majority of districts in State B are within 2 percentage points of the expected level, then the district from State B is performing much better compared with its peers than the district from State A is performing compared with its peers. Standardizing the residuals helps account for differences in variability across states.
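The regression-and-standardization procedure described above can be sketched as follows. This is an illustrative reconstruction: the enrollment-weighted least-squares formulas are standard, but the report does not say whether the standard deviation of the residuals is itself enrollment-weighted, so the unweighted version here is an assumption, as is the function name.

```python
def standardized_residuals(pct_low_income, pct_proficient, enrollment):
    """One state-level regression for one subject, level, and year.

    Fits proficiency on percent low-income by enrollment-weighted least
    squares, takes each district's residual (actual minus predicted
    proficiency), and divides by the standard deviation of all residuals.
    """
    x, y, w = pct_low_income, pct_proficient, enrollment
    total_w = sum(w)
    mean_x = sum(wi * xi for wi, xi in zip(w, x)) / total_w
    mean_y = sum(wi * yi for wi, yi in zip(w, y)) / total_w
    slope = (sum(wi * (xi - mean_x) * (yi - mean_y) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - mean_x) ** 2 for wi, xi in zip(w, x)))
    intercept = mean_y - slope * mean_x
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    mean_r = sum(residuals) / len(residuals)
    sd = (sum((r - mean_r) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return [r / sd for r in residuals]
```

A district with a positive standardized residual performed above the expectation implied by its poverty level, measured in state-specific standard-deviation units, which is what makes the values comparable across states with different tests.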

Caution must be used in comparing standardized residuals across states. For example, a district that performs above average in a state that ranks below the national average on NAEP may be performing no better than a district that performs below average in a state that ranks above the NAEP national average. Separate residuals were calculated for each subject (reading, mathematics, and science); level (elementary, middle, and high school); and year (2009, 2010, 2011, and 2012). The table on the lower half of each page shows the standardized residual values in reading, mathematics, or science at the elementary, middle, and high school levels, as well as average change values and decile ranks.

Improvement or average change in residuals was calculated as the slope of the best fit line among the available data points for 2009 through 2012. The slope was generally determined by regressing the available standardized residuals on year. If only one data point was available, or if residuals were missing for both 2011 and 2012, average change was not calculated. Data could be missing either because they were not available (indicated by “—”) or because they were suppressed. Residuals were suppressed if they were deemed unreliable or if state reporting standards changed such that the number of districts included in the regression model was substantially lower than in previous years, rendering the results not reliable for inclusion in trends.

Positive average change values appear in color.

For purposes of comparing the magnitude of standardized residuals across states, decile ranks based on standardized residuals for all districts in a state regression were computed for each year. Decile ranks were calculated separately by level (elementary, middle, and high school) and subject. Decile ranks range from 1 for the largest standardized residuals in the state to 10 for the smallest standardized residuals. The decile ranks of the average change in residuals between 2009 and 2012 were also computed. Under the “decile rank” columns for the three education levels, the decile ranks for 2012 standardized residuals and the average change in residuals are presented.

Decile ranks of 1 to 3 are shown in color. The last two rows of the table on the lower half of each page show the count of positive residuals for reading, mathematics, or science and the count of available measures for each subject, as well as the count of positive residuals and the count of available residual measures for all subjects (reading, mathematics, and science) combined. In addition, under the “decile ranks” column for the three education levels, the column on the left shows the average of the 2012 decile ranks for the three education levels, and the column on the right shows the average of the decile ranks of the average change in residuals between 2009 and 2012.

Important Note Regarding the Comparison of Residuals for Different Districts
The analysis provides information on both performance and improvement. In theory, districts with high initial performance levels might be expected to have lower levels of improvement. A district that performed consistently above expectations during all four years, but did not improve, could still be considered consistently high performing. In addition, because states use different tests and standards of proficiency, individual states may be subject to “floor” or “ceiling” effects. If proficiency levels are generally very high in a state (e.g., near 90 percent), then high-performing districts may not be able to show their relative achievement because their proficiency level cannot increase above 100 percent.

Similarly, if state proficiency levels are very low, then the relative achievement of the higher performers may be understated because the lower-performing districts cannot fall below zero percent.

Standardized Residuals Data for Reading, Mathematics, and Science: 2009–12 (pages 7, 11, and 15)
The upper half of each page shows three trend bar charts, with standardized residuals for reading, mathematics, or science for all students in 2009, 2010, 2011, and 2012 at the elementary, middle, and high school levels. (Information for reading appears on page 7, information for mathematics appears on page 11, and information for science appears on page 15.) The table on the lower half of each page is organized as follows:
First column: Standardized residuals for each subject are specified for the district at each of the three levels (elementary, middle, and high school) for all students.

The table also shows the count of positive residuals and the count of available residual measures for all students in each subject (reading, mathematics, and science) and the combined counts of positive residuals and the count of available residual measures across all three subjects (reading, mathematics, and science).

Second column: Standardized residuals are specified for the 2009 academic year.
Third column: Standardized residuals are specified for the 2010 academic year.
Fourth column: Standardized residuals are specified for the 2011 academic year.
Fifth column: Standardized residuals are specified for the 2012 academic year.
Sixth column: The average change calculation is shown. Average change was calculated as the slope of the best fit line among the available data points for 2009 through 2012.

Seventh column: The decile rank of the 2012 residual value is shown. For the “count of positive residuals” rows, the decile rank is the average rank for the three education levels.

Eighth column: The decile rank of the average change in residual values from 2009 to 2012 is shown. For the “count of positive residuals” rows, the decile rank is the average rank for the three education levels.

High School Graduation Rates (page 16)
Three methods were used to calculate high school graduation rates, and all are considered reliable estimates of graduation rates in the absence of longitudinal student-level data.8 While using longitudinal data generates the most accurate estimates of graduation rates, such information is not currently available in most states. Federal CCD data on enrollments and completions (as described above) were used to generate the graduation rate estimates.

While each method uses CCD diploma counts for the graduating class in a given year, the methods rely on different years of enrollment data and, therefore, generate somewhat different results. Further descriptions of the individual methods are provided below. The average of the three methods is reported as well. Trend lines as well as specific graduation rates are shown for 2006 through 2009. Improvement or average change was calculated as the slope of the best fit line among the available data points for 2006 through 2009. If only one data point was available, or if data were missing for both 2008 and 2009, average change was not calculated.

Data could be missing either because they were not available (indicated by “—”) or because they were suppressed. Graduation rates were suppressed if they were deemed unreliable or if a subgroup represented less than 5 percent of the district enrollment. Calculations were performed on unrounded numbers. Positive average change values appear in color.

The three methods used to calculate high school graduation rates are as follows:
1. The Averaged Freshman Graduation Rate (AFGR)
2. Urban Institute method (a.k.a. Cumulative Promotion Index or CPI)
3. Manhattan Institute method (a.k.a. Greene’s Graduation Indicator or GGI)
The methodology for each of these is briefly explained below.

8 State education agencies may use different methods to calculate the graduation rates they report for federal accountability purposes; the graduation rates presented here may not match state-published rates. The three methods used for The Broad Prize provide comparable measures across the 75 eligible districts that are located in 31 states and the District of Columbia.

Averaged Freshman Graduation Rate (AFGR)
This method divides the number of students graduating in Year y by an average of the 8th-grade enrollment in Year y – 4, 9th-grade enrollment in Year y – 3, and 10th-grade enrollment in Year y – 2:

Graduation Rate = G(y) / { [S(8, y–4) + S(9, y–3) + S(10, y–2)] / 3 }

Where:
G = Number of graduates receiving a regular diploma
y = School year
S(grade, year) = Number of students in a specified grade in a specified year
Denominator = Smoothed estimator for first-time 9th-grade enrollment

Urban Institute Graduation Rate (Cumulative Promotion Index or CPI)
Also known as Swanson’s Cumulative Promotion Index (SCPI), this method assumes that graduation is a process composed of three grade-to-grade promotion transitions (9th to 10th, 10th to 11th, and 11th to 12th) in addition to the graduation event (grade 12 to diploma).

Each transition is calculated as a probability, dividing the enrollment of the following year by the enrollment of the current year for the grade in question. These separate probabilities are then multiplied to produce the probability that a student in that school system will graduate within four years of entering 9th grade.

Graduation Rate = [S(10, y+1) / S(9, y)] × [S(11, y+1) / S(10, y)] × [S(12, y+1) / S(11, y)] × [G(y) / S(12, y)]

Where:
S(grade, year) = Number of students in a specified grade in a specified year
y = School year
G = Number of graduates receiving a regular diploma

Manhattan Institute Graduation Rate (Greene’s Graduation Indicator or GGI)
The number of students who receive a diploma is divided by the product of a measure of high school population change over time and an estimate of the number of first-time 9th-graders. The population change quantity adjusts for enrollment variability due to student mobility among districts and states rather than dropping out.

Graduation Rate = G(y) / ( { [S(9, y) + S(10, y) + S(11, y) + S(12, y)] / [S(9, y–3) + S(10, y–3) + S(11, y–3) + S(12, y–3)] } × { [S(8, y–4) + S(9, y–3) + S(10, y–2)] / 3 } )

Where:
G = Number of graduates receiving a regular diploma
y = School year
S(grade, year) = Number of students in a specified grade in a specified year
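The three estimators can be sketched directly from the definitions above. The function and parameter names are illustrative; S values are grade enrollments, G is the diploma count, and the cohort numbers in the usage example are hypothetical.

```python
def afgr(graduates_y, s8_y4, s9_y3, s10_y2):
    """Averaged Freshman Graduation Rate: diplomas in year y over the
    average of 8th-, 9th-, and 10th-grade enrollment in years y-4, y-3,
    and y-2 (a smoothed estimate of first-time 9th-graders)."""
    return graduates_y / ((s8_y4 + s9_y3 + s10_y2) / 3)

def cpi(s9_y, s10_y, s11_y, s12_y, s10_y1, s11_y1, s12_y1, graduates_y):
    """Cumulative Promotion Index: the product of three grade-to-grade
    promotion probabilities and the grade-12-to-diploma probability."""
    return (s10_y1 / s9_y) * (s11_y1 / s10_y) * (s12_y1 / s11_y) * (graduates_y / s12_y)

def ggi(graduates_y, s8_y4, s9_y3, s10_y2, hs_enroll_y, hs_enroll_y3):
    """Greene's Graduation Indicator: diplomas over the product of the
    smoothed 9th-grade estimate and a high school population-change
    adjustment (total grade 9-12 enrollment in year y over year y-3)."""
    smoothed_ninth = (s8_y4 + s9_y3 + s10_y2) / 3
    population_change = hs_enroll_y / hs_enroll_y3
    return graduates_y / (smoothed_ninth * population_change)

# 900 diplomas against a steady cohort of 1,000 gives a 90 percent AFGR;
# four 90-percent transitions compound to about 66 percent under the CPI.
afgr(900, 1000, 1000, 1000)                       # 0.9
cpi(1000, 1000, 1000, 1000, 900, 900, 900, 900)   # 0.6561
```

The contrast between the two usage-example results illustrates why the methods can disagree: the CPI compounds attrition across four separate transitions, while the AFGR compares a single diploma count with a single smoothed cohort.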

In a recent NCES study,9 it was reported that when calculating a statewide graduation rate, the Averaged Freshman Graduation Rate came closest to approximating a longitudinal graduation rate. The three methodologies sometimes lead to very different results because each uses different types of data from different years. All three have strengths and weaknesses but are considered acceptable methodologies. It should be remembered that all three are estimates of the true longitudinal graduation rate.

The smaller the district, state, or student group being analyzed, the less precisely the three graduation rates estimate the true longitudinal rate.

Estimated high school graduation rates table: 2006–09 (page 16)
In the upper half of the page, three different trend charts, with data for 2006, 2007, 2008, and 2009, are shown for each of the three different graduation rates for all students and for African American, Asian, Hispanic, and White students.

In the lower half of the page, the information in the table is organized as follows:
First column: Results for the three graduation rate methods (Averaged Freshman Graduation Rate, Urban Institute method, and Manhattan Institute method) are specified, first averaged across the three methods and then individually for each method, for all students and for African American, Asian, Hispanic, and White students.
Second column: Graduation rates are specified for the 2006 academic year.
Third column: Graduation rates are specified for the 2007 academic year.
Fourth column: Graduation rates are specified for the 2008 academic year.

Fifth column: Graduation rates are specified for the 2009 academic year.
Sixth column: The average change calculation is shown. Average change was calculated as the slope of the best-fit line through the available data points for 2006 through 2009. The rows for the average of the three graduation rate measures, however, show the average of the three individual average change calculations.
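The average change calculation described above, the slope of the best-fit line through the available yearly values, reduces to an ordinary least-squares slope. This is a sketch; the function name is illustrative, and missing values are assumed to be represented as `None`:

```python
# OLS slope through the available (year, value) points: the "average
# change" statistic. Points with missing values (None) are skipped.

def average_change(points):
    pts = [(x, v) for x, v in points if v is not None]
    if len(pts) < 2:
        return None  # not enough data to fit a line
    n = len(pts)
    mean_x = sum(x for x, _ in pts) / n
    mean_v = sum(v for _, v in pts) / n
    num = sum((x - mean_x) * (v - mean_v) for x, v in pts)
    den = sum((x - mean_x) ** 2 for x, _ in pts)
    return num / den
```

For rates of 70, 72, 74, and 76 percent over 2006 through 2009, the slope is +2.0 percentage points per year, and a single missing year does not prevent the calculation.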

College Readiness Data (page 17)

District-level measures of the college readiness of students include SAT, ACT, and AP results. The table provides measures of performance on these tests and participation rates. With district permission, the College Board and ACT provided mean SAT (reading, mathematics, and writing) test scores and mean ACT (composite) test scores, respectively, for each district for 2009 through 2012. The College Board also provided the number of AP examinations at each score level (1 to 5) for each district for 2009 through 2011. The percentage of AP tests taken in which students earned passing scores (3 or above) was calculated.

The percentages of all AP tests taken with scores of 3 or above are detailed in this report.
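The AP passing-rate calculation above is a simple ratio. This sketch assumes exam counts keyed by score level, which is a hypothetical data layout rather than the College Board's actual file format:

```python
# Percent of all AP exams scored 3 or above (the passing threshold
# used in this report). score_counts maps each AP score level (1-5)
# to the number of exams earning that score.

def ap_passing_rate(score_counts):
    total = sum(score_counts.values())
    passing = sum(n for score, n in score_counts.items() if score >= 3)
    return 100 * passing / total
```

For example, a district with 10 exams each at scores 1 and 2, 30 each at scores 3 and 4, and 20 at score 5 has a passing rate of 80 percent.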

9 Seastrom, M., Chapman, C., Stillwell, R., McGrath, D., Peltola, P., Dinkes, R., and Xu, Z. (2006). User's Guide to Computing High School Graduation Rates, Volume 1: Review of Current and Proposed Graduation Indicators (NCES 2006-604). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office, and by the same authors, Volume 2: Technical Evaluation of Proxy Graduation Indicators (NCES 2006-605).

The College Board and ACT do not calculate test participation rates.

However, they provided the number of seniors who had taken the SAT and ACT tests (regardless of when they took the test during high school), as well as the number of juniors and seniors who took any AP test in the given year. Participation rates were calculated using these numbers as the numerator and enrollment data for 11th- and 12th-graders from the CCD as the denominator.10 The tables also show the improvement, or average change, calculation. Average change was calculated as the slope of the best-fit line through the available data points for 2009 through 2012. If only one data point was available, or if data were missing for both 2011 and 2012, average change was not calculated.
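The reporting condition just stated (average change is skipped when only one data point is available, or when both 2011 and 2012 are missing) can be sketched as a simple check; the function name and dictionary layout are assumptions:

```python
# Decide whether an average change value should be reported for the
# 2009-12 window, per the rule described in the text. Missing years
# are assumed to be represented as None.

def change_is_reportable(values_by_year):
    available = {y for y, v in values_by_year.items() if v is not None}
    if len(available) < 2:
        return False  # only one (or zero) data points available
    if 2011 not in available and 2012 not in available:
        return False  # data missing for both 2011 and 2012
    return True
```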

Calculations were performed on unrounded numbers. Positive change values appear in color.

Data were suppressed if they were deemed unreliable. Test scores were suppressed if they were based on the performance of fewer than 15 students, as required by the College Board and ACT. Participation rates were suppressed if a subgroup represented less than 5 percent of enrollment in the relevant grades. Participation rates that were initially calculated as greater than 100 percent but no more than 115 percent were trimmed to 100 percent; rates greater than 115 percent were either trimmed to 100 percent or suppressed if deemed unreliable. Rates of more than 100 percent were most often observed in states that required a particular test.
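The trimming and suppression rules for rates above 100 percent and for small subgroups can be combined into one pass. The thresholds below come from the text, while the function name, argument order, and the convention that `None` means "suppressed" are illustrative assumptions:

```python
# Apply the reporting rules to a raw participation rate (in percent).
# Returns the cleaned rate, or None when the value is suppressed.

def clean_participation_rate(rate, subgroup_share_pct, reliable=True):
    if subgroup_share_pct < 5:
        return None  # subgroup under 5% of relevant-grade enrollment
    if rate > 115:
        # implausibly high: trim to 100, or suppress if deemed unreliable
        return 100 if reliable else None
    if rate > 100:
        return 100  # modest overshoot (e.g., required statewide testing)
    return rate
```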

In 2012, rates of more than 100 percent may also have occurred because they were based on estimated rather than actual enrollment data. In addition, subgroup results were suppressed if data on the number of test takers whose race/ethnicity was identified represented less than 90 percent of the total number of test takers for a given test and year.11

Test scores and participation rates on college readiness examinations: 2009–12 (page 17)

The information in the table is organized as follows:

First column: The table is divided into three sections, one for each college readiness assessment: SAT, ACT, and AP. Each section first shows achievement measures and then participation rates for all students as well as for racial/ethnic subgroups.

Mean scores and improvement on the SAT include all three subjects combined (reading, writing, and mathematics).
Second column: Relevant values are listed for the 2009 academic year.
Third column: Relevant values are listed for the 2010 academic year.
Fourth column: Relevant values are listed for the 2011 academic year.
Fifth column: Relevant values are listed for the 2012 academic year.
Sixth column: The average change calculation is shown. Average change was calculated as the slope of the best-fit line through the available data points for 2009 through 2012.

10 Participation rates in 2012 were calculated using 2011 CCD enrollments as the denominator, because 2012 enrollment data were not yet available at the time of analysis.

11 Race/ethnicity is self-reported in SAT, ACT, and AP, and the amount of missing race/ethnicity data varies by district and year.
