
By Educators for Shared Accountability.

In 2012, concerned that accountability for US student outcomes was being unfairly characterized as the sole responsibility of classroom teachers, a group called Educators for Shared Accountability emerged from the classrooms of the heartland and made a bold statement with a single press release. These educators didn’t accept the bald conceit of education reformers and Department of Education functionaries that teachers alone should be branded when American children struggle to meet learning and growth targets. These teachers badly wanted to share the accountability that had been hung around their necks with decision-makers who not only earned more money than they did but who also had their hands on levers of educational influence that the teachers would never come near. After describing the harsh accountability environment that existed for teachers in 2012 (and remains little changed today), the press release said this:

Strangely missing in all of this, of course, is any sort of mechanism for holding people like Arne Duncan publicly accountable for student-level data attributable to their performance in an important position of leadership.

Educators for Shared Accountability contended that the provision of a quality education for America’s schoolchildren was the responsibility of many, not just the burden of the men and women standing in front of classrooms. Policymakers, federal and state politicians, appointed officials, taxpayers, voters, lobbyists, and activist philanthropies—all these players have an effect on what happens in classrooms. All of them have fingerprints on the children and, as such, none of these actors should be held more or less accountable than the others. They all work in concert to craft the practices that ultimately drive student-level outcomes.

If our children are failing to meet academic expectations, Educators for Shared Accountability believed it to be tremendously dishonest—though politically useful—to pretend that teachers alone bear the fault. Instruction is not the only input affecting the education of children. Is funding equal from school to school? Teachers have no say over this. Are resources sufficient? Teachers have no say. Are schools crumbling? Are libraries stocked? Are nurses available? Are there arts or other creative opportunities available to the children? Extracurricular activities? Are adequate social supports in place for students?

Teachers have no say.

Yet until 2012, teachers alone enjoyed “accountability.”

Operating under the belief that what’s good for the goose is good for the gander, Educators for Shared Accountability developed a value-added accountability formula for the person at the top of America’s educational hierarchy: the United States Secretary of Education. If student-level data could indicate whether or not a teacher was successful in his or her endeavors, the same could be said for the secretary.

The formula was simple. Four factors would be taken into account. Two of these factors would be test scores: NAEP math performance and NAEP reading performance. The other two factors, in the spirit of multiple measures, would be tied to social outcomes related to education: student pregnancy rates and student employability.

Data Point 1: Teen Employability
One critical aim of the American education system is the holistic development of children. While test scores indicate the content area knowledge and/or the test-taking prowess of students, few dependable measures of a truly well-rounded education exist. How can one measure students’ critical thinking skills, communication skills, interpersonal social aptitude, and problem-solving abilities? Fortunately, there is an arena where those precise skills are valued and rewarded—the job market. That being the case, the first data point examined in this study is the employability of the American teen. Using data from the Bureau of Labor Statistics, ESA’s crack research team analyzed the seasonally-adjusted employment population ratio for Americans aged 16-19 years. Each secretary of education was assigned a number of points equal to this ratio for the quarter immediately before he or she took office (which was tallied as the “Beginning” value), and for the quarter immediately after leaving office (tallied as the “Ending” value). The data used for this portion of the value-added measure was gathered using the search feature found here (using these search criteria: both sexes, all races, all origins, 16-19 years, all educational levels, all marital statuses, Employment-population ratio, seasonally adjusted, quarterly).

Data Point 2: Teen Pregnancy

The second data point—also tallied as a “Beginning” and “Ending” value—is the teen birth rate (for mothers ages 15-19) as published by the CDC. Each secretary was assigned a “Beginning” teen birth rate and an “Ending” teen birth rate for his or her term in office.

Data Points 3 and 4: Math and Reading Proficiency
Progress in the mathematics and reading proficiency of students during each secretary of education’s time in office was gauged based on NAEP scores. The “Beginning” figure was the NAEP score immediately prior to each secretary’s taking office and the “Ending” score consisted of NAEP results for the test administration immediately following a secretary’s departure from office (or, in the case of Arne Duncan, who hasn’t left office as of this writing, the last available score). Specifically, this study looked at the nationwide NAEP scores of 13-year-olds.

Methodology
The “Beginning” and “Ending” data for the four data points described above were summed, and a total “Beginning” figure and a total “Ending” figure were determined for each secretary. An increase in the figure from “Beginning” to “Ending” indicated improvement; a decrease indicated a decline in student performance. Absolute improvement in the data was considered an insufficient measure for establishing whether a secretary of education added value to students during his or her term in office. Instead, ESA researchers determined an average rate of improvement in the data across all secretaries of education. That average rate of improvement—5.788889, to be exact—became the target for each secretary of education, the measure by which all were judged. A secretary whose rate of improvement exceeded that average by a minimum threshold was considered a satisfactory educational leader, while a secretary whose rate of improvement fell short of the target was judged an ineffective one.
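The arithmetic described above can be sketched in a few lines of Python. The secretary names and figures below are invented placeholders, not ESA’s actual data; the point is only to show how the four data points are summed and how the average improvement benchmark is derived.

```python
# Sketch of ESA's improvement calculation, with made-up placeholder figures.
# Each secretary has four "Beginning" and four "Ending" data points, in order:
# teen employment-population ratio, teen birth rate, NAEP math, NAEP reading.
secretaries = {
    "Secretary A": {"beginning": [45.0, 50.0, 270.0, 255.0],
                    "ending":    [47.0, 48.0, 273.0, 258.0]},
    "Secretary B": {"beginning": [44.0, 55.0, 268.0, 254.0],
                    "ending":    [44.0, 53.0, 270.0, 256.0]},
}

def improvement(record):
    """Improvement for one secretary: total 'Ending' minus total 'Beginning'."""
    return sum(record["ending"]) - sum(record["beginning"])

# The benchmark is the average improvement across all secretaries
# (5.788889 in ESA's published data; 4.0 for these placeholder numbers).
target = sum(improvement(r) for r in secretaries.values()) / len(secretaries)
```

Note that because the four measures are simply summed, each contributes with equal weight and in the same direction, exactly as the methodology states.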

The actual VAM score of a secretary of education is the difference in points (positive or negative) between his or her rate of improvement from the beginning to the end of his or her term and the average rate of improvement for all secretaries of education. In order to assist the public in interpreting these VAM scores, clear and easy-to-understand labels were applied to a simple distribution of the scores. VAM scores ranged from -15.8889 on the low end to 17.81111 on the high end. The label “Superior” was assigned to VAM scores of 10 or higher. The label “Adequate” was assigned to scores of at least 5 but below 10. The label “Needs Improvement” was assigned to scores from 0 up to 5. Any score below 0 (i.e., any VAM that fell below the average rate of improvement for all secretaries of education) was assigned the label “Ineffective.” (You can download a spreadsheet with more complete data here: vam 2015.)
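The scoring and labeling rules above amount to a subtraction and a few cut points, sketched below in Python. The exact handling of boundary scores (a VAM of exactly 5 or exactly 10) is not spelled out in the text, so the `>=` comparisons here are an assumption; the example scores are the range endpoints reported above.

```python
# VAM score = a secretary's own improvement minus the average improvement
# across all secretaries (5.788889 in ESA's data).
AVERAGE_IMPROVEMENT = 5.788889

def vam_score(own_improvement, average=AVERAGE_IMPROVEMENT):
    """Points above (positive) or below (negative) the average improvement."""
    return own_improvement - average

def vam_label(score):
    """Map a VAM score to ESA's published labels.

    Boundary handling (>= at the cut points) is an assumption; the
    source does not say which side of a cut point 5 or 10 falls on.
    """
    if score >= 10:
        return "Superior"
    elif score >= 5:
        return "Adequate"
    elif score >= 0:
        return "Needs Improvement"
    else:
        return "Ineffective"
```

Under these rules the reported extremes of the distribution land where the article implies: -15.8889 is “Ineffective” and 17.81111 is “Superior.”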

VAMs

Arne Duncan caricature image credit: DonkeyHotey. Used with Creative Commons License.

Author

Anthony Cody

Anthony Cody worked in the high poverty schools of Oakland, California, for 24 years, 18 of them as a middle school science teacher. He was one of the organizers of the Save Our Schools March in Washington, DC in 2011 and he is a founding member of The Network for Public Education. A graduate of UC Berkeley and San Jose State University, he now lives in Mendocino County, California.

Comments

  1. Jack Haddard    

    Right on. Right on. Right on. Thank you Anthony for this. Jack

  2. Daun Kauffman    

    Great ‘360’ V.A.M. concept ! Truth to Power.
