
By Anthony Cody.

A fresh debate has emerged in the aftermath of the Los Angeles Times’ critique of the Gates Foundation’s education reform project. As I noted here, they seem to have forgotten their own role in promoting the Gates Foundation’s efforts – namely, their “Teachers investigation,” in which they hired an economist and, using several years of test score data, generated and published their very own VAM ratings for thousands of Los Angeles teachers.

Here was the debate on Twitter:

I tweeted my post:

LA Times Criticizes Gates & Deasy, Forgets OWN role: http://www.livingindialogue.com/la-times-criticizes-gates-deasy-forgets-role/ 

Alexander Russo retweeted this, adding: The publication of teachers’ scores is one of the most questionable pieces of edjournalism of the last decade. (He has since posted a follow-up here.)

I replied: thank you. These ‘journalists’ won 2nd place award fr. EWA for this hatchet job on teachers

Greg Toppo, who covers education for USA Today, and is Vice President of the Education Writers Association, replied:

“Say what you will about the stories. They are not journalists in quotes.”

To which I replied:

“If you define journos so as to exclude bloggers due to the latter’s advocacy, that applies to LAT also.”

Greg Toppo then replied:

“No. Journos, LAT included, go where evidence sends them, sometimes badly but always independently.”

So let’s dig in a bit here, because obviously Twitter is not the place to reach any real clarity.

A bit of context underscores why it is useful to delve into the ethics of this work. I have been engaged in a bit of a debate with the leaders of the Education Writers Association for the past couple of years, since they decided that, after I had won several of their awards for my writing, I was no longer eligible, because I was not a legitimate journalist. Journalists are supposedly objective. They are not advocates of any particular point of view. Just the facts. Bloggers like myself, on the other hand, are “advocates,” putting forth a biased version of reality.

I do not deny being an advocate. I worked in the Oakland schools for 24 years because I was an advocate for my students and colleagues, and although I have left my work there, I remain loyal. I believe in public education, and the power of the creative spirit of teachers and students. My work, as I see it, is to support and elevate teachers and students as best I can.

But I believe many of those who are employed as journalists are advocates as well, and I think the LA Times Teacher Investigation series is a case study. So I would like to go back to that series, and provide some evidence to support this assertion.

First, let’s take a look at what reporters Jason Felch and Jason Song did.

Here is how Felch described the rationale:

Experts have long known that highly effective teachers can overcome the challenges students face both inside and outside of school. But why they are so successful — and whether their skills can be passed along to others — remains largely a mystery.

This idea was central to the Gates Foundation’s push to make “teacher effectiveness” the big lever of change in improving student outcomes. And as I noted, Felch himself cited Thomas Kane and the Gates Foundation’s Measures of Effective Teaching project as expert sources to validate this assertion.

Next, Felch explains something extraordinary. The project did not merely assert this nonsense about the magical powers of great teachers: the LA Times hired an economist to build its very own VAM system, then analyzed and published scores for thousands of individual teachers, ranking them according to their supposed “effectiveness” at raising test scores. Felch went on to write:

Highly effective teachers routinely propel students from below grade level to advanced in a single year. There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.

As part of an effort to shed light on the work of L.A. teachers, The Times on Sunday is releasing a database of roughly 6,000 third- through fifth-grade teachers, ranked by their effectiveness in raising students’ scores on standardized tests of math and English over a seven-year period.

The findings are based on an approach called value-added analysis, which is designed to allow fair comparisons of teachers whose students have widely varying backgrounds. Although controversial, the method increasingly has been adopted across the nation to measure the progress students make under different instructors.

L.A. Unified has had the underlying data for years but has chosen not to analyze it in this way, partly in anticipation of union opposition. After The Times’ initial report this month showed wide disparities among elementary school teachers, even in the same schools, the district moved to use value-added analysis to guide teacher training and began discussions with the teachers union about incorporating data on student progress into teacher evaluations.

The results of The Times’ analysis are not a complete measure of a teacher by any means, but they offer one way to see whether an instructor is helping — or hindering — children in grasping what the state says they should know.
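For readers unfamiliar with what a value-added rating actually computes, here is a minimal sketch of the basic idea in Python. The data, variable names, and single-predictor regression are my own invented assumptions for illustration only; the model actually used for the Times’ ratings was more elaborate, drawing on seven years of data.

    # A minimal sketch of a value-added calculation, using invented data.
    # This is NOT the Times' actual model: real value-added analyses typically use
    # multiple prior years of scores, demographic controls, and statistical shrinkage.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_students = 1000
    data = pd.DataFrame({
        "teacher": rng.integers(0, 40, size=n_students),      # 40 hypothetical teachers
        "prior_score": rng.normal(50, 10, size=n_students),   # last year's test score
    })
    # Simulated current-year scores that depend only on prior scores plus noise;
    # note there is no true "teacher effect" in this toy data at all.
    data["current_score"] = 0.8 * data["prior_score"] + rng.normal(10, 8, size=n_students)

    # Step 1: predict each student's current score from the prior score.
    X = sm.add_constant(data[["prior_score"]])
    model = sm.OLS(data["current_score"], X).fit()
    data["residual"] = data["current_score"] - model.predict(X)

    # Step 2: a teacher's "value-added" is the average residual of his or her students.
    value_added = data.groupby("teacher")["residual"].mean().sort_values()
    print(value_added)

Notice that even though these simulated scores contain no teacher effect whatsoever, the procedure still produces a neatly ranked list of “better” and “worse” teachers. Everything rides on whether the residuals reflect teaching rather than noise and unmeasured differences among students.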

So was this objective journalism? Did these reporters go where the evidence sent them, independent of influence?

I would argue that this was a prime case of advocacy cloaked as journalism. These reporters legitimized as an objective source the Gates Foundation’s MET project, which was part of a huge campaign on the part of Gates to reshape education policy according to his beliefs. Researchers Sarah Reckhow and Megan Tompkins-Stange have uncovered the way this worked, as described here:

It’s within [a] sort of fairly narrow orbit that you manufacture the [research] reports. You hire somebody to write a report. There’s going to be a commission, there’s going to be a lot of research, there’s going to be a lot of vetting and so forth and so on, but you pretty much know what the report is going to say before you go through the exercise.

Reckhow and Tompkins-Stange showed that these “research” products were used to influence policymakers – and clearly reached journalists Felch and Song as well. Reckhow and Tompkins-Stange observed:

Within a small orbit of think tanks, some university-based researchers, advocacy groups, and philanthropic funders, an argument favoring new evaluation systems and pay for performance to transform teacher quality was widely shared and transmitted in national policy arenas.

The underlying research supporting VAM was weak, and the method has been largely discredited as a useful means of improving schools. Many states are now backing away from the evaluation systems they mandated in order to qualify for Race to the Top grants and NCLB waivers. Felch and Song acknowledge that the approach is “controversial,” but embrace it nonetheless, even going so far as to impugn the reputations of individual teachers based on their ranking system.

Felch and Song not only advocated the use of test scores to evaluate teachers. They actually went several steps further, in conducting the evaluations themselves, and publishing the results. Even today, you can click here and see the rating of Rigoberto Ruelas, which family members said contributed to his depression and subsequent suicide.

You do not have to take my word for it. Two researchers, Derek Briggs and Benjamin Domingue, conducted an in-depth review of the methods used by the LA Times and found them seriously flawed.

They wrote:

Our findings do not support the assertion that a teacher’s scores are unaffected by low-achieving students. And, as we have noted, it is not possible to verify the findings—based on Buddin’s analysis of prior data from 2000 to 2004—that there is no “meaningful” relationship between value-added estimates and classroom demographic variables such as gifted and talented status, special needs, ELL status and poverty levels. So while Buddin’s analysis has clearly proven itself to be useful from the perspective of the L.A. Times, this utility is misleading in that it casts the Times’ teacher ratings in a far more authoritative and “scientific” light than is merited.

In a further bizarre journalistic turn, Jason Felch wrote an article on this review, headlined “Separate study confirms many Los Angeles Times findings on teacher effectiveness.” Felch wrote: “A study to be released Monday confirms the broad conclusions of a Times’ analysis of teacher effectiveness in the Los Angeles Unified School District while raising concerns about the precision of the ratings.”

This was such a gross misrepresentation that Derek Briggs, one of the researchers, replied:

I don’t see how one can claim as a lead that our study “confirmed the broad conclusions”– the only thing we confirmed is that when you use a value-added model to estimate teacher effects, there is significant variability in these effects. That’s the one point of agreement. But where we raised major concerns was with both the validity (“accuracy”) and reliability (“precision”), and our bigger focus was on the former rather than the latter. The research underlying the Times’ reporting was not sufficiently accurate to allow for the ratings.
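The distinction Briggs draws between accuracy and precision is easy to illustrate with a small simulation. The sketch below uses numbers I invented for the purpose (class size, the spread of teacher effects, the spread of student-level noise); it is a hypothetical illustration of the precision problem, not a re-analysis of the Times data.

    # Hypothetical illustration of the precision problem: each teacher's true
    # effectiveness is held perfectly constant, yet single-year ratings move
    # around because each rests on only ~25 students. All numbers are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n_teachers, class_size = 6000, 25            # roughly the scale of the Times database
    true_effect = rng.normal(0, 2, n_teachers)   # each teacher's stable "true" effect

    def one_year_rating():
        # Observed class-average gain = true effect + averaged student-level noise.
        student_noise = rng.normal(0, 12, (n_teachers, class_size)).mean(axis=1)
        return true_effect + student_noise

    year1, year2 = one_year_rating(), one_year_rating()
    print("correlation between the two years' ratings:",
          round(float(np.corrcoef(year1, year2)[0, 1]), 2))

    # How many teachers rated in the bottom 10% in year one are still there in year two?
    bottom1 = year1 <= np.quantile(year1, 0.10)
    bottom2 = year2 <= np.quantile(year2, 0.10)
    print("share of year-one 'bottom 10%' still there in year two:",
          round(float((bottom1 & bottom2).sum() / bottom1.sum()), 2))

Even with every teacher’s true effectiveness held fixed, ratings built on a single class of students shuffle teachers in and out of the extremes from year to year purely through sampling noise. That instability is the “precision” concern; the validity concern Briggs describes comes on top of it.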

Further responses to the LA Times position are contained in two Fact Sheets, which follow the report at this link.

It seems a strange sort of journalism that assigns a reporter to write a supposedly objective article about a critical review of his own work. Felch was clearly not up to the task – but the LA Times let him write it anyway.

A truly objective investigation into the issue of teacher quality in Los Angeles schools might have started by taking on the challenge of how that quality or effectiveness ought to be defined. Reporters might have looked into the controversy over ranking teachers based on their students’ test scores. They would not have taken as gospel the assertions of Gates-funded think tankers. Nor would they have taken it upon themselves to cook up their own rating system. And when that rating system was called into question by experts, they would not have misrepresented the critique as some sort of validation.

Going back to Greg Toppo’s assertion that prompted this post – no, reporters do not always go where evidence sends them, and are not always independent of the subjects they cover. And I think this sorry episode provides ample evidence to make the case that this is so.

Update: June 7, 2016: The intrepid Leonie Haimson provided me with a tip this morning. Who funded the creation of the VAM database for the Los Angeles Times?  This article indicates:

A grant from The Hechinger Report helped fund the analysis — completed by senior economist Richard Buddin of RAND — on which the Times based its series. (The Report did not participate in the analysis.)

And where did The Hechinger Report (sponsored by none other than Teachers College) get the money? Here is one possible source I found after a bit of hunting: in September of 2009, Teachers College received a grant of $652,493 from the Bill and Melinda Gates Foundation “to support the development of high quality education coverage in the nation’s leading newspapers and magazines.”

Further update, June 7: This document provides a case study of the work done by Felch and Song, and indicates: “The Hechinger Institute in August 2010 awarded the Times a $15,000 grant that the paper used to help defray the cost of the consultant.”

What do you think? Was the LA Times engaged in objective reporting when they developed and published teacher VAM scores?

Author

Anthony Cody

Anthony Cody worked in the high poverty schools of Oakland, California, for 24 years, 18 of them as a middle school science teacher. He was one of the organizers of the Save Our Schools March in Washington, DC in 2011 and he is a founding member of The Network for Public Education. A graduate of UC Berkeley and San Jose State University, he now lives in Mendocino County, California.

Comments

  1. Máté Wierdl    

    Thanks, good note. Isn’t there some kind of law that binds journalists’ reporting, and if they break the “journalists’ code of ethics”, they can be banned from practicing as a journalist?

  2. leonie haimson    

    Question: have you looked at who funded the LAT series and the formula used? I seem to recall it was Hechinger, but that likely was a pass-through from either Gates or Broad.

  3. westello1    

    I’m a public education blogger as well. I normally tell people I am a citizen-reporter who reports AND does editorial writing (so as to make sure to acknowledge my advocacy work.) I am not a trained journalist and I never pass myself off as one.

    But many journalists are now working at newspapers, like the LA Times AND the Seattle Times (where I live), where the Gates Foundation is funding education reporting. No matter how these newspapers phrase this, there is virtually no one who wouldn’t agree that this funding impacts the reporting. That both the LA Times and the Seattle Times try to deny it only makes it worse.

    Good job, Anthony.

  4. Randy    

    Rand Corporation: Evaluating Value-Added Models for Teacher Accountability (2003, p. 119): “The research base is currently insufficient to support the use of VAM for high-stakes decisions.”

    https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
