Teachers’ value-added scores still controversial

In August 2010, the Los Angeles Times ignited a firestorm of criticism when it published the value-added scores of thousands of teachers based on California standardized tests in reading and math (“Whatever Happened with the Los Angeles Times’ Decision to Publish Teachers’ Value-Added Scores?” National Education Policy Center, Sep. 6).  Although the brouhaha has died down, there are several lessons to be learned.

The No. 1 lesson is that much of a teacher’s measured effectiveness is a direct result of the students that teacher is assigned.  Give a weak teacher a class full of Talmudic scholars and that teacher will shine.  The converse is also true.  Unless students are randomly assigned to teachers, which they rarely are, great care needs to be taken before drawing conclusions.

The other lesson is that it is unfair to compare teachers even in the same school unless they are assigned students with similar characteristics.  Yet critics continue to say that everyone knows who the best teachers are.  But on what basis?

Near the end of my 28-year teaching career at the same high school in the Los Angeles Unified School District, I was often given five classes of inner-city students who brought with them huge deficits.  If a poll had been taken, I’m sure I would have ranked low because of the difficulty of bringing these students up to grade level.

Value-added scores are also a nightmare for principals because, once the data are released, parents will demand that their children be assigned only to the teachers with the highest scores.  Trying to explain the scores’ limited validity will not appease them.


4 Replies to “Teachers’ value-added scores still controversial”

  1. The DC school system still uses value-added scores, and I expect that many other school systems do as well. It is surprising that there has been so little in the media the last year or two re value-added scores or, more generally, re using student test scores to evaluate teachers. I suppose school systems are still doing this, but the media just is not paying attention any more.

    Value-added is probably a little better than raw test scores, but it’s still far too unreliable to use for pay or job-retention decisions. It’s patently obvious that too many factors beyond the teacher’s control impact student test scores, even under the most sophisticated value-added methodology.

    A minor point: random assignment of students would yield more reliable value-added results than assignment on some other basis. However, even under random assignment, there would still be many cases where a teacher got a disproportionate number of easy students or a disproportionate number of hard students. It is much as if you “randomly” dealt out a deck of cards to four players: it would be very unusual for each player’s 13 cards to be split 4-3-3-3 among the four suits. Sometimes a player would get 9 hearts, 3 diamonds, 1 spade and 0 clubs. To maximize fairness in a value-added system, or any other system that relied on student test scores, the student assignments would have to be made with a conscious effort to give each teacher the same mix of easy and hard students.


  2. Labor Lawyer: In order to draw valid inferences about any teacher’s effectiveness, every teacher would have to be assigned the same mix of students. But that is impossible. What usually happens is that certain teachers are given the best students out of favoritism, which merely exacerbates matters. Moreover, sometimes students don’t appreciate a teacher until many years later.


    1. Agree that favoritism often influences which students are assigned to which teachers. But my impression is that principals often rely on legitimate considerations in assigning different types of students to different types of teachers: a teacher with a track record of dealing effectively with out-of-control boys will get more than his/her share of such students, while a teacher with a track record of dealing effectively with ESL students will get more than his/her share of those. It seems obvious that, to the extent principals rely on these legitimate considerations (and we want principals to do so), value-added or other student-test-score metrics for evaluating teachers are unfair and unreliable.
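The card-dealing point in the first reply can be checked with a quick simulation. This is an illustrative sketch added here, not part of the original thread; the trial count, seed, and the choice of 4-3-3-3 as the “balanced” split are my own assumptions:

```python
import random
from collections import Counter

def deal_suit_counts(rng):
    """Shuffle a 52-card deck (tracked by suit only) and deal four 13-card hands."""
    deck = [suit for suit in range(4) for _ in range(13)]
    rng.shuffle(deck)
    return [Counter(deck[i * 13:(i + 1) * 13]) for i in range(4)]

def is_balanced(hand):
    """True when a hand has the even 4-3-3-3 suit split described in the reply."""
    return sorted(hand.get(s, 0) for s in range(4)) == [3, 3, 3, 4]

rng = random.Random(0)  # fixed seed so the run is repeatable
trials = 20000
first_balanced = 0   # deals where one given player's hand is balanced
all_balanced = 0     # deals where every player's hand is balanced
for _ in range(trials):
    hands = deal_suit_counts(rng)
    first_balanced += is_balanced(hands[0])
    all_balanced += all(is_balanced(h) for h in hands)

print(f"P(one given player gets 4-3-3-3) ~ {first_balanced / trials:.3f}")
print(f"P(all four players get 4-3-3-3)  ~ {all_balanced / trials:.4f}")
```

The run shows the reply’s point: even with a perfectly random deal, any one player gets a balanced hand only about a tenth of the time, and a fully balanced table is far rarer still, so chance alone would leave some teachers with lopsided classes.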

