
Gadfly on the Wall Blog: When Good Students Get Bad Standardized Test Scores

Ameer is a good student.  

He takes notes in class, does all his homework and participates in discussions.  

He writes insightful essays and demonstrates a mastery of spelling and grammar.  

He reads aloud with fluency and inflection. He asks deep questions about the literature and aces nearly all of his classroom reading comprehension tests. 

However, when it is standardized test time, things are very different.  

He still arrives early, takes his time with the questions and reviews his work when he’s done – but the results are not the same.  

His grades are A’s. His test scores are Below Basic. 

How is that?  

How can a student demonstrate mastery of a subject in class but fail to do the same on a standardized test?  

And which assessment should I, his teacher, take seriously?  

After all, they can’t BOTH be correct. 

This is a problem with which most classroom teachers are forced to contend.  

Bureaucrats at the administrative or state level demand that teachers assess students with standardized tests, but the results often contradict a year or more of classroom observation.

Take the Measures of Academic Progress (MAP) test.  

This year in my Western Pennsylvania district, the administration decided to use this computer-based standardized assessment as a pre-test or practice assessment before the state-mandated Pennsylvania System of School Assessment (PSSA).

I’ve already written about what a waste of time and money this is. A test before the test!?

But after reluctantly subjecting my classes to the MAP and being instructed to analyze the results with my colleagues, we noticed this contradiction. 

In many cases, scores did not match up with teacher expectations for our students.  

In about 60-80% of cases, students who had demonstrated high skills in the subject were given scores below the 50th percentile – many below the 25th percentile.  

These were kids with average to high grades whom the MAP scored as if they were in the bottom half of their peers across the state.

Heck! A third of my students are in the advanced class this year – but the MAP test would tell me most of them need remediation! 

If we look at that data dispassionately, there are possible explanations. For one, students may not have taken the test seriously. 

And to some degree this is certainly the case. The MAP times student responses, and when answers come in fast and furious, it stops the test taker until the teacher can unlock the test after warning them against rapid guessing.

However, the sheer number of mislabeled students is far too great to be accounted for in this way. Maybe five of my students got the slow down sloth graphic. Yet so many more were mislabeled as failures despite strong classroom academics. 

The other possibility – and one that media doom-mongers love to repeat – is that districts like mine routinely inflate mediocre achievement so that bad students look like good ones.  

In other words, they resolve the contradiction by throwing away the work of classroom teachers and prioritizing what standardized tests say.

Nice for them. However, I am not some rube reading this in the paper. I am not examining some spreadsheet for which I have no other data. I am IN the classroom every day observing these very same kids. I’ve been right there for almost an entire grading period of lessons and assessments – formative and summative. I have many strong indications of what these kids can do, what they know and what they don’t know.  

Valuing the MAP scores over weeks of empirical classroom data is absurd.  

I am a National Board Certified Teacher with more than two decades of experience. But the Northwest Evaluation Association (NWEA), a testing company out of Portland, Oregon, wants me to believe that after 90 minutes it knows my students better than I do after six weeks!

Time to admit the MAP is a faulty product. 

But it’s not just that one standardized test. We find the same disparity with the PSSA and other assessments like it.

Nationally, students’ classroom grades tend to be higher than their scores on these tests.

In the media, pundits tell us this means our public school system is faulty. Yet that conclusion is merely an advertisement for these testing companies and a host of school privatization enterprises offering profit-making alternatives predicated on that exact premise.  

So how to resolve the contradiction? 

The only logical conclusion one can draw is that standardized assessments are bad at determining student learning.  

In fact, that is not their primary function. First and foremost, they are designed to compare students with each other. How they make that comparison – based on what data – is secondary.  

The MAP, PSSA and other standardized tests are primarily concerned with sorting and ranking students – determining which are best, second best and so on. 

By contrast, teacher-created tests are just the opposite. They are designed almost exclusively to assess whether learning has taken place and to what degree. Comparability isn’t really something we do. That’s the province of administrators and other support staff.  

The primary job of teaching is just that – transferring knowledge and offering students the opportunities and the conducive environment they need to learn.

That is why standardized tests fail so miserably most of the time. They are not designed for the same function. They are about competition, not acquisition of knowledge or skill. 

That’s why so many teachers have been calling for the elimination of standardized testing for decades. It isn’t just inaccurate and a waste of time and money. It gets in the way of real learning.  

You can’t give a person a blood transfusion if you can’t accurately measure how much blood you’re giving her. And comparing how much blood was given to a national average of transfusions is not helpful. 

You need to know how much THIS PERSON needs. You need to know what would help her particular needs.  

When good students get bad test scores, it invariably means you have a bad test.  

An entire year of daily data points is not invalidated by one mark to the contrary.  

Until society accepts this obvious truth, we will never be able to provide our students with the education they deserve.  

Good students will continue to be mislabeled for the sake of a standardized testing industry that is too big to fail.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Steven Singer

Steven is a husband, father, teacher, and education advocate.