
By Francesca Blueher.

Six years ago, I administered a 12-hour, 100-page test that could not be read by a single student who was mandated to take it. This test, the New Mexico Standards Based Assessment (NMSBA), was composed of multiple-choice questions; reading passages to be answered with short responses; multi-step math problems that required drawing models, writing out thinking, and making number sentences; and a writing portion that demanded the students’ plans, a rough draft, and a final draft of their piece. The six students I tested were in 3rd grade, 8 to 9 years old, in Special Education programs with IEPs that clearly stated they were at least 2 years below the grade level of the test administered, and all on the free and reduced lunch program. One child was an English language learner, two had an incarcerated parent, and one was homeless. By the end of the testing window, students’ shoulders were slumped, eyes were glazed, tears were shed, and anger was vented. Six months later, when they were in 4th grade, the NMSBA informed them that they were at “Beginning Steps” in Reading and Math: nothing about their brilliance in art, knowledge of multiple languages, curiosity, physical agility, joke-telling abilities, empathy, or resourcefulness.

The day after I finished giving the NMSBA to these children, I got a case of shingles on my face that was both painful and disfiguring. I knew immediately why I got sick: it was a physical manifestation of giving the NMSBA to children. If I was scarred by shingles, what scar did it leave on the children I tested? Since that time, I have been passionate about being a voice for the children I tested and have pledged to learn all I can about why we give these tests to children, who benefits from the administration of these tests, and what the effects of a standards-based education and accountability system are on our schools.

This is why I am urging New Mexicans to pay attention to the PARCC assessment that is replacing the NMSBA this year. The PARCC, which stands for Partnership for Assessment of Readiness for College and Careers, is sold as being aligned with the unpiloted, untested Common Core State Standards (CCSS) and will be given for the first time this spring, in March and April, in ALL NM public schools on computers. The resources being spent on the PARCC, which is also untested and has no research to back its claims, are unprecedented in educational history. States around the country are spending millions of dollars expanding their districts’ bandwidth, investing in computers, adding manpower for technology and data collection, and implementing “professional development” focused not on teaching and learning, but on how to teach the computer skills necessary to take the test.

New Mexico is no exception in this effort. Every public school, whether in a rural area, on a reservation, or in an urban setting, is investing time and money to prepare for the PARCC this spring (see sample tests here). Teachers from elementary to high school are spending collaboration time, which had been devoted to developing rich and engaging curriculum, on lessons about how to scroll between split screens, how to click and drag, and how to type essays and equations on the computer. Our union is also offering courses to teachers on how to prepare kids for the PARCC assessment; again, not on rich, engaging programs.

This spring, as New Mexico takes the PARCC for the first time, every computer will be devoted to administering the test, since NO public school has the technological resources to test all of its students at the same time. In elementary schools, this translates to groups of students being cycled for weeks through the school’s computer labs and libraries. In high schools, CAD classrooms, libraries, computer labs, and any other technology-centered classroom will also be out of commission during the weeks-long testing windows. During the 4 weeks of testing, these classrooms will be unavailable for instruction for the entire student population. The average time for an elementary student to take all sections of the PARCC is 12 hours. The average time to test an 11th grader is 14 hours.

Soon, our teachers will be evaluated on the results of students’ PARCC assessments, our schools will be branded with a letter grade based solely on the PARCC assessment, and New Mexico will be compared to other states in the country on the PARCC assessment.

My questions are these:

What are the benefits to NM children of the implementation of an unpiloted, untested CCSS and the accompanying PARCC?

Are schools in New Mexico improving after more than 10 years of a standards-based accountability system?

Are the millions being spent on testing, evaluation, and grading of our children, teachers, and schools making for a more engaging, enriching education? If not, who is benefiting from the millions spent on high-stakes testing and accountability systems?

Francesca Blueher has been an elementary teacher, Instructional Coach, and Math Interventionist in public schools for the past 17 years. Because of the enormous increase in high-stakes, standards-based tests given to children over the past decade of the accountability movement, she has become an activist speaking out about their destructive effects on our schools, communities, and students. Francesca is currently on a personal leave from teaching because of the intolerable policies now mandated. Teaching the beauty and art of math to children and adults is her passion.

Image by UTC Library, used under a Creative Commons license.

Author

Anthony Cody

Anthony Cody worked in the high poverty schools of Oakland, California, for 24 years, 18 of them as a middle school science teacher. He was one of the organizers of the Save Our Schools March in Washington, DC in 2011 and he is a founding member of The Network for Public Education. A graduate of UC Berkeley and San Jose State University, he now lives in Mendocino County, California.

Comments

  1. howardat58    

    The logical plan is to let all the students perform really, really badly in the first year of tests. This way almost anything in subsequent years will show up as an improvement. That is apart from the overkill approach being used. I wonder what will happen when all school subjects are tested in this way: 5 times as much time spent being tested, so 12 hours x 5 x twice a year = 120 hours.
    Considering the number of large computer systems which have failed, I fully expect this to result in chaos.
    (Sounds better than the NM testing though!!)

  2. Michelle    

    Howardat58 – How is 9 hours split between paper-and-pencil written responses (the PARCC PBA) and a computer-based assessment requiring highlighting, drag-and-drop, split-screen navigation, and fluent keyboarding skills (e.g., by 5th grade, able to type around 40 wpm: 2 pages in 30 minutes, edited and publish-ready), etc., “better than the NM testing”? Yes, 3rd graders are expected to have these skills… and if they don’t, then the test shows them as failing, regardless of their ability to read, to write, to compute, to think. The FIRST thing kids now must learn, in order to show what they know, is how to take the test and how to keyboard. This REPLACES other learning; there is only so much time in a day, and only so much a child can do before fatiguing.

    We were already having to put aside diverse and exciting lessons on a variety of topics and modalities of expression in order to teach HOW to take the NM SBA. Now, even MORE time is being taken up with how to take the PARCC, both in written response and in the computerized format, which is NOT comparable to ANY OTHER FORMAT that they use for any other tech program. Often, hands-on activities for science and social studies are what end up being lost.

    Picture yourself at 8 or 9 years old, and click the link in the article to take a sample PARCC test. Inform yourself!

    Then add short-cycle assessments (Discovery or MAPs, which, by the way, will soon be replaced by further PARCC “probes”: one company getting all your testing tax dollars…), ACCESS for those who are second-language learners, etc.

    Take screening tools meant to inform teachers about student need, not instructional outcome, and never meant to be used as an assessment of teaching, such as DIBELs/IDEL or STAR (AR was originally just a “confirm the kids are actually reading during independent reading” screening and had nothing to do with actual assessment of skills), and suddenly tie teacher evals to them.

    Finally, teachers DO need feedback from their students to inform their instructional decisions, but none of these tests, except the DIBELs screening (and that is for phonics and fluency only), provides the sort of information that is really needed to decide next lessons, in spite of the hype from their publishers. So, in between it all, when teachers do actually get around to teaching real content to their students, they also have to have their kids take quizzes and tests that can provide a real assessment of their students’ learning on the targets the teacher is seeking to address.

    Discovery (and PARCC) does give a breakdown, by standard, of what sorts of questions most kids were missing. However, it doesn’t help the teachers see WHY the kids were thinking a certain way, the way teacher-selected or self-made tests can. When I test my students using my own assessment choices, I keep a close eye on how they respond to the questions I’m asking them… while they are thinking about what they’re going to say. I see who is responding quickly because they know it, and who is responding quickly because they don’t. I see who is interested in the topic, who has grasped which components, and whether their reasoning is accurate or not. I watch eye movement as they scan text for possible difficulty with scotopic sensitivity, visual motor issues, etc. As they work through a math problem, I watch the sequence in which they mark algorithm steps, how long it takes them, and what strategy they’re using (fingers, tallies, touch math dots, or fact memorization) to get through the basic computation, and I look for where they might fall apart in their reasoning.

    I challenge the test designers of PARCC to tell WHY a kid is answering the way they do… and to do so accurately, not just off a statistical guess. Saying that teachers, because they know their students, should use the PARCC data combined with that knowledge to determine what they’re going to do is a logical fallacy. The whole POINT is that the data obtained from standardized testing tells teachers nothing new about their students… it isn’t needed to determine that next step, because the teacher can use shorter, more informative assessments of their own design and choice to do exactly the same thing. So, we’re spending millions on it why, exactly?
