By Anthony Cody.

We have been badgered for the past 14 years by reformers insisting on the fierce urgency of change, and they have had their way – twice! First, seven years of NCLB, followed by the past seven years of Race to the Top, and now the “next generation” of tests, which were promised to be “smarter,” computer-adaptive, and faster at delivering results. None of it worked. Scores on the independent NAEP tests are flat or down. The SBAC and PARCC tests are more difficult without being any “smarter” in telling us what our students can do. The idea that these tests could somehow promote and measure creativity and critical thinking has been debunked. The growing opt-out movement poses a huge threat to the standardized testing “measure to manage” paradigm.

So what is to be done?

Reinvent the tests once again, using technology. And who better for the job than Tom Vander Ark, formerly of the Gates Foundation, and now associated with a long list of education technology companies. The latest package of solutions is being called “competency based learning,” and it was featured prominently in the Department of Education’s latest “Testing Action Plan.”

Here is how Vander Ark frames the challenge:

Jobs to be done. To get at the heart of value creation, Clayton Christensen taught us to think about the job to be done. Assessment plays four important roles in school systems:

  1. Inform learning: continuous data feed that informs students, teachers, and parents about the learning process.
  2. Manage matriculation: certify that students have learned enough to move on and ultimately graduate.
  3. Evaluate educators: data to inform the practice and development of educators.
  4. Check quality: dashboard of information about school quality, particularly what students know and can do and how fast they are progressing.

Initiated in the dark ages of data poverty, state tests were asked to do all these jobs. As political stakes grew, psychometricians and lawyers pushed for validity and reliability and the tests got longer in an attempt to fulfill all four roles.

With so much protest, it may go without saying, but the problem with week-long summative tests is that they take too much time to administer; they don’t provide rapid and useful feedback for learning and progress management (jobs 1 & 2); and test preparation, rather than preparation for college, careers, and citizenship, has become the mission of school. And, with no student benefit, many young people don’t try very hard and increasingly opt out.

Note that the source used to define the phrase “jobs to be done” is Clayton Christensen, who popularized the business concept of “disruptive innovation,” which is the main framework used by “innovators” like Vander Ark.

So what is “competency-based learning”? Here is Vander Ark’s description:

For states ready to embrace personalized and competency-based learning, CompetencyWorks, an online community and resource supported by iNACOL, outlines five components of competency-based education (CBE):

  1. Students advance upon mastery.
  2. Competencies include explicit, measurable, transferable objectives that empower students.
  3. Assessment is a meaningful and positive learning experience for students.
  4. Students receive timely, differentiated support based on their individual needs.
  5. Outcomes include application of knowledge and development of important skills and dispositions.

The definition sets a high bar by requiring well-stated learning targets, powerful learning experiences, better reporting systems, and new rules for matriculation management. It focuses primarily on the first two jobs: student learning and progress management.

And central to the model (though not stated above) is that this process is managed through technology. Students are given tasks and assignments to complete on computers, which perform the “formative assessments.”

This is explained more clearly in the links at the bottom of Vander Ark’s post. One of them, “Path to Personalization: Better Models & Better Tests,” describes the new tests that are envisioned. It references work by Gene Wilhoit and Linda Darling-Hammond, which I addressed in this post a year ago. He writes:

The biggest opportunity is for assessment frameworks that support competency-based learning sequences. Districts and networks of schools could develop assessment systems that, according to iNACOL, “Measure individual student growth along personalized learning progressions,” and, “Use multiple measures of learning, including performance-based assessments.” A state could create an innovation zone to pilot the use of on-demand (or frequently scheduled) end-of-course demonstrations of learning to manage student progress, thus reducing the need for end-of-year exams.

In this post, “Teachers Deserve Better Tools for Tracking Subskills,” Vander Ark makes it clear that teachers will be required to manage more data than ever.

To boost student engagement and simplify stakeholder reporting, the solutions should be, as Michael Fullan suggests, “irresistibly engaging” for students and “elegantly efficient” for teachers. Students should be able to log into a mobile application and quickly understand what they need to learn and options for demonstrating mastery. Teachers should be able to efficiently monitor progress, benefit from informed recommendations and dynamic scheduling, and pinpoint assistance for struggling students.

Of course, no system of assessment would be complete without formative assessments:

Digital learning and the explosion of formative data mean the beginning of the end of week-long state tests. By using thousands of formative observations, it will be increasingly easy to accurately track individual student learning progressions. But making better use of the explosion of formative data will require leadership and investment.

This new vision for accountability does include room for juried portfolios of student work. Here is what Vander Ark suggests:

The test-based options tend to be more reliable, while the student work product approaches are more valid and authentic. A jurying process for portfolios can boost reliability but adds cost and complexity. A state could combine both approaches by requiring a series of several ACT tests (Plan, Explore, Compass) and incorporating them into a body-of-evidence or compilation-of-assessments approach. A state could also combine short end-of-course exams with a body-of-evidence approach to gain affordable validity and reliability.

So where does this lead us? We have the test makers defining concepts for students to learn, which are clearly delineated so the learner and the teacher know precisely what they are accountable for. We have frequent “formative assessments” built into assignments that students complete on computers, to be checked by those computers, with tagged data provided to teachers (and presumably to those tasked with supervising teachers).

There are two unwritten assumptions that are constant from the beginning of NCLB and carry through to this new version. The first is that teachers are not trusted to make judgments about what students learn, how they learn it, or how learning is assessed; assessment is defined as the external monitoring of the work inside the classroom. The second is that data and technology must be instrumental in whatever process is devised. The main innovation here is the more thorough and intrusive penetration of the classroom via computers capable of monitoring learning.

Both of these assumptions are unsupported by any evidence or track record, in terms of their ability to enhance learning.

The flat or declining NAEP scores demonstrate that external accountability systems have failed to lift performance. Repeated experiments with technology-based instruction have failed to show any advantage. Virtual charter schools, the ultimate extension of this model, have been shown to be virtually useless.

There IS a track record for juried portfolios, such as those in use at New York’s Performance Standards Consortium schools. I visited one of these schools last winter and heard how the teachers there work together to define course objectives, then help their students prepare portfolios demonstrating their achievements. This is authentic work, driven by the teachers, not by some external body. It is the one bright spot in Vander Ark’s vision. But note that it requires neither external oversight nor technology. For that reason, it is rather overshadowed by all the other elements of competency-based learning, and I am not sure how it would survive in the computer-managed environment Vander Ark describes.

The essential feature of our current accountability paradigm is its lack of trust in teachers. This suits those who wish to “disrupt” education quite well: they can come up with one “innovation” after another, and as each one disappoints, they can innovate again. Every time, there is a new status quo to be disrupted and replaced, and a new product to be sold.

As Dr. Myron Atkin reminded us this week, the feature that makes formative assessment work also makes it NOT work when it is packaged and sold.

Formative assessment, so defined, is a pivotal element of everyday classroom teaching. It occurs throughout the school day. It requires collaborative involvement of both teacher and student. And it isn’t something purchased from a vendor that can be used in an identical fashion anywhere, like an instruction book or a cooking recipe.

He goes on to explain:

The key benefits of formative assessment emphasized in the research literature are associated with changes in the classroom that result when teachers and students collaborate closely in examining the quality of student work. What does quality look like? What might the student do to improve school work to bring it to a higher quality than it is right now? This integration of teaching, learning, and assessment is complex work, but potent. It takes time and effort: hours, days, weeks, and months – not the periodic 15 or 20 minutes needed to respond to questions purchased from a remote “item bank” developed by the testing companies to foreshadow the final examination. Reporting mini-test scores to the students and even discussing common incorrect answers has little relationship to the type of feedback studied by Black and Wiliam that produced such large gains in achievement.

This sort of formative assessment also takes expertise on the part of teachers. The externalization of this process disempowers and de-skills teachers, leaving them the intellectually barren work of monitoring student performance based on computer-assessed tasks.

The presence of portfolios in this largely technology-driven vision is not enough to make it worthwhile. As Nancy Bailey points out, this vision misses so many “competencies” that cannot be measured by tests or through a computer. Once again, this is old wine in a shiny new bottle, and once again, it has become vinegar.

What do you think? Is “competency-based learning” worthwhile?


Anthony Cody

Anthony Cody worked in the high poverty schools of Oakland, California, for 24 years, 18 of them as a middle school science teacher. He was one of the organizers of the Save Our Schools March in Washington, DC in 2011 and he is a founding member of The Network for Public Education. A graduate of UC Berkeley and San Jose State University, he now lives in Mendocino County, California.


  1. 2old2tch    

    What is the difference between standards based and competency based education? One is produced through scripted interaction between the teacher and student and the other is produced through scripted interaction with a computer. Neither approach requires a teacher; neither approach exemplifies good teaching. Both attempt to reduce learning to breaking tasks down into measurable components. Both totally negate the messy, immeasurable but necessary social aspects of learning.

  2. howardat58    

I take math as one part of this. CBE completely misses the point of education. Since the vast majority of students will have no later use for high school math as a bag of competencies, the Mastery approach (no proceeding to B until you have mastered A) is going to get in the way of understanding and APPRECIATING math and the insights it can give into the way of the world, which to me is the main purpose.

I guess you have read the NGA paper on CBE. What a lot of wishful thinking. I would be surprised if any of the Governors actually read it all. Anyway, here’s the link:

  3. Monty Neill    

Thanks, Anthony. This is likely to emerge as a major battle as issues of the goals and purposes of education – who decides and who controls – continue, while the testocracy attempts to organize a technological solution to its serious political problem, the test resistance. As always, they insist that only what is measurable should count (ignoring Einstein’s dictum), and that central authorities plus corporations should decide what counts.

  4. Joanna Best    

    It’s not worthwhile if it is something schools have to buy. If it’s a concept that teachers can learn about in their reading in teacher training courses and continuing ed, then bring it on. But if it involves contracts at the state level and then jobs of “coaches” (tired teachers turned coaches) to make sure it’s done right. . .NO! Forget that. Enough of that.

  5. Nancy EH    

Maine has had a “proficiency-based education” high school diploma requirement in statute for several years. The actual implementation date has been put off at least once, but it’s now required of the Class of 2018. There has been no research (of which I’m aware) that shows PBE has ever been used anywhere for any length of time to any advantage. And yet, we have it.

How well high schools (let alone pK-8) are doing in creating a PBE environment is anyone’s guess. I’m sure there are many who say they’re doing just hunky-dory and, of course, the Maine Department of Ed is absolutely sure everything’s in place, but I – and others – have our doubts.

    References: Save Maine Schools –
    Maine Department of Education –
    Maine DoE high school list:
    FB page:

  6. chris sturgis    

I think it is important to make a distinction between adaptive software and the efforts of districts to move beyond the policies and practices of the traditional time-based system to a system designed to make sure that students really learn every step of the way, i.e. competency-based or proficiency-based learning. I visit districts and schools that are becoming competency-based, and most use adaptive software as a resource/supplement for students. Competency education does generate more data on student learning for teachers to use about how students are progressing and how they learn — information management systems are important because they create transparency about how students are progressing (as compared to A-F grading, which doesn’t tell you anything about what students actually know and can do). I hope that as you write about competency education you draw a distinction between online learning and creating a new infrastructure to replace the time-based system, which allowed schools to pass students on with Cs and Ds without their actually learning. Online learning is about how we deliver instruction; competency education is the structure that ensures that students are really learning and can apply their skills in new contexts. If you’d like to learn more, there are lots of case studies on districts that are becoming competency-based at — some of them have very limited online learning.
