The Common Core State Standards initiative brought with it a complication just as burdensome to educators as the task of overhauling the content and depth of what is taught in schools. That complication is the assessment piece. States have to decide who they will assign the mission of creating a test to measure student performance in the Common Core subject areas, namely math and English. Like dating, if the match isn’t great, both parties can go their separate ways.
Take the case of Ohio. Like 22 other states and the District of Columbia, Ohio gave the Partnership for Assessment of Readiness for College and Careers (PARCC) a shot at being the state’s testing provider for all K-12 schools in 2010. The result? Last June, Ohio decided to drop the PARCC after its first year of implementation. Educators across the board, from superintendents to principals to teachers, gave the PARCC lousy reviews in an online poll, citing the test’s online requirements as a key issue. Before Ohio’s departure from the consortium of PARCC states, several other states had already dropped the PARCC, so that as of July 2015 there were only nine active members left!
Clearly the good folks at the PARCC are backpedaling, attempting to figure out how best to serve the needs of the remaining members. In the fall of 2014, an organization called Teach Plus was hired to enlist the help of over 1,000 teachers to review the quality of the PARCC assessment. One finding of this one-day study was that participating teachers were mixed on whether the test was grade-appropriate or too challenging. But why such ambiguity?
The PARCC tests were deployed without educators being able to decipher results for their students. In other words, teachers were unable to determine the significance of students’ scores. Imagine the frustration of all stakeholders: getting student results in the mail, inspecting them, and being unable to tell whether a student did well or poorly (achievement) on the test. Say a student earned 80 out of 137 possible points on the 11th-grade English test section. Is that an F grade at 58%? It’s not that easy. Individual student performance needs to be compared to the results of every other student taking the same test and measured against a set standard. Up until last school year, there was no way for the PARCC to provide states with performance thresholds…they didn’t exist!
“Cut scores” are what give meaning to the scores students earn on PARCC tests. To give around 5 million students, their parents, and teachers a true sense of how students performed, educators met in Denver in early September of this year to work on the arduous task of setting cut scores for each performance band. They have already agreed on a five-band ranking system (Level 1 to Level 5, with Level 5 the highest). That was not as difficult as going through each test question and determining how many points (out of the total possible for that particular question) a student would have to earn to fall into each of the five levels. For example, on a particular four-point question, if a student earns 2 points, is that Level 2, Level 3, or Level 4? As you can see, finalizing PARCC results is a complicated process.
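To see how cut scores turn a raw point total into a performance band, here is a minimal sketch. The cut-score values below are entirely hypothetical, invented for illustration; the actual PARCC thresholds were still being set at the time of writing.

```python
def performance_level(raw_score, cuts):
    """Map a raw score to a performance band (Level 1-5).

    cuts: ascending list of the minimum raw scores needed to
    reach Levels 2 through 5. A score below cuts[0] is Level 1.
    """
    level = 1
    for next_level, cut in enumerate(cuts, start=2):
        if raw_score >= cut:
            level = next_level
    return level

# Hypothetical cut scores for a 137-point test (illustrative only):
# Level 2 starts at 40 points, Level 3 at 65, Level 4 at 85, Level 5 at 110.
hypothetical_cuts = [40, 65, 85, 110]

# The student from the example above, with 80 of 137 points,
# would land in Level 3 under these made-up thresholds.
print(performance_level(80, hypothetical_cuts))
```

The point of the sketch is that the same 80-point score could fall into a different band entirely if the committee in Denver shifted any of those thresholds, which is exactly why the cut-score decisions carry so much weight.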
Now imagine a school leader, a superintendent or principal, trying to explain this complexity to a parent group. Parents are used to a simplified “right or wrong” paradigm. Many of them will not understand that their child’s results are dictated more by where these performance levels end up, that is, by where the test item reviewers believed a student should be (in terms of understanding) at each grade level, disregarding students’ unique socio-economic experiences in the classroom and whatever nuances exist across state lines. Oh, the joys of Common Core.