Soon the Regents scores will be coming out, with the usual comparisons of this year's scores with previous years' scores. The media, NYSED, and school administrators will be all over this data, claiming it reveals something about what is going on in our classrooms. Most likely, at least when it comes to the English Regents, there will be concern about a decrease in mastery and passing rates.
No one will mention that comparing this year's scores to last year's scores is invalid. As the state pushes the Common Core, more testing, and APPR teacher ratings based on these tests, it is manipulating Regents scoring in order to make it appear that students are doing worse year over year. Teachers will be labeled unsatisfactory because their APPR scores are, in large part, contingent upon the Regents scores.
The email I sent to Steven Katz, NYSED Director of Assessment, explains the problem. I have yet to receive a reply to my question about the rationale for changing the scoring chart in a way that allows more students to fail. Here is my email, followed by the scoring charts for 2011, 2012, and 2013:
I have a few questions about the NYS English Regents scoring chart. In 2011, a student scoring 17 on the multiple choice and 7 points on questions 26-28 would earn a score of 71; in 2012, that student would earn a score of 66; and this year, that student would fail with a score of 63. In 2011, there were 77 boxes on the chart that allowed for passing scores; in 2012, there were 70; this year, there are 57. You get my point.
What is the rationale behind making it more difficult for students to pass year after year? Why are these scoring changes not publicized, when you are well aware that declines in mastery and passing rates are presented by the media and seen by the public as a failure on the part of teachers?
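The "77 versus 70 versus 57 passing boxes" comparison in the email can be checked mechanically: each conversion chart maps a pair of raw scores (multiple choice, essay) to a scaled score, and counting the cells at or above the 65 cutoff shows how much the passing region shrank. A minimal sketch of that count, using small made-up grids in place of the actual NYSED charts (the real charts are much larger, and the numbers below are illustrative only):

```python
# Sketch: count how many cells of a raw-score conversion chart yield a
# passing scaled score. The grids here are hypothetical stand-ins, NOT the
# actual NYSED 2011/2013 charts; only the counting logic is the point.

def count_passing(chart, cutoff=65):
    """Count chart cells whose scaled score meets the passing cutoff."""
    return sum(1 for row in chart for scaled in row if scaled >= cutoff)

# Toy 3x3 grids: rows = multiple-choice raw bands, columns = essay raw bands.
chart_2011 = [[71, 68, 65],
              [66, 64, 60],
              [63, 58, 55]]
chart_2013 = [[63, 60, 57],
              [58, 55, 52],
              [54, 50, 47]]

print(count_passing(chart_2011))  # 4 passing cells in the toy 2011 grid
print(count_passing(chart_2013))  # 0 passing cells in the toy 2013 grid
```

Run against the real published charts, the same count reproduces the 77 / 70 / 57 figures cited in the email: identical raw performance maps to a smaller passing region each year.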