Now Pearson wants to machine-score essay tests.
http://www.huffingtonpost.com/todd-farley/lies-damn-lies-and-statis_b_1574711.html?ref=standardized-testing
As an anti-testing guy, and a writer, I never thought I'd be wowed by a technology that claims to be able to assess student writing without even being able to, you know, read. In the end, however, I couldn't help but be impressed by the shiny statistics provided in support of the study's own findings. There was page after page of tables filled with complicated numbers, graph after graph filled with colorful lines, plus the Pearson r, all purporting to show the viability (if not superiority) of automated essay scoring engines when compared to human readers.
Even if all that is true, what exactly is this study claiming about the wonders of automated essay scoring? Come to find out, very little. Perhaps most importantly, the study makes no claim that those automated scoring engines are able to read, which they emphatically cannot do. That means that even though every single one of those automated scoring engines was able to pass judgment on every single one of the essays painstakingly composed by American students, not one of those scoring engines understood even one word from all those kids.
Provocative thoughts in those essays? The automated scoring programs failed to recognize them. Factual inaccuracies? The scoring engines didn't realize they were there. Witty asides? Over the scoring engines' heads they flew. Clichés on top of clichés? Unbothered by them the scoring systems were. A catchy turn of phrase? Not caught. A joke about a nitwit? Not laughed at. Irony or subtlety? Not seen. Emotion or repetition, depth or simplicity, sentiment or stupidity? Nope, the automated essay scoring engines missed 'em all. Humanity? Please.