At some schools, failure goes from zero to 50

flashl, Donating Member (1000+ posts), Tue May-20-08 05:11 AM
Original message
At some schools, failure goes from zero to 50
In most math problems, zero would never be confused with 50, but a handful of schools nationwide have set off an emotional academic debate by giving minimum scores of 50 for students who fail.

Officials in schools from Las Vegas to Dallas to Port Byron, N.Y., have proposed or implemented versions of such a policy, with varying results.

Their argument: Other letter grades — A, B, C and D — are broken down in increments of 10 from 60 to 100, but there is a 59-point spread between D and F, a gap that can often make it mathematically impossible for some failing students to ever catch up.

"It's a classic mathematical dilemma: that the students have a six times greater chance of getting an F," says Douglas Reeves, founder of The Leadership and Learning Center, a Colorado-based educational think tank who has written on the topic. "The statistical tweak of saying the F is now 50 instead of zero is a tiny part of how we can have better grading practices to encourage student performance."

USA Today
Sancho, Donating Member (1000+ posts), Tue May-20-08 06:11 AM
Response to Original message
1. You can't perform meaningful math operations on ordinal data anyway...
Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680.

So USA Today (and all the state officials cited in the article) have only confirmed that they can't read and do not know much about measurement. Most first courses in measurement in education, psychology, or science discuss the basics.

Grades today are no more than a record of participation. They have very little real use as measures of performance. All ordinal measures (grades, Likert surveys from "agree to disagree", percentile ranks, etc.) have unequal intervals and should not be averaged, added, or compared. Schools, universities, pollsters, newspapers, and most of the public just don't know better so they just do it anyway.

(If you get really interested in this, the measurement literature discusses it all the time: http://www.rasch.org/rmt/rmt111n.htm for example...)
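A quick way to see the problem with averaging ordinal codes: any order-preserving recoding of the categories is equally legitimate for ordinal data, yet it can change which group "wins" on the mean. A minimal sketch in Python (the responses and codings here are made up purely for illustration):

from statistics import mean, median

# Two order-preserving codings of the same 5-point ordinal scale; for
# ordinal data the second is just as "legal" as the first.
coding_1 = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
coding_2 = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}

group_a = [1, 5, 5]   # hypothetical responses, ordinal categories 1..5
group_b = [4, 4, 4]

for name, coding in (("coding 1", coding_1), ("coding 2", coding_2)):
    a = [coding[r] for r in group_a]
    b = [coding[r] for r in group_b]
    print(f"{name}: mean A = {mean(a):.2f}, mean B = {mean(b):.2f}; "
          f"median A = {median(a)}, median B = {median(b)}")

# coding 1: mean A = 3.67, mean B = 4.00  -> B looks "higher" on average
# coding 2: mean A = 7.00, mean B = 4.00  -> A looks "higher" on average
# The medians order the groups the same way under both codings.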

Steven Friess of USA Today (pasted below) demonstrates that journalists don't know what they are doing. Typical...

"A look at how a minimum-50 policy would affect three hypothetical students in their four marking periods:
Student 1
Scores: 30, 40, 80, 80
Average: 57.5, an F
Average with 50 Policy: 65, a D
Student 2
Scores: 40, 40, 40, 80
Average: 50, an F
Average with 50 Policy: 57.5, an F
Student 3
Scores: 0, 70, 70, 70
Average: 52.5, an F
Average with 50 Policy: 65, a D"
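The arithmetic above is easy to verify. Here is a minimal sketch of the computation the article describes (the floor-then-average method is USA Today's; the function and names are mine):

def average(scores, floor=None):
    # Apply the minimum-score floor (if any) to each marking period,
    # then take the plain arithmetic mean, as the article does.
    if floor is not None:
        scores = [max(s, floor) for s in scores]
    return sum(scores) / len(scores)

students = {
    "Student 1": [30, 40, 80, 80],
    "Student 2": [40, 40, 40, 80],
    "Student 3": [0, 70, 70, 70],
}

for name, scores in students.items():
    print(f"{name}: {average(scores):.1f} raw, "
          f"{average(scores, floor=50):.1f} with the 50 floor")

# Student 1: 57.5 raw, 65.0 with the 50 floor
# Student 2: 50.0 raw, 57.5 with the 50 floor
# Student 3: 52.5 raw, 65.0 with the 50 floor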
 
HereSince1628, Donating Member (1000+ posts), Tue May-20-08 06:53 AM
Response to Reply #1
2. Isn't the Kolmogorov-Smirnov test intended to compare ordinal datasets?

 
Sancho, Donating Member (1000+ posts), Tue May-20-08 07:43 AM
Response to Reply #2
3. That is one of a world of non-parametric statistical tests...different issue.
K-S is a hypothesis test on distributions, and there are also non-parametric statistics for correlation, other kinds of hypothesis testing, and many other applications.
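For the record, a two-sample K-S test takes one line with scipy (assuming scipy is installed; the score samples below are made up for illustration):

from scipy.stats import ks_2samp

# Hypothetical final-grade samples from two schools.
school_a = [55, 62, 68, 71, 74, 78, 83, 86, 90, 95]
school_b = [53, 60, 66, 72, 75, 79, 82, 88, 91, 93]

stat, p = ks_2samp(school_a, school_b)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")

# A large p means we fail to reject that the two score distributions are
# the same -- which, per the point below, says nothing about whether a
# given score reflects the same knowledge at the two schools.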

In this case, the issue is the quality of the original data, which is usually reported descriptively as if it were "meaningful" (grades, etc.). Raw scores here don't measure up (pun intended). If you determined that the grade point average at Harvard (as a distribution) was NOT "significantly" different from the grade point average at Podunk Junior College (a likely finding), would you conclude that a B average at Podunk was the equivalent of a B at Harvard?

The original grade point averages are not compared on the same meaningful metric of knowledge, so that conclusion would be flawed.

If you apply a statistical test to a hypothesis, you are responsible for meeting the mathematical assumptions of the test and for being aware of the limitations of the data you are using... also something that many authors don't do! Supposedly, scholarly journal reviewers check for such things, even though you can find glaring examples of problems in the most prestigious scientific publications.

Studies of published research indicate that the quality of the original data is one of the most common problems. GIGO (garbage in, garbage out). It is relatively easy to get interval level (good quality) data for "grading", but most schools don't even try.
 
HereSince1628, Donating Member (1000+ posts), Tue May-20-08 10:16 AM
Response to Reply #3
4. Yeah, I know I have had a few courses in non-parametric statistics
Edited on Tue May-20-08 10:22 AM by HereSince1628
My point was that ordinal data can actually be compared.

There certainly are issues with the validity of the evaluation devices used by instructors, which end up pooled into summative data. Listening to students coming through my first-year biology and zoology courses, it is clear that in high school they have been exposed to complex systems of assessment. It is common for these students to expect a course to include attendance; participation points; homework; group and individual inquiry-based work submitted as writing projects, oral reports, and/or PowerPoint presentations; objective (lol!) exams; standardized national exams; and the devil's playmate, "extra credit." Unfortunately, among the end-users of the "grades" (students, parents, graduation committees, admissions committees for graduate and professional programs, and a few employers) there is little concern about whether this array of components can be comparably assessed, let alone pooled, with similar contributions of variance, into the single categorical score that is supposed to typify the student's performance in a course. There is even less concern among the end-users that a student assessment be diagnostic and well suited to guide student development at the collegiate/university level.

Setting all that aside and turning to a different topic: it seems to me that the article above, and the proposed solution referred to in the title, show how tradition places blinkers on the way we summarize student performance data.

Why is the summative performance assessment based on the arithmetic mean of its constituent component assessments (which themselves are not equivalent in type, validity, or contributions of variance)? Everyone knows averages are leveraged by outliers. Why isn't the assessment based on the median? Why is there no measure of dispersion? Why not assign grades based on interquartile ranges of the underlying assessment data? Don't the end-users of the grades want to know both how a student performs and how consistent a performer the student is? (No! They don't seem to want that at all.) A quick calculation on the article's own numbers makes the outlier point, as sketched below.
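A minimal sketch of that point, using Python's statistics module on the article's hypothetical Student 3 (one zero among three passing marks):

from statistics import mean, median, quantiles

scores = [0, 70, 70, 70]   # the article's Student 3: one zero, three C's

print("mean  :", mean(scores))    # 52.5 -> an F, leveraged by the single zero
print("median:", median(scores))  # 70   -> the typical marking-period score
q1, _, q3 = quantiles(scores, n=4)
print("IQR   :", q3 - q1)         # a simple dispersion measure grades omit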

With the myriad ways one could construct a grading system, why mandate one that uses naked arithmetic averages linked to categorical scores? I suspect it is because the end-consumers of grades want it that way. They want one value that sums up and communicates a judgment of a student's performance. They think in terms of performance "on average," with no nuance about the various ways to describe "typical" performance. And, perhaps most importantly, they (especially parents and relatives) want a reported measure that works across generations. I suspect that collectively the end-consumers want it just like it is, and that they want it that way even more than they want its structure and underlying principles to be useful to educators or to be mathematically sound. The folks outside education want tradition, albeit with students hoping, if not praying, for a bit of bias in their favor.

 
Sancho, Donating Member (1000+ posts), Tue May-20-08 05:20 PM
Response to Reply #4
5. I actually doubt that we disagree...
Ideally, students would have precise measures of what they know and can do. We can measure most "school knowledge" objectively, and we do so more in reading and math than in science, though there is good work in science assessment that is pretty useful. The newspapers make lots of crazy statements, but I've long been peeved by the myth that a "grade" has meaning out of the context of the difficulty of the test. At this point, there is no reason that schools (or universities) can't measure basic skills and knowledge pretty accurately. What "grade" you want to give the student is the arbitrary standard. I would not advocate a simple average; I would prefer using better measures.

I was just noticing a few myths in the original article.

Applications of Rasch Measurement in Science Education, edited by Xiufeng Liu (State University of New York, Buffalo) and William Boone (Miami University, Ohio). $63 hardcover (ISBN 1-934116-00-9), $51 softcover (ISBN 1-934116-01-7).
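For anyone curious what the Rasch model actually computes, here is a minimal sketch of the dichotomous case (toy numbers of my own, not from the book): the probability of a correct response depends only on the difference between person ability and item difficulty, both expressed on the same interval (logit) scale.

import math

def rasch_p(ability, difficulty):
    # Dichotomous Rasch model: the probability of a correct response is a
    # logistic function of (ability - difficulty), both in logits.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Toy numbers: one student (ability 0.5) facing three items.
for difficulty in (-1.0, 0.0, 1.5):
    print(f"item difficulty {difficulty:+.1f}: "
          f"P(correct) = {rasch_p(0.5, difficulty):.2f}")

# item difficulty -1.0: P(correct) = 0.82
# item difficulty +0.0: P(correct) = 0.62
# item difficulty +1.5: P(correct) = 0.27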
 
Ka hrnt, Donating Member (235 posts), Mon May-26-08 10:03 AM
Response to Reply #1
6. What exactly did they do wrong?
"...demonstrates that journalists don't know what they are doing."

 
Ka hrnt, Donating Member (235 posts), Mon May-26-08 10:14 AM
Response to Original message
7. Massive grade inflation!
Wow... this is a bad idea.

"It's a classic mathematical dilemma: that the students have a six times greater chance of getting an F," says Douglas Reeves, founder of The Leadership and Learning Center, a Colorado-based educational think tank who has written on the topic."

The "odds" being six times greater of failing would only be true if you were randomly selecting a number between 0-100. If this were true it would imply grades are random. Short of Christmas-treeing a multiple choice test, grades are NOT going to be random. This is what education "think tanks" come up with? Scary.
 