
Igel

(35,304 posts)
9. There's a difference in definition.
Sun May 10, 2015, 10:36 AM

What's a grade showing, anyway?

One view is that it's showing relative mastery. How well did the group do, and how do you rank within that group? Then the normal distribution is for you, with a gut check: if the scores look too different from a normal distribution, then either the test was too hard or too easy (or the students were exceptionally smart/hardworking or stupid/lazy, or the teacher was exceptionally good or bad). Either way, you'd expect a lot of Cs, or whatever your "this kid has average mastery" grade is. Often this kind of grading assumes there's a core of content that should be mastered for a C, but that some kids will go above and beyond what's required and learn material that isn't required.
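To make that first definition concrete, here's a minimal sketch of grading on the curve (my own illustration, not anything from a real gradebook, and the z-score cutoffs are arbitrary placeholders): each letter comes from how far a score sits from the class mean, measured in standard deviations.

```python
from statistics import mean, stdev

def curve_grades(scores):
    """Letter grades from each student's distance (in std devs) from the class mean."""
    values = list(scores.values())
    mu, sigma = mean(values), stdev(values)

    def letter(z):
        if z >= 1.5:
            return "A"
        if z >= 0.5:
            return "B"
        if z >= -0.5:
            return "C"   # the "average mastery" band -- most kids land here by design
        if z >= -1.5:
            return "D"
        return "F"

    # Gut check, per the post: if the letters come out badly skewed,
    # suspect the test (too hard/easy) or an atypical class.
    return {name: letter((s - mu) / sigma) for name, s in scores.items()}

class_scores = {"Ann": 92, "Bo": 81, "Cy": 78, "Di": 74, "Ed": 60}
print(curve_grades(class_scores))
# {'Ann': 'B', 'Bo': 'C', 'Cy': 'C', 'Di': 'C', 'Ed': 'D'}
```

Notice that most of the class lands on C no matter how much anyone actually learned; the grade reports rank, not amount mastered.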

Another view is that the grades show absolute mastery. There's X amount of information, and no more. It all gets presented and reviewed, and there are no surprises. The test is a hurdle to see whether you've "mastered" this information adequately, and "adequately" might even be 100% if there's only a small amount of material to learn. These are criterion-referenced tests. The problem is that you often have to limit the content, because "everybody should be able to make an A."
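Same kind of sketch for the second definition, again just my illustration: fixed cutoffs against the material itself, with the common 90/80/70/60 breakpoints standing in for whatever the criterion actually is.

```python
def criterion_grades(scores, cutoffs=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Letter grades from fixed cutoffs on the material, regardless of how classmates did."""
    def letter(score):
        for cutoff, grade in cutoffs:
            if score >= cutoff:
                return grade
        return "F"

    return {name: letter(s) for name, s in scores.items()}

class_scores = {"Ann": 92, "Bo": 81, "Cy": 78, "Di": 74, "Ed": 60}
print(criterion_grades(class_scores))
# {'Ann': 'A', 'Bo': 'B', 'Cy': 'C', 'Di': 'C', 'Ed': 'D'}
```

With a limited enough body of content, every score can clear the A cutoff, which is exactly the trade-off described above.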

If you like the second definition, then it's possible to use that information to evaluate teachers, provided a lot of assumptions hold true. (Thing is, a lot of those assumptions often don't.)
