What are we marking for anyway? (1)

I’ve thought a lot about marking, and I have never come up with any good answers.

What exactly are we marking for?

The topic that got me thinking on this recently was a question about learning relatively rare irregular noun paradigms in Greek. Language instructors, I feel, have a tendency to include the irregular patterns as something students are tested on, sometimes to a level of perverse sadism. And so, is it fair that, say, ναῦς should be weighted the same as λόγος? What would the effect and the rationale of altering the weighting be? Suppose, for example, that we scored a full paradigm of ναῦς on a test as 1 mark, compared to λόγος at, say, 8. Suddenly we de-emphasise the importance of less common forms, reflecting that they are less important for reading. The argument might go the other way: that we should reward students who put in the extra work and learn something less common. Then λόγος would be 1, and ναῦς would be worth bonus points for difficulty.

Actually, neither of these is a good solution, really. Though, if I had to choose between 1-1, 1-8, and 8-1, I would probably argue that weighting less common forms with fewer marks makes the most sense.
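To make the arithmetic of that weighting concrete, here is a minimal sketch in Python. The weights, the choice of paradigms, and the scoring function are all hypothetical illustrations of the 8-to-1 idea above, not a proposed marking standard.

```python
# Hypothetical weights: common paradigms count for more of the final mark.
WEIGHTS = {
    "λόγος": 8,  # common second-declension noun, weighted heavily
    "ναῦς": 1,   # rare irregular noun, weighted lightly
}

def score(results):
    """results maps paradigm name -> fraction of its forms produced correctly.

    Returns the weighted overall mark as a fraction between 0 and 1.
    """
    earned = sum(WEIGHTS[p] * frac for p, frac in results.items())
    total = sum(WEIGHTS[p] for p in results)
    return earned / total

# A student perfect on λόγος but only half-right on ναῦς:
# earned = 8*1.0 + 1*0.5 = 8.5 out of 9, i.e. about 0.94.
print(round(score({"λόγος": 1.0, "ναῦς": 0.5}), 2))
```

Under this scheme, shaky control of a rare paradigm barely dents the mark, whereas the reverse weighting (1 for λόγος, 8 for ναῦς) would punish exactly the student who can read the most common words.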

The thing that really matters is the question of what we are testing for. If we are testing to see whether students have learnt the material, then our marking is diagnostic. In which case, all marking ought to do is feed back into our teaching – students haven’t mastered the forms of ναῦς? I need to produce more content and expose them to more comprehensible repetitions of ναῦς in its forms until they do. Oh, wait, they’re not on top of λόγος? We need to backtrack and not move forward.

In this sense, the semesterisation of education is deleterious. When testing and marking that ought to be diagnostic becomes instead evaluative, all this tells me is that some students mastered the material to a greater degree than others. And yet, apart from fail grades, this has zero impact on pedagogy and pacing. If a Greek course were divided up into discrete ‘bite-sized’ pieces of atomised information, and students couldn’t proceed without mastering the material to that point, this would be fine. But a semester’s worth of material is not a suitable point to say, “no, not enough, do-over.” That is actually stupid. Imagine a language tutor getting to the end of 100 hours, and then deciding that you hadn’t learnt enough, so they were going to repeat the whole 100-hour block.

So long as you have a cohort of students, language instruction ought to be open-ended to the extent that a class does not move on unless students genuinely comprehend content: 90% of students at 90% comprehension. Testing is not the only way to diagnose that, but it’s one that is difficult to eradicate in formal contexts. What is imperative, however, is that we actually test both what they ought to learn and what they have been taught. Which is why paradigm testing is actually terrible.

In a later post I’ll talk about my issues with essay-marking as an evaluative practice.
