(I confess, I wrote this in more of a twitter-thread style of composition, so it reads a bit scattered)
I take it that summative assessment is an attempt to assess to what extent a learner has met the overall final learning goals of a unit of instruction. I’m going to assume that we’re talking about something like a 12–14 week semester block, which is part of why it’s a problem. Let’s imagine we’re talking about a Latin 101 class, for fun.
Summative assessment is certainly possible. That is, we can design and deliver an assessment that accurately gauges whether a learner can do the things we expect them to do. The question is, what is this information useful for? Formative assessment is clear in its goal: it ought to feed back directly into the learning/teaching process, adjusting instruction to match learners’ competencies.
So let’s consider possible outcomes of summative assessment:
Can this learner move on to the next module?
In a way this is a kind of “block” formative assessment. And that’s why it’s bad. Because if you judge that a learner isn’t ready, e.g. for Latin 102, and your instructional blocks are always this large, you don’t have much choice but to send them back to do Latin 101 again. That’s disheartening and discouraging.
And, if your Latin 101 is exactly the same material, it’s also going to be boring. There’s a way around that: if your Latin 101 covers the same competencies but its content continually varies, then repeating it needn’t mean repeating the same material.
Ideally, you wouldn’t have to gatekeep learning in blocks this large. You’d find out a student can’t do x, y, z at the level you want, and you’d scale and individuate their instruction over a smaller block to accommodate this. On an individual level this is possible, but it becomes increasingly difficult at the group level.
Summative assessment is used to make generalised conclusions about a student’s academic ability.
I’m kind of dubious about this, because (a) there are so many individuating factors for any given module, and (b) it’s not at all clear to me that marks on a language module accurately reflect broader academic skills anyway.
This is even worse if assessment is meant to encompass “student effort”.
A terminal assessment of competency.
I actually think this is a legitimate outcome of summative assessment. That is, where a student is finishing a course of study, or else needs some kind of benchmarking for external purposes, you can do a summative assessment against can-do competencies, and deliver a report that says “so-and-so can do x, y, z with the language, to this level”. Ideally this would be a kind of proficiency testing against clear standards like ACTFL or CEFR.
I want to suggest, though, that most end-of-unit assessment actually goes nowhere. Sure, it’s an assessment that gauges learner competencies against module outcomes, and presuming they pass minimally (45 or 50% or whatever your jurisdiction uses), they continue on. Nobody takes any kind of pedagogical action on this information, except *maybe* the student, but let’s be honest – very, very few students will look at corrections on a final exam (if they even get them) and then develop a useful action plan to address any remaining gaps.
So that information, if it goes anywhere, just gets fed into the bureaucratic machinery of the institution to make mostly meaningless conclusions about teacher efficacy, program efficacy, or student populations. Most of which, I would submit, are underdetermined by the available data.
In sum, summative assessment is difficult to do well because it’s difficult to find its proper goal.