We call it “AI-assisted grading,” where machines assist humans in grading consistently and quickly. For many questions, Gradescope’s AI will be able to learn how to grade all student submissions from a small number of answers graded by the instructor, such that an instructor would only have to grade about ten answers out of a hundred submissions. For known questions, Gradescope’s AI will be able to grade instantly, without any human input.
This seems to be the evolving narrative of man and machine: collaborating for the greater good, software and robots becoming “partners” in work rather than usurpers of it. And for the time-stressed teacher or teaching assistant, having to “grade” ten essays instead of hundreds is certainly appealing.
But nothing changes on the essayist’s end, except of course the question. Now it’s “What does the software want?” instead of “What does the teacher want?”
I’m not anti-assessment. Really, I’m not. But I am more interested in feedback than grades. And as someone who graded tens of thousands of pieces of student writing in my career, an assist from AI certainly has its appeal, assuming, of course, that traditional methods of assessing the work are what you want.
I wonder, however, how many of those essays are written for authentic audiences. How many of them end up being read by anyone other than the teacher, or now, the program? I wonder how many are assessed by the impact they have on real problems and conversations in the community and the world. And to what extent is “the grade” a sound motivator for students to “improve” their writing (or anything else, for that matter)?
My mantra remains: Technology for learning first, teaching second. Otherwise, this is just about working harder to do the wrong thing right.
(Image credit: Camille Kimberly)