Developing Student-Generated Rubrics
I tell them that the upcoming paper will be peer-scored on a scale of 1-10, where 1 is low and 10 is high. I explain the task. (There are three department-dictated tasks, the titles of which are uniform because they appear in their graduation portfolio.)
Then I ask, "Given what you know of this paper, what do you think would characterize the very best paper?" It's a fluency task, so I write on the board every attribute they offer, but I do put their attributes into my own three prearranged categories (response to the task, organization, mechanical control). Their ideas always fit those three categories. I don't name the categories; I just make my lists in three clumps.
"All right, what would be the attributes of an awful paper?" I write those attributes on the other side of the board, again in three clumps. "So what would a paper look like if it were not good enough to be up here nor bad enough to be down here?" I write those attributes in three clumps in the middle of the board.
Then I flash the first page of a same-task paper from a previous group on the overhead screen and ask everyone to read just that first page and give it a number. We compare numbers (which typically run a range of about four), and we talk about rationales for ascribing this or that number. That makes them want to adjust what's on the board.
Then I flash the first page of another paper on the overhead, they read and score, and we talk. This time we're typically within about three.
I distribute a whole paper of about four pages to the class, and they read and score. They tend to remain within about three. That's my objective because I tell them that when they score, they will do so in triads, and every paper will be read by two, and if the disparity is greater than one, a paper will be read by a third reader. All I want is a similar kind of read.
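The scoring protocol above — two readers per paper, a third read only when the first two diverge — can be sketched in a few lines. This is my own illustration, not part of the original classroom procedure; the function name and the averaging shown in the usage note are assumptions.

```python
def needs_third_read(score_a, score_b):
    # Each paper is scored by two readers in a triad; a disparity
    # greater than one point sends the paper to a third reader.
    return abs(score_a - score_b) > 1

# Scores of 6 and 8 differ by two points, so a third read is required;
# 6 and 7 differ by only one, so the two reads stand.
print(needs_third_read(6, 8))  # True
print(needs_third_read(6, 7))  # False
```

How the two (or three) reads are combined into a final score is not specified in the account above; averaging would be one plausible choice.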
I find the activity very powerful, and their use of the rubric very powerful because they routinely self-report that they think about the rubric when they write, and that helps them know what a good paper is supposed to be like.
There are two drawbacks, and I haven't resolved either of them. First, they are very critical readers and scorers, so the scores tend to be lower by peer judgment than by my judgment. As many as a third of them complain that they didn't get a fair score.
The other problem is that they just don't know very much about mechanical control. Perhaps they didn't get the information from their teachers, but a depressingly high percentage of them believe that commas are associated with breathing and that as long as there's a noun and a verb (which they know a lot about), there must be a sentence (which they don't know very much about).
I follow the first paper with a second, and for the second, they revise the rubric to fit the task. They're usually surprised that the prompt dictates much of the rubric. I always wonder why that's such a surprise to them, given that we've been talking about prompt-driven assessment for 25 years, and they must have had teachers who know about that.
Then I read and score two consecutive papers on a ten-point scale. I begin my scoring by giving each student an automatic 7 for the quality of the content, and I deduct in increments of .5 for instances when they're off-point, when their information is inaccurate, or when the organization of the paper compromises the quality of the presentation. I then read each paper a second time, giving each student an automatic 4 for mechanical control, and I deduct .5 for each mechanical error. On a third and cursory review, I make a subjective judgment about cleverness, interesting use of language, flexible thinking, and authenticity of voice, giving back as many as 3 points and as few as none. Some students do better than 10 on a ten-point paper.
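The arithmetic of that three-pass scoring scheme can be made explicit with a short sketch. The function name and parameters are mine, invented for illustration; the constants (the automatic 7 and 4, the .5 deductions, the 0-3 bonus) come directly from the description above.

```python
def score_paper(content_deductions, mechanical_errors, bonus):
    # Pass 1 (content): start from an automatic 7 and deduct 0.5 for
    # each instance of being off-point, inaccurate information, or
    # organization that compromises the presentation.
    content = 7 - 0.5 * content_deductions
    # Pass 2 (mechanics): start from an automatic 4 and deduct 0.5
    # for each mechanical error.
    mechanics = 4 - 0.5 * mechanical_errors
    # Pass 3 (cursory): a subjective bonus of 0 to 3 points for
    # cleverness, interesting language, flexible thinking, and voice.
    return content + mechanics + min(max(bonus, 0), 3)

# Two content deductions, four mechanical errors, a 2-point bonus:
print(score_paper(2, 4, 2))  # 10.0
# A flawless, lively paper can exceed 10 on the ten-point scale:
print(score_paper(0, 0, 3))  # 14.0
```

The second example shows why some students "do better than 10": the automatic 7 and 4 already sum to 11 before the bonus is added.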
That's how I do it. I have three such classes this semester with 38, 41, and 42 students respectively.
Thank you for your interest.