Feedback and revision processes are a key piece of designing course projects, and critical for project-based courses. However, securing student buy-in and participation in peer review can be challenging, and students need instruction and practice to produce feedback that is effective and useful to their peers. I recommend a generative writing process that instructors can use with their students to create feedback that is largely descriptive, positive, and forward-looking.

In “Using Assessment to Improve Peer Review Feedback,” a team from the University of Arizona took an intriguing big-data approach (they analyzed “13,717 comments exchanged during peer learning”) to examine the quality of student feedback created under the DES (Describe, Evaluate, Suggest) response model. Drawing on existing research on word count, they found that a plurality of the feedback fell into what they call the “messy middle” – likely too short to be fully successful, but long enough to still be of some use. Further qualitative analysis bore this out, and they propose suggestions for targeting improvements in that messy middle.
I liked their emphasis on the need to structure peer review as an activity for students, one requiring both instruction and assessment. Their primary suggestions for intervention are more structure and more explicit instruction – something like, say, a generative writing model. I also very much appreciated hearing about their experiences with qualitative research on student work, and again recognized my own. They used the eminently sensible DES framework and found that much of what students produced was a hybrid mashup that confused and confounded (“collapsed” is their term) the categories.