Approaching Assessment


The rise of generative AI poses new questions about how to assess student learning effectively. Designing assessments in the context of generative AI still relies on best practices for assessment design, with some added considerations. For example, it helps faculty to understand the capabilities and limitations of these tools so they can identify how the tools might support or detract from an assessment. As you design a course assessment, consider entering your assessment parameters into a generative AI tool to see how well its output meets the assessment criteria and how that might shape your expectations for appropriate and inappropriate use of generative AI.

Questions to guide assessment design

Derek Bruff, interim director of the Center for Excellence in Teaching and Learning at the University of Mississippi, shares these six questions to guide your assessment design:

  • Why does this assignment make sense for this course?
  • What are specific learning objectives for this assignment?
  • How might students use AI tools while working on this assignment?
  • How might AI undercut the goals of this assignment? How could you mitigate this?
  • How might AI enhance the assignment? Where would students need help figuring that out?
  • Focus on the process. How could you make the assignment more meaningful for students or support them more in the work?

When thinking about the purpose of your assessment, consider whether you want to assess what students have learned (also called assessment of learning) or whether you want to use assessment to help students learn (also called assessment for learning). When designing assessments of learning, you may want to explore approaches that minimize the use of generative AI; when designing assessments for learning, you might intentionally build in opportunities for students to use AI as part of their learning.

Assessments using generative AI

Generative AI tools can be integrated into assessments in ways that support, and potentially advance, student learning. When designing assessments that leverage generative AI, focus on developing and evaluating higher-order skills such as critical thinking, communication, and synthesis of information. For example, you could provide an output generated by an AI tool and ask students to evaluate it for biases or limitations, assessing their ability to think critically. Students might be tasked with improving an AI-generated essay by revising it, adding details and examples, and enhancing its coherence. The key is crafting prompts that use AI-generated content as a starting point while requiring students to demonstrate human strengths such as creativity, collaboration, and communication. With intention and care, assessments that use generative AI can support student learning while also teaching students how to use these tools effectively and appropriately.

Banning generative AI

Faculty may ban the use of generative AI in all or some aspects of their assignments and assessments. It is important to note, however, that policing student use or detecting AI-generated work will be difficult, if not impossible. AI-detection tools cannot consistently or accurately identify AI-generated work, and they produce false positives. Policing generative AI use, or implementing highly restrictive assessment practices to prevent it (e.g., handwritten tests and blue books), can harm students and student learning. This is especially true for students who need accommodations such as extended time, keyboarding in place of handwriting, or tools such as Grammarly to support their learning and their ability to demonstrate what they know.