Guidance from administration


Throughout the academic year, you can expect to receive communications from Middlebury’s administration with guidance or updates related to generative AI. This page will provide a record of those communications.

Email to faculty on 2/8/24

Dear colleagues,

We are writing to share updates and guidance relating to generative AI, building on guidance we provided before the fall semester.

Generative AI tools continue to proliferate and advance rapidly. Google, Microsoft, and OpenAI are expected to launch the next generation of their large language models this year and are building additional features that extend the capabilities of existing and future tools. For example, Google has added generative AI features to its Chrome browser, including a Help Me Write feature that is available when a user right-clicks on a text box or field. OpenAI has launched plug-ins for ChatGPT 4 that connect it to third-party services like Wolfram or Speak, an AI language tutor. ChatGPT 4 users can now create their own GPTs, customized versions of ChatGPT that generate responses based on instructions and data sources uploaded by users. The capabilities of these tools extend beyond text into images, audio, video, and data analysis/visualization.

As we did in the fall, we ask that all faculty include a statement in their syllabi that explicitly outlines permitted and not permitted uses of generative AI in their classes. You can review the sample syllabi statements we provided last fall as a starting point. DLINQ is collecting sample syllabus language and use cases; please consider sharing examples from your classes.

We recognize that the sophistication of generative AI tools creates both opportunities and challenges for teaching, learning, and assessment. You will likely need to revise your class assessments to intentionally incorporate generative AI or to mitigate the impact of its use. The Approaching Assessment page provides resources and guidance that will be helpful as you finalize assessment plans for your spring courses. You can also consult with teaching support colleagues in DLINQ, CTLR, and Writing & Rhetoric to discuss options for your class assessments. We want to highlight that generative AI detection tools continue to be unreliable and are not considered valid proof of whether a student used a generative AI tool to complete an assignment. These tools are often biased against students for whom English is a second language, and they can falsely flag students’ work as plagiarized. Therefore, we ask that you avoid relying on these tools.

Faculty are encouraged to expand their own understanding of generative AI and to consider potential impacts and uses for their courses, their research, and their work. DLINQ’s 2024 Digital Detox, which includes a variety of hands-on activities with generative AI tools, is a good starting place for faculty who are new to these tools. We will offer ongoing workshops and conversations in the spring to support your explorations.

Michelle McCauley

Interim Executive Vice President & Provost

Amy Collier

Associate Provost for Digital Learning

Email to faculty on 8/22/23

Dear colleagues,

As you are all aware, the landscape of generative AI is moving quickly, and we are still learning about its impacts on education. Generative AI tools like ChatGPT, BingChat, DALL-E, and Bard may disrupt the ways we currently assess student learning, as students can use those tools to produce various kinds of work, including text, images, and audio. During this academic year, we will engage faculty, staff, and students in developing comprehensive policies and guidelines for Middlebury. Today we write to provide preliminary guidance on how to approach generative AI in your classrooms for the upcoming academic year.

Building on conversations with Middlebury faculty and students during the spring 2023 semester, our guidance is as follows:

1. Communicate clearly with students your expectations related to generative AI via your course syllabi, within the specifics of relevant assignments, and in class conversations. You have options for how to handle generative AI in your classes:

Option 1: Use prohibited. Sample syllabus language: Using AI tools (e.g., ChatGPT, Bard) is forbidden in this class. You may not use them to assist in any part of your homework or other assignments. Any use of generative AI tools will be treated as a violation of Middlebury’s academic honesty policies.

Option 2: Limited use. Sample syllabus language: You may use AI tools (e.g., ChatGPT, Bard) to help generate ideas and brainstorm, but only on assignments for which I have given permission to use AI tools, as specified on the syllabus. Outputs generated by these programs may be inaccurate, incomplete, or otherwise problematic. I will hold you accountable for the accuracy of your work. Be aware that use of AI may also limit your own independent thinking and creativity. Do not submit any work generated by an AI tool as your own. If you include material generated by an AI tool, it should be cited like any other reference material (e.g., MLA or APA style citation). Any uncited or inappropriate use of AI tools will be treated as a violation of Middlebury’s academic honesty policies.

Option 3: Required use. Sample syllabus language: You are expected to use AI tools (e.g., ChatGPT and image generation tools) in this class. In fact, some assignments will require it, with appropriate citation (e.g., MLA or APA style citation). Learning to use AI is an emerging skill and we will be learning that skill together. If you have concerns about using these tools, please talk to me about your concerns so that we can find suitable alternatives.

In addition to including language in your syllabus, we ask that you have conversations with your students about appropriate use of AI tools and how they may support or detract from learning in your classes. 

2. Middlebury’s academic honesty policies presume the academic honesty of all students, and thus we encourage faculty to design assignments that most effectively accomplish their learning goals, rather than attempting to prevent or detect unauthorized use of AI. Policing generative AI use, or implementing ultra-restrictive assessment practices to prevent its use (e.g., handwritten tests and blue books), could have the effect of harming students and student learning. This is especially true for students who need accommodations such as extended time, keyboarding in place of handwriting, and use of tools such as Grammarly to support their learning and ability to demonstrate their wide range of knowledge. Additionally, while AI detection tools exist, they are flawed and can generate false positives. Given this, we do not recommend the use of AI detection tools to evaluate submitted work. If you suspect a student has inappropriately used generative AI tools in your class, you may pursue academic disciplinary action as described in the Middlebury Handbook, though it may be very difficult to prove student use of these tools.

3. Before requiring students to use generative AI tools for your class, consider offering alternatives for students who are concerned about their data privacy.

4. Do not input protected student data or institutional data into generative AI tools. These tools are susceptible to security risks, and anything you input into them may not be protected by intellectual property laws. For example, inputting student work into ChatGPT without students’ permission could be considered copyright infringement and could expose FERPA-protected student data. See Middlebury’s data definitions to learn about protected data.

5. Many of you will choose to use generative AI to support your work as a faculty member. We hope you will approach your own use of AI from a mindset of transparency, and consider when it may be appropriate to inform others about how and when you are using generative AI. As always, we invite you to consult with DLINQ to explore approaches to assessment that make use of or that mitigate the impacts of generative AI tools.

During the upcoming semester, President Patton will charge an Advisory Group to develop robust academic policies and plans for generative AI at Middlebury. This group will engage the broader Middlebury community through workshops, open conversations, and opportunities for feedback on work in progress. The group will be convened by Interim Provost Michelle McCauley and Associate Provost for Digital Learning Amy Collier, and you will hear more about this over the year. Finally, we recognize that the proliferation of these tools may require us as a community to reimagine aspects of teaching, learning, and assessment. We are grateful to be doing this work with colleagues who have repeatedly demonstrated their commitment to our students as learners.

Thank you, and best of luck for the start of the semester.

Sincerely,

Michelle McCauley
Interim Executive Vice President and Provost

Amy Collier
Associate Provost for Digital Learning

Email to students on 8/22/23

Dear students,

We are writing to provide you with guidance on how to approach generative AI tools in your classes in the upcoming academic year. Generative AI tools, like ChatGPT, BingChat, DALL-E, and Bard, produce responses or outputs to prompts and questions provided by the user. You may already be using these tools in various aspects of your life. However, here at Middlebury, you should expect that there may be restrictions on your use of these tools as part of your educational experience.

We are providing this initial guidance now, but we are planning to develop more comprehensive policies and guidance by engaging with faculty, staff, and students during the upcoming year. The landscape of generative AI is moving quickly, and we are all learning together. For now, our guidance is as follows:

1. Comply with stated class policies on generative AI use. Your faculty should include a statement about the use of AI in their class, either forbidding its use or allowing it with some restrictions. If a professor has not specified their policy on generative AI use, you should assume that it is forbidden unless the professor grants you permission verbally or in writing. Failure to comply with class policies on generative AI use could be considered a violation of Middlebury’s academic honesty policies and result in disciplinary action.

2. If you are permitted to use AI tools in your class, you must provide appropriate citation (e.g., MLA or APA style citation). Please consult with your faculty member as to the correct citation style. You are responsible for the accuracy of any work you submit, including any output generated by an AI tool. As always, if you have concerns about using AI please talk with your professor. There may be alternative methods for completing class assignments.

3. Finally, do not input any private or confidential data about you, your classmates, or your professor into generative AI tools. Generative AI tools use any data you input to train their models, and inputting your or others’ personal data exposes that data to security and privacy risks.

During the upcoming semester, President Laurie Patton will charge an Advisory Group to develop robust academic policies and plans for generative AI at Middlebury. This group will engage the broader Middlebury community through workshops, open conversations, and opportunities for feedback on work in progress. The group will be convened by Interim Provost Michelle McCauley and Associate Provost for Digital Learning Amy Collier, and you will hear more about this over the year.

DLINQ’s interns are available to support you as you explore the potential and appropriate use of generative AI technologies.

Thank you, and best of luck for the start of the semester.

Sincerely,

Michelle McCauley
Interim Executive Vice President and Provost

Smita Ruzicka
Vice President for Student Affairs