What do you need to know?


Generative AI represents a significant advancement in the field of artificial intelligence, with algorithms and models designed to autonomously generate content, including images, text, music, and other forms of data. These tools are powerful, and, as educators and learners, we need to explore and understand their capabilities and limitations.

How do they work?

Large Language Models, like those powering ChatGPT, operate using a type of artificial intelligence called deep learning. These models are trained on huge amounts of text, primarily collected from the internet, to learn the structure and patterns of language. The key task during training is to predict the next word in a sequence based on the words that came before it. Through this process, the model refines its ability to generate coherent and relevant text. With billions of internal settings, often called parameters, these models can store a significant amount of linguistic information, allowing them to respond to a wide variety of prompts and questions.
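The training objective described above can be sketched with a toy example. A real LLM is a deep neural network with billions of parameters, but the core idea of predicting the next word from the words that came before can be illustrated with a simple word-frequency counter (a deliberate simplification for intuition, not how ChatGPT actually works):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# tiny corpus, then "predict" the most frequent continuation.
corpus = (
    "the model predicts the next word "
    "the model learns patterns of language"
).split()

# Map each word to a counter of the words observed after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" most often here
```

An actual LLM replaces these raw counts with a learned probability distribution over its entire vocabulary, conditioned on the full preceding context rather than just the previous word.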

Because LLMs are designed to generate language, their outputs might be linguistically sophisticated and persuasive, but factually incorrect. In fact, ChatGPT’s creators warn us that “outputs may be inaccurate, untruthful, and otherwise misleading at times.” To further complicate matters, LLMs may also hallucinate (or fabricate) sources, citing papers that do not exist and misattributing works and quotations. Ongoing training can improve accuracy, but anyone who uses these tools should be aware of the risk that they will produce inaccurate information.

FAQs

Do I need an AI policy in my syllabus?

Yes! Students want and expect guidance about what is, and is not, appropriate use of generative AI tools in your courses.

How can I or my students cite use of generative AI tools on work?

Unlike other text sources, generative AI outputs are not stable texts that represent the work of a single author or a small group of authors. This makes citing generative AI as a source more challenging. Organizations that publish style guides (such as the APA, MLA, and Chicago) provide guidance for attributing work created with generative AI. In addition to citation within the work, we recommend requiring that students add footnotes or endnotes describing how they used AI to support their learning as part of the assessment. Such statements should include the tool(s) used, the prompt(s) provided, and how the AI output was used or adapted in the final work.

I’m concerned about data privacy. What do I do?

Your concern is well-founded. Before using any generative AI tool, read the privacy policy and explore options for opting out of certain kinds of data collection (for example, with ChatGPT, you can opt out of having your data used for training). You should also refrain from inputting personal data, student data, or institutional data into any generative AI tool.

How can I get help with generative AI?

If you are faculty or staff at the College, support is available from the Center for Teaching Learning and Research, the Program in Writing and Rhetoric, and from DLINQ.

If you are faculty or staff at the Institute, Schools Abroad, Language Schools, or Bread Loaf, support is available from DLINQ.

If you are a student, we recommend that you reach out to DLINQ Interns to schedule a consultation, or drop into one of DLINQ’s lab spaces during staffed hours. CTLR Tutors may also provide support within their disciplinary support areas.