This Panel Conversation was held via Zoom on Monday, May 15, 2023.
This panel, the second in a series, was organized to offer our audiences the opportunity to hear directly from research teams currently conducting ChatGPT research on the Illinois campus. This session is part of our ongoing focus on ChatGPT's impact on research and teaching, and on what we must consider as we continue to study teaching and learning in response to the rapid development and implementation of ChatGPT and other generative AI tools.
Generative AI: Implications and Applications for Education?
PI: Bill Cope, Educational Policy, Organization and Leadership, College of Education
Research Team: Anastasia Olga Tzirides (Google, Illinois Academic researcher – UIC Visiting Lecturer); Gabriela Zapata (University of Nottingham, UK); Duane Searsmith (eLearning Tech Engineer); Akash Saini (PhD student); Mary Kalantzis (Professor); Vania Castro (Postdoctoral Fellow); Theodora Kourkoulou (PhD student); John Jones (SUNY); Rodrigo Abrantes da Silva (University of São Paulo); Jennifer Whiting (PhD student); Paulina Kastania (PhD student)
Project: The launch of ChatGPT in November 2022 precipitated a panic among some educators while prompting qualified enthusiasm from others. Under the umbrella term “Generative AI,” ChatGPT is one example of a range of technologies for the delivery of recomposed text, image, and other digitized media. This paper examines the implications for education of one generative AI technology: chatbots supported by large language models (C-LLMs). It reports on an application of a C-LLM to AI review and assessment of complex student work. In a concluding discussion, the paper explores the intrinsic limits of generative AI, bound as it is to language corpora and their textual representation through binary notation. Within these limits, we suggest the range of potential applications of generative AI in education. Access Cope, et al. presentation slides.
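(As an illustration only: the abstract above mentions applying a C-LLM to AI review of complex student work but does not describe the team's actual system. The following minimal sketch shows one generic way such a review could be requested from a general-purpose chat LLM API; the client library, model name, rubric criteria, and prompt wording are all assumptions, not the project's implementation.)

    # Illustrative sketch: asking a chat LLM to review student writing
    # against a rubric. Assumes the OpenAI Python client (openai >= 1.0);
    # the model name and rubric below are placeholders, not the research
    # team's configuration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    RUBRIC = (
        "Review the student text on three criteria: "
        "1) clarity of argument, 2) use of evidence, 3) organization. "
        "For each criterion, give one strength, one suggestion, and a 1-5 rating."
    )

    def review_student_work(student_text: str) -> str:
        """Return formative feedback on a piece of student writing."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": student_text},
            ],
        )
        return response.choices[0].message.content

    # Example use:
    # print(review_student_work(open("essay.txt").read()))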
Assessing Second Language Writing with (and despite) ChatGPT: What is left to assess?
PI: Dr. Rurik Tywoniw, Department of Linguistics
Project: The fields of language education and language assessment have long incorporated and accounted for novel technologies to improve the practices of teaching and testing. With the appearance of writing technologies from the typewriter up to grammar checkers like Grammarly, language testers found ways to refocus writing assessment on higher-order cognitive skills and more involved genres. However, with openly available AI tools such as ChatGPT, even rhetorical structure, cohesion, and unity can be automated within conventional templated writing formats. At this time, the field of language testing lags in understanding how this tool hurts or helps assessment of the writing construct. The current research seeks to understand how AI chat tools produce prompted writing, how automatically generated essays can be differentiated from human-generated essays, and, given this differentiation, what qualities are important to assess in writing moving forward. Essays generated by ChatGPT on various topics were compared to human-written evidence-based essays using moves analysis, as well as fine-grained automated measurement of linguistic features. These features are used to qualitatively and quantitatively distinguish essays of both types. Future directions are also discussed regarding research on how higher-scoring human-generated essays can be investigated more closely to understand what is valued in human-rater perceptions of text quality and text provenance. Access Tywoniw presentation slides.
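(As a rough illustration of the kind of fine-grained linguistic measurement mentioned above: the abstract does not list the study's actual feature set, so the sketch below uses two stand-in surface features, mean sentence length and type-token ratio, to show how sets of ChatGPT-generated and human-written essays could be compared quantitatively. All names here are hypothetical.)

    # Illustrative sketch: simple surface features for comparing essay sets.
    # The two features below are stand-ins; the study's actual feature set
    # is not specified in the abstract.
    import re
    from statistics import mean

    def features(text: str) -> dict:
        """Compute basic linguistic features for one essay."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        return {
            "mean_sentence_length": len(words) / max(len(sentences), 1),
            "type_token_ratio": len(set(words)) / max(len(words), 1),
        }

    def summarize(essays: list[str]) -> dict:
        """Average each feature over a non-empty set of essays."""
        rows = [features(e) for e in essays]
        return {k: mean(r[k] for r in rows) for k in rows[0]}

    # Example use: compare the two groups on the same features.
    # print(summarize(chatgpt_essays), summarize(human_essays))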
Metaphors and Mental Models of ChatGPT Use
PI: Michael Twidale, School of Information Sciences
Research Team: Michael Twidale and Smit Desai, Graduate Assistant, School of Information Sciences
Project: In our project we are looking at the metaphors that people use when they talk about ChatGPT in order to make sense of the software: what it does, how it does it, when and how to use it, and when not to use it. Based on earlier work looking at metaphor use around domestic conversational agents (such as Alexa and Google Home), we find that multiple mixed metaphors can be a very accessible way to talk about different use contexts and scenarios for ChatGPT as people figure out more and less appropriate, helpful, and useful ways of using the software and fitting it into their lives.
Panel Moderator
Jessica Li is Associate Dean for Research and Director of the Bureau of Educational Research in the College of Education, and a professor of Human Resource Development.