How to Reap the Benefits of Generative AI in Education—While Avoiding the Pitfalls
by Tom Hanlon / Oct 9, 2023
Two College of Education professors are at the cutting edge of harnessing the power of Generative AI in education. Here, they share their story and thoughts on the possibilities that Generative AI holds for education and the problems it poses.
Generative AI is capable of generating text, images, and other media by learning the patterns and structures of its training data.
It creates music, 3D models, videos, games, images, code, art, voices, and much more.
Its reach spans a wide range of industries, from writing, software development, and product design to healthcare, finance, and marketing.
Generative AI is powerful. It has also become a cause for panic.
Is it cheating when students use it to help with their work, or even to do their work? Will it put workers in the knowledge and culture industries out of their jobs?
Generative AI is like a sonic boom, sending shock waves far and wide at dizzying speed. And it is changing the way we think about, experiment with, and conduct education.
Caught in a Riptide
"Generative AI has caught educators in a riptide," says Mary Kalantzis, a professor in the Education Policy, Organization and Leadership (EPOL) department at the College of Education. "In a riptide between the anxieties and the possibilities, in a way that no other technology has ever done before in education."
William Cope, also an EPOL professor, compares the surge and impact of Generative AI to the internet. "The internet did this to some degree, but it didn't do it overnight," he says. "It took probably 20 years for it to slowly sink in, but it was not until the time of COVID that it sunk in with everybody. With Generative AI, it just happened overnight."
Cope and Kalantzis are no newcomers to applying technology in education. "We've worked for about ten years now with people in computer science in the whole natural language processing field, so we are well versed in the capacities of technology to be part of the learning process," Cope says. "But even for people like us who are immersed in it, the speed has come as a shock. Just the pace of change in the last year or so has been much faster than anyone could have predicted."
Excitement and Angst
That fast pace has resulted in many responses among educators: excitement, confusion, angst, doubt, and hope.
Consider that one Generative AI tool, the large language model-based chatbot ChatGPT (Chat Generative Pre-trained Transformer), can pass bar exams and medical exams and write coherent essays, and it's easy to see why educators are concerned.
"It's a logistical problem for universities," Cope acknowledges. "The only solution, the only way to find out what students have learned—and we're being a little facetious when we say this—is to lock them up in a room without a computer or wireless connections and have a proctor watch over them."
Even so, educators are clamoring to figure out how to use the technology to benefit students and instructors. This summer, Cope and Kalantzis contacted school district superintendents across the nation about a study they are conducting related to ChatGPT for use in writing and assessment tasks in middle and high school. "The response was phenomenal," Kalantzis says. "Everybody wants to figure out how to use it. It's evidence of the degree to which people want to be part of this and understand it but are panicking about it at the same time."
Seasoned Researchers
If Generative AI can be described as a tidal wave, Cope and Kalantzis have long been perched atop the swell that created it. Cope leads the Cyber-Social Learning Lab in the College of Education. After ChatGPT was launched, the lab rapidly developed a module in the CGScholar e-learning platform that deploys Generative AI to give feedback on student work. He is also president of Common Ground Research Networks, a not-for-profit corporation he and Kalantzis founded in 1984. Common Ground Research Networks has been developing knowledge ecologies and researching scholarly communication technologies since 2000, back when AI's current power and capabilities hadn't yet been dreamt of.
"Networks" is an appropriate descriptor for Common Ground (CG). The corporation hosts numerous annual conferences spread across the globe and attracts elite scholars as plenary speakers. CG has over 90 journals to its credit, with over 30,000 articles. It has published over 250 books, focusing on interdisciplinary studies in education, the arts and design, sustainability and environment, the humanities, and social sciences.
Supported by grants from the Institute of Education Sciences in the US Department of Education and the Bill and Melinda Gates Foundation to the College of Education and Common Ground, Cope and his team have built CGScholar, a platform where learners from grade 4 through university can write works, offer and receive feedback, and have their work published to the web. CGScholar has more than 330,000 accounts and over 40,000 monthly users.
"We've worked very hard to keep CGScholar going all these years," Cope says. "It does a lot of things that the main learning management systems don't." Those systems, he says, haven't advanced in any significant ways since their basic architecture was established in the '90s.
"With CGScholar, we've built an environment as collaborative as social media," he says. "We also provide very granular analytics, so students can log in every day and visualize their progress towards course objectives across thousands of data points. The main learning management systems have not been architected to do that. And now we've built in Generative AI, which again the main learning management systems don't do either."
Cope likens CGScholar to a "sandbox tryout of a lot of pedagogical ideas," encompassing not only new ideas but dreams and aspirations as old as those of Jean-Jacques Rousseau, John Dewey, and Maria Montessori. "Many of these educational values have not been practicably possible to realize, and now they are, thanks to these new technologies," he says.
A Double-edged Sword
Ah, those new technologies. The ones that both frighten and intrigue educators and the entire field of education.
Besides the specter of cheating, concerns have been raised about Generative AI tools propagating racism, sexism, ableism, and other systems of oppression. AI merely generates responses from the corpus of information it was trained on, guided by the prompts it is fed, and that corpus, drawn mainly from the web, contains both harmful and valuable material.
In addition, AI models can fabricate information, and the technology raises ethical and privacy concerns as well.
Perhaps most frightening: Many teachers fear being replaced by Generative AI. "A lot of things that we traditionally teach in schools and universities may well be automated, and one of those jobs that might be automated, at least in part, is the job of a teacher," Cope acknowledges. "This really sets a cat amongst the pigeons."
That's the glass-half-empty view. There's also a glass-half-full view: the positive possibilities of Generative AI in education. Here are a few of its beneficial uses:
- It enables one-to-one teaching. Cope points out the common situation where students in the same classroom have varying levels of understanding and educational needs. Some kids are bored by the instructions; others can't follow them. "With AI," he says, "a student is asking how do I do this, and 30 conversations are going on at once, which you can't do with one teacher and 30 students. But with AI, there's no reason you can't do one AI for 1,000 students. This opens up the possibility of universal, mass one-to-one education. That's big. But it's incredibly frightening for teachers' jobs. So, that's a big issue."
- It frees up the teacher. "Generative AI can make life easier for the teacher," Kalantzis says. "Allowing the dialogue between the student and the Generative AI frees the teacher to do other things." Speaking of her experience with Generative AI, she adds, "By harnessing it, by figuring out how to make it work well, it has made a difference to the performance of our learners and made our lives easier as educators in the grading process."
- It can provide students with tremendous feedback. "The worst job in the world as a teacher," Cope says, only half-joking, "is grading and giving adequate feedback to students. Automating this provides a phenomenal amount of feedback that is surprisingly useful. We didn't know if it would be useful when we started this. Honestly, we're shocked." Additionally, that feedback is well-rounded. "Because it's generative, it's always restating things, and sometimes the second or third response a student calls from the AI is more interesting and useful than the first," Cope says.
- Its feedback is complementary to peer-to-peer feedback. "Students do the assignment, get feedback from the AI, revise the assignment, and get feedback from their peers," Kalantzis says. "It's a very collaborative process, and one of the collaborators is the AI," Cope adds.
- It has the potential to increase the quality of education. "Does Generative AI make our jobs as educators harder? No. It makes it easier to increase the quality," Kalantzis says. "But we as researchers have had to figure out what do you put in place to ensure that quality."
The Power of the Prompt
Generative AI models create original content by identifying patterns and structures within existing data. Chatbots and virtual assistants such as Siri and Google Assistant likewise rely on natural language processing to interpret and respond to user requests.
"It's called Generative AI because, with a prompt, it will generate text or draw a picture," Cope says. "In this new world of GPTs, the AI response is only as good as the prompt it has been given. Generative AI has created a new discipline called prompt engineering. Prompt engineering is about calibrating the prompts so that they give the optimal responses to students."
The research team in the Cyber-Social Learning Lab has built prompts into the software "using a large language model chat relationship to give feedback," he says. "We have been fortunate to have had Duane Searsmith, a brilliant computer scientist and AI expert, to turn the vision of lab members into reality."
"The prompts make all the difference," Kalantzis explains. "We've integrated the AI with our rubrics, guiding the prompts our team puts into the system. So, the students aren't asking just anything of the AI. The results depend on the quality of the prompts. We've discovered that through multiple cycles of experimental implementation and agile software development."
However, Kalantzis warns that you can't just rely on AI for all feedback. "The human check really matters," she says. "You can't just rely on the corpus because what already exists in the world is uneven and biased and wrong, so you need good prompts, an excellent rubric that fits with your goals, and then that human check."
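To make the idea concrete, here is a minimal sketch of rubric-guided feedback under those ground rules. It is not CGScholar's actual implementation: the rubric criteria and the generate() helper are hypothetical stand-ins for whatever rubric and language model a course actually uses, and the feedback is held for a teacher's review rather than released automatically.

```python
# A minimal sketch of rubric-guided AI feedback (illustrative only, not
# CGScholar's code). The rubric entries and generate() are assumptions.

RUBRIC = {
    "thesis": "States a clear, arguable claim early in the piece.",
    "evidence": "Supports claims with relevant, cited sources.",
    "organization": "Paragraphs follow a logical sequence with transitions.",
}

def build_feedback_prompt(student_text: str) -> str:
    """Combine the rubric with the student's draft so the model's
    feedback stays anchored to the course's own criteria."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        "You are giving formative feedback on a student draft.\n"
        "Address each rubric criterion separately. Do not assign a grade,\n"
        "do not supply facts or references, and suggest concrete revisions.\n\n"
        f"Rubric:\n{criteria}\n\nStudent draft:\n{student_text}\n"
    )

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to whatever language model is used."""
    return "(model feedback would appear here)"

def feedback_for(student_text: str) -> dict:
    """Produce rubric-anchored feedback and hold it for teacher review."""
    draft_feedback = generate(build_feedback_prompt(student_text))
    # The human check Kalantzis describes: feedback is queued for a
    # teacher's review rather than released to the student automatically.
    return {"feedback": draft_feedback, "needs_teacher_review": True}
```

The design choice the sketch illustrates is the one Kalantzis describes: students aren't asking just anything of the AI; every request is framed by the rubric, and a human stays in the loop.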
Cope agrees. "You can't ask AI for anything factual," he says, "because these systems cannot tell fact from fake. Their sources are bad. That's rule number one. Rule number two is don't ask for references because it will make up references, and you won't be able to tell which are true and which are not. So, there are some fundamental problems that are endemic to the technology. There's no way the system will be able to know whether the source is factual. Getting facts and references right is up to the students."
Cope notes that they are upgrading the process so that as students get responses from the AI, they can ask the AI follow-up questions. "With this, you can say, 'Explain that a bit more to me.' It's this one-to-one dialogue," he says.
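That follow-up dialogue can be pictured as a running conversation in which each new question is answered in the context of the feedback already given. The sketch below is an assumption about how such a loop might look, again with a hypothetical generate() placeholder standing in for the model call.

```python
# A sketch of the one-to-one follow-up dialogue Cope describes: the
# student's follow-up is answered with the earlier feedback as context.
# generate() is a hypothetical stand-in for the model call.

def generate(messages: list[dict]) -> str:
    """Placeholder: send the running conversation to a language model."""
    return "(model reply would appear here)"

def start_dialogue(initial_feedback: str) -> list[dict]:
    """Seed the conversation with the feedback the student received."""
    return [{"role": "assistant", "content": initial_feedback}]

def ask_followup(history: list[dict], question: str) -> str:
    """Append the student's follow-up and return the model's reply,
    keeping the whole exchange so context carries forward."""
    history.append({"role": "user", "content": question})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Example: the student asks the AI to expand on a point of feedback.
history = start_dialogue("Your second paragraph needs a clearer topic sentence.")
print(ask_followup(history, "Explain that a bit more to me."))
```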
Their team is also beefing up their large language model, an AI algorithm that uses deep learning techniques and large datasets to understand, summarize, generate, and predict new content. "We have every assignment that our students have done for the past eight to 10 years, somewhere between 50 and 100 million words," Cope says. "So, we're going to add those 50 to 100 million words into the LLM, completely anonymized. Our students have consented to the use of their de-identified work for R&D."
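De-identifying that many assignments is a substantial task in its own right. The sketch below illustrates the general kind of step involved, replacing emails, ID numbers, and known names with neutral tokens before text enters a corpus; the patterns and names are illustrative assumptions, not the lab's actual pipeline, and real de-identification is considerably more thorough.

```python
import re

# A simplified illustration of de-identifying student work before it is
# added to a corpus. The patterns below are illustrative assumptions only.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
STUDENT_ID = re.compile(r"\b\d{9}\b")  # assumes a nine-digit ID format

def deidentify(text: str, known_names: list[str]) -> str:
    """Replace emails, ID numbers, and known names with neutral tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = STUDENT_ID.sub("[ID]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

# Example with made-up text and a made-up name.
sample = "My name is Jordan Lee (jlee@example.edu), and my essay argues..."
print(deidentify(sample, known_names=["Jordan Lee"]))
# -> "My name is [NAME] ([EMAIL]), and my essay argues..."
```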
Using Generative AI to Enhance Education
Of course, veteran educators are not losing sight of the actual value and potential of Generative AI: It's not there to dazzle and wow; it's there to enhance the quality of education.
"We're working with our computing people to test these ideas against our values as educators," Kalantzis explains. "We're constantly assessing the quality of the education, the relationship between students, knowledge and students, teachers and students."
It's possible that Generative AI can be one of the answers to a critical dilemma that the field of education is facing.
"We have an enormous shortage of teachers," Kalantzis says. "The education system is caught in this dilemma about how they design and deliver what they used to know very well. As a College that prepares teachers, we need to address these issues. We need to help our teachers."
The College Steps to the Fore
The College, she adds, has had the foresight to deal with this dilemma. "We were prepared for it," she notes. "We've hired the right people, new faculty with the tech and the content area. And that's unusual for a college of education."
The College, Kalantzis says, has been collaborating on Generative AI with people across different domains. "We work with people in medicine, in art, in music, like never before, because this Generative AI is impacting every aspect of meaning-making in every discipline," she says. "We have to expand our repertoires and interdisciplinarity, and collaboration is key.
"We have very different kinds of roles this moment as teacher educators and as universities, and nobody's quite got a road map yet."
Cope and Kalantzis's Contribution
Kalantzis and Cope, however, are firmly established in their own direction.
"We've tried to narrow it in our space around writing because that's tradition; everybody has to write," Kalantzis says. "So, how do we use Generative AI to write and to assist students? That will be our contribution in a world in flux and very difficult for educators now."
Kalantzis says that as researchers in a Research 1 university of the quality of the University of Illinois Urbana-Champaign, "We are duty-bound to explore this technology and see how it will meet our values and how we can get collaborative learning going. It's not easy. But we have no choice, and it's what we've chosen to do."
Cope adds, "We're very lucky to have an amazing developer in Duane Searsmith helping us over the past decade. And to be at a university that allows us to do this kind of work."
Into the Future
The future of creativity, knowledge, innovation, and so much else depends on harnessing the new affordances that Generative AI allows, Kalantzis says. "But it's more than just harnessing those affordances. How do we regulate them? We need to regulate this technology because it has scraped the whole repertoire of human knowledge as its source without permission from its creators," she says. "That's another problem, and many others need addressing, from regulation to how we invest to enable our schools to experiment.
"We have to experiment, test it, get it wrong, try again, and work collaboratively with our schools. As educators, researchers, and practitioners, we will keep moving the needle, as America has always done."