How will Artificial Intelligence (AI) Power New Learning in Education?
by Tom Hanlon / Feb 7, 2023
H. Chad Lane and Mike Tissenbaum are two of a handful of College of Education professors whose research in artificial intelligence—AI—in education is changing the landscape for teachers and students. Here, Lane and Tissenbaum share their AI research and views regarding its use in education.
A chatbot could have written this article. (But it didn’t. Honest.)
It could also write a college student’s essay. And, judging by recent assessments of students’ writing skills, it could probably write a better essay than at least some students can.
That’s just part of the ever-evolving world of technology. As with any advance, the increasing use of—some would say infiltration of—artificial intelligence in our lives brings a mix of reactions: intrigue, excitement, skepticism, fear.
The Explosion of AI
H. Chad Lane likens the explosion of AI to that of televisions. In the 1930s, TVs were affordable only to the privileged few: a set cost anywhere from $200 to $600; the average annual salary was $1,368. But from 1950 to 1960, the percentage of US households that owned a television rose from 9% to 87%.
“People were worried that no one was going to leave their home anymore, that they were going to watch TV all day long,” says Lane, associate chair and associate professor in the College of Education’s Educational Psychology department. “But that didn’t bear out.”
In the same way, Lane says, people are now concerned about the advances of AI and how it will impact our lives, in education and beyond. And he wants to allay their fears.
“AI does have some risks to it,” he admits. “There’s always an issue about privacy. AI systems need data to respond to the people using them. It’s a benefit/cost question we have to ask. There’s always the risk that someone with access misuses your data because it’s sitting on a server somewhere. But in a lot of cases, we have some pretty robust mechanisms for privacy and safety. For example, de-identifying data by using IDs rather than names.”
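The de-identification Lane describes can be sketched in a few lines of Python. This is only an illustration of the general idea, not any specific system; the salt value and field names here are assumptions made up for the example:

```python
import hashlib

# Illustrative salt; a real system would keep this value secret.
SALT = "classroom-study-2023"

def de_identify(record):
    """Replace a student's name with a stable, opaque ID."""
    pseudonym = hashlib.sha256((SALT + record["name"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k != "name"}
    cleaned["student_id"] = pseudonym
    return cleaned

record = {"name": "Alex Rivera", "quiz_score": 87}
print(de_identify(record))  # the name is gone; the same name always maps to the same ID
```

Because the same name always hashes to the same ID, researchers can still link a student's records across sessions without ever storing the name itself.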
Lane also acknowledges the skeptical view many have of AI.
“I get that it’s kind of creepy that we’re trying to make computers more like people, but I’m not convinced that AI is fundamentally different from previous technological advances,” he says. “The trope of ‘evil AI’ in movies like The Terminator still lingers, but that image is fading as positive examples of AI emerge.”
Mike Tissenbaum points out some of those positive examples that are specific to education.
“AI is really good at taking a lot of complex, real-time data and processing it and finding the kinds of patterns that often would take weeks and weeks and weeks to do qualitatively or quantitatively,” says Tissenbaum, assistant professor in Curriculum & Instruction.
It can also be used to make suggestions for grouping students for optimal collaborations and discussions based on their understanding of the content. And it can provide students the materials or prompts to scaffold discussions.
“AI is not replacing the teacher,” Tissenbaum emphasizes. “It takes care of the grinding, hard-to-do work so the teacher can focus on the students. It can really empower the teacher and the students.”
Lane agrees. “I can say with 100 percent certainty that no one in the AIED community has that goal [of replacing teachers], because most of us are or were teachers. We understand what it takes to teach and we understand how unique and special that is. Teachers are irreplaceable. So, we’re building tools that can make their lives better.”
In Lane’s research, those tools are primarily aimed at helping middle school and high school students in STEM classes, even though AI can and does apply to all levels of education—and beyond, such as tutoring systems for aging adults.
“It’s generally most helpful for beginners,” Lane says. “The greatest potential is in helping kids who are struggling at the beginning of learning math and science, because that’s where they have the most misconceptions and the greatest gaps in knowledge—and often, reduced motivation. So, you have to think about making it engaging and interesting and fun and adapting it to their interests.”
"Not a Panacea"
Tissenbaum is optimistic about the use of AI in education, but cautions that it’s not a panacea.
“We have to be cautious about the willy-nilly implementation of these things and understand the risks so that we can innovate responsibly,” he says. “I’ve been using AI tools since 2010 at some level. I believe AI is going to be quite transformational in time, but none of this is ‘We’ve just solved education now.’”
Tissenbaum believes the more commonplace use of AI in classrooms across the country is at least 10 years out. “We have a lot of people doing interesting work in AI, both here and elsewhere, but we’re just now starting to understand it in small implementations,” he says. “I’d be cautious about rushing to scale until we really know what it does for learning. But I am very positive about the directions that we’re heading, because I do believe people are thinking about this in the right ways.”
AI Supports Collaborative Learning
Tissenbaum is part of a multi-university team working on a large NSF National AI Institute for Student-AI Teaming project that focuses on how AI can support students and teachers as they engage in collaborative learning.
“We’re building models and using complex data mining to understand speech patterns in ways that would be very hard for us to know on our own,” he says. “And we’re developing agents that interact with the students and let the teacher know how to engage the kids and get them back on track. All of this is based on a lot of qualitative data analysis that would be very hard for a teacher to do all over her classroom at once. We’re using machine learning and AI to process this data and become an integral part of the classroom.”
Tissenbaum is also working on another NSF grant, called SimSnap, developing and researching collaborative learning in middle school life science classes using tablet-style computers that support simulations of biological systems.
“We’re looking at students combining the written word and spoken word together, doing data mining, and trying to find associations for the kids,” he says. “So, we do a lot of natural language processing in real time, using speech-to-text tools and mining that to understand what the kids are talking about.”
The Use of AI in STEM Classes
Lane’s work in computer science and natural language processing began in the late 1990s when, as a graduate student, he built a natural language dialogue system to help students who were learning to write programs. Since coming to the University of Illinois in 2015, he’s worked on several grants that use Minecraft to support learning in science, astronomy, and engineering. Minecraft is a 3D game with no preset goals that gives players a great deal of freedom and room for creativity.
“My students and I are looking at how AI techniques can be used to assess learner knowledge and behaviors in educational games and build agents that help them in a variety of ways,” Lane says. “Much of my research focuses on using Minecraft as a STEM learning environment, so we have built models that analyze learners' exploratory behaviors, how they make scientific observations, and how they approach investigating planets for habitability. Kids get to explore simulated exoplanets in Minecraft, take measurements of relevant variables such as oxygen and radiation, etc., and then build habitats for survival.”
His work, like Tissenbaum’s, involves creating agents that interact with students. In fact, he is researching how the visual presentation of the agent influences a learner’s behavior. “We’re looking at gender, race, how they’re dressed,” Lane says. “And how they help a learner. Are they more hands-off and just available, or do they sometimes give demos? We’re implementing a variety of strategies for these agents to support learners in Minecraft.”
To point out the necessity of an effective agent, he dredges up the well-intentioned but poorly executed Microsoft Word agent, Clippy, which appeared in the form of a paper clip and asked obvious (and irritating) questions as you worked on a document.
“Clippy is often referred to as the wrong way to do an agent,” Lane laughs. “We work carefully to make sure our agents are more helpful, fun, interesting, and engage the kids in meaningful ways. We want the kids to come out of it with a better understanding and a better attitude about all of the topics that they’re learning about, and that the agents have influenced that learning in some way.”
The Evolution of AI
AI in education is hardly new. The International Artificial Intelligence in Education Society has been around since 1997, boasting 1,000 members from over 40 countries. Lane edited a special 25th anniversary issue of the society’s journal, the International Journal of AI in Education; he served on the society’s executive committee for six years and was twice nominated to be president. The journal he edited focused on the next 25 years of AI in education research.
“One thing that’s come out of all these decades of research is the field is no longer taking this AI hammer and trying to solve every problem with it,” he says. “In the early days, it was ‘I’m going to take this AI thing and build an educational tool out of it.’ But that’s no longer true.”
Instead, he says, the research is now focused on where AI can make a real difference in education.
“We’re trying to make strides on how AI systems can assess learners in more nuanced and culturally relevant ways, giving kids immediate feedback and helpful suggestions on their homework, and reporting to the teachers on how well the kids understood it,” he says. “AI has actually advanced the science of learning. We’ve learned more about how human beings learn because we have these AI tools.”
"Don't Do Tech for Tech's Sake"
Tissenbaum sees a distinction between AI researchers who are less focused on the learning side of the field and those whose sole focus is enhancing the educational experience of students. He gives the example of speaker diarization, the process of partitioning an audio stream of speech into homogeneous segments according to the identity of each speaker. Diarization lets researchers know who spoke when and group segments together.
“They’re like, ‘Wow, we went from 45 percent to 65 percent accuracy,’” he says. “From a tech standpoint, that’s good. But from a supporting students standpoint, that’s not accurate enough. We need to be mindful about getting things out there and trying it but understanding the potential impact. If you can fail or harm students, why are you doing this? Don’t just do tech for tech’s sake.”
Instead, he says, research needs to focus on enhancing education. “It can’t just be, ‘Can we use the tools?’ It’s ‘What can we do to advance learning?’ That’s what I try to do in my work.”
A Variance in Programs
The proliferation of AI tools, and specifically AI tools in education, means, of course, that there is a wide range of tools to choose from and that some tools are more effective than others in enhancing the learning experience.
“Imagine two computer systems helping a student, and one system just tells the student if they were right or wrong, while the other looks at the steps the student took, the things they typed in, their ideas, and maps all of that against a cognitive model of how to think about the problem,” Lane says. “You can give so much deeper, richer, contextually relevant feedback to the child in the latter situation. And that’s why AI has an advantage over the more traditional systems. This advantage has been empirically validated in studies over and over again.”
The system that provides that nuanced and detailed feedback is a knowledge-based system. Lane says such systems are better in almost every case.
“The problem is, they’re hard to build,” he notes. “They take a lot more time and effort. Some companies have partially solved that problem, but that’s why we don’t see AI systems in every school. But there are plenty of people looking at that and trying to scale it up.”
Bots are Far From Perfect
Building knowledge-based systems is one problem. Another is, as Tissenbaum notes, “AI is designed by white males in Silicon Valley for use by white males in Silicon Valley, and it uses data that is of interest to white males in Silicon Valley.” The result is both limiting and biased.
For example, he brings up one of the latest marvels in AI, ChatGPT, which was released at the end of November 2022. The chatbot was built with both supervised and reinforcement learning techniques, applying a deep learning architecture called a “transformer,” trained on terabytes of text containing billions of words, to generate answers to prompts and questions.
ChatGPT can do a lot: It can write articles (again, not this one!), letters, and, yes, college students’ essays. Google has reportedly determined that the bot would qualify as an entry-level coder if it interviewed for the job.
Then again, ChatGPT and other AI bots regularly show their “human side” by making errors. For example, CNET, the tech news and product reviews publication, has used AI to write articles. It had to correct multiple errors in one article explaining compound interest: one passage claimed a person would earn $10,300 in first-year interest on a $10,000 deposit at 3 percent, instead of the correct $300.
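The arithmetic behind that correction is simple. Assuming the figures in CNET’s example, a $10,000 deposit at 3 percent annual interest, the bot appears to have confused the year-end balance with the interest earned:

```python
principal = 10_000
rate = 0.03  # 3% annual interest, compounded yearly

balance = principal * (1 + rate)       # balance after one year
interest_earned = balance - principal  # the part the AI got wrong

print(f"Balance:  ${balance:,.2f}")          # prints Balance:  $10,300.00
print(f"Interest: ${interest_earned:,.2f}")  # prints Interest: $300.00
```

The total balance after a year is indeed $10,300, but only $300 of that is interest; the AI reported the former figure as the latter.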
“ChatGPT is wrong—a lot,” Tissenbaum says. “It’s mostly a marketing gimmick [for its developer, OpenAI] to show it can write an essay, it can do this or that. But you don’t want it to teach a critical pedagogy to students because it’s probably not going to be historically accurate.” The content could be skewed by racism, biases, and otherwise factually incorrect information. Tissenbaum himself tested ChatGPT, having it write an essay for him.
“It was kind of right,” he says. “Maybe 80 percent. Is that good enough? It could probably replace an undergrad essay that’s probably only 80 percent right.”
The possibility of using ChatGPT to write an essay is obviously problematic for instructors. “Maybe we’re going to have to change how we operate in undergrad courses,” Tissenbaum says. “Who knows? The essay might become antiquated. I’ve heard people talk about having ChatGPT create the essay and having students critique it.”
"Teach Healthy Skepticism"
Lane, on the other hand, sees the good side of ChatGPT.
“My early assessment is that ChatGPT is a good thing,” he says. “Human-centered AI, the area of AI focused on how to create AI systems that interact with and collaborate with humans, is what we all want. The potential to support creative processes, such as getting past writer’s block, to explore new solutions, to design and plan events or software or anything, and to learn, seems endless to me.”
The bigger issue, Lane says, is teaching students how to think about and use AI.
“It’s critical to teach kids that AI systems are driven by human data,” he says. “Anything ChatGPT tells you is derived from a knowledge base from human content. So, it could be wrong. As long as kids realize what it tells them is not 100 percent accurate, it’s not an oracle, that’s good. Teach healthy skepticism. That’s a good thing in general.”
A Positive Future
The bottom line, Lane says, is that AI can be used to help students in “amazingly powerful ways.”
Human-centered AI—systems that amplify and augment rather than displace human abilities—is here to stay. And Lane sees that as having profoundly positive implications, “if we make the investment in it and choose our research agenda wisely.”
“The idea of human-centered AI has really caught on,” he says. “AI has been successfully applied to education in many settings, and I think it is something that will continue to explode. AI can achieve a deeper awareness of how we think and solve and work as individuals. This awareness will continue to deepen, especially with respect to age, individual needs, and culture. AI systems will continue to evolve to interact with us in more meaningful ways, to help us solve problems, be creative, and learn. I think this all points to a very positive future.”