Assistant Professor of Cognitive Science Joshua de Leeuw recently submitted a complicated mathematics problem to the artificial intelligence bot ChatGPT. De Leeuw wrote a single line of computer code and stopped typing. Then the AI bot took over, spitting out reams of code and solving the math problem in a matter of seconds.
A few weeks earlier, de Leeuw’s colleague, Professor of Cognitive Science Kenneth Livingston, asked an AI bot to write a paper on an aspect of cognitive science. It responded with a document that contained a lot of relevant material but failed to explore the subject in sufficient depth. Livingston said he would not have accepted the paper if a student had submitted it after using AI as a shortcut to completing the assignment.
These two exercises, performed by members of the Vassar faculty, exemplify both the promise and the pitfalls of AI in the field of education. De Leeuw, who will teach a course in the fall in which his students will interact with an AI bot they create themselves, says that while he’s wary of some of the still-unknown consequences posed by this revolutionary technology, he’s intrigued by the possibilities it holds for developing new drugs to cure disease or finding new ways to combat climate change. “It’s too early for any of us to know all the applications AI can have in education,” he said, “but as a scientist, I’m eager to explore this new technology, and I’m open to the challenges and opportunities it presents.”
Livingston says AI has become a rich part of the discussion in the cog-sci world, but he adds the jury is still out on whether its positives outweigh its negatives. “AI can get you about 75 percent of the way with your project or your paper, but you may spend more time on the last 25 percent of the job than you would have if you had done the work in a more traditional way,” he said.
Another major downside to using AI for research, Livingston said, is that the bot often cannot discern real data from fake data. He cited the case of a Georgetown University law professor whose AI-generated biography contained information about “a conviction for child molestation that never happened.”
Livingston said one of his main concerns about students using AI for tasks they now perform on their own is rooted in his understanding of how the brain functions. Just as an AI bot “learns” by recognizing patterns that emerge in the interactions in its neural network, Livingston explained, “When we write things down, we are stimulating neural pathways in our brains that help us learn. If we are not exercising that part of our brain, neuroscientists know how dangerous and unfortunate that can be.”
Vassar trustee Brian Farkas ’10 is a litigator at ArentFox Schiff LLP in New York City and an adjunct law professor at Cardozo Law School. He echoed Livingston’s concerns. “It’s amazing how plausibly written a lot of AI-generated legal documents are,” Farkas said. “But if using AI replaces tasks that help students learn important building blocks, that’s a real problem for educators.”
Farkas said AI may be useful in helping lawyers – or law students – find statutes and court decisions that may be relevant to building legal arguments. “But that’s only the first part of the job,” he said. “The Vassar adage ‘Go to the source’ is really applicable to lawyers. It’s not enough to find a supposedly relevant piece of information in legal research; the next step is to read and analyze the actual document and decide its relevance. AI can’t perform that task, at least at this stage, because so much delicate human judgment is required.”
Farkas is aware that some students might be tempted to use ChatGPT to write a paper, and this might prompt him to develop in-class exercises that he can monitor more closely. “Having more in-class and collaborative activities might not necessarily be a bad thing,” he said.
Members of the Vassar faculty first addressed issues arising from AI at a Pedagogy in Action workshop in January hosted by Associate Professor and Chair of Philosophy Jamie Kelly. Kelly said the first session was mostly informational. “We talked a little about the rudiments of AI and some possible pedagogical ramifications, but most of us didn’t have enough information to develop a consensus or strategic plan,” he reported. The agenda for the second session, held this summer, included how to design courses that incorporate information about AI technology, how to use AI as a teaching tool, and how to prevent it from being abused, he said.
Kelly said that while many in the field have warned of the potential dangers of AI and have called for a pause in AI research and some form of government oversight, he doubted there is any meaningful regulation on the horizon. “There’s too much money at stake for me to see this race slowing down,” he said.
As for the impact of AI on middle school and high school teachers and students, Professor of Education Christopher Bjork said he feared financially strapped school districts may try to use AI as a substitute for human teachers. “If I were a high school English teacher, I’d be worried,” Bjork said.
Asked if he thought Vassar students might try to use ChatGPT to write their papers, Bjork said it was a minor concern. “I’m not naïve enough to think some students won’t try it,” he said, “but in a college like Vassar, the students are here to learn by doing the work themselves.”
But de Leeuw says that rapid advances in the field of cognitive science are already compelling him to modify and update material in many of the courses he teaches, and the explosion of AI technology is certain to have a significant impact on how and what he teaches. “Things in the field are changing so rapidly that we really don’t know all of the possibilities for our curriculum,” he said.
Livingston, who has been witnessing and reacting to advances in science as a Vassar professor since 1977, suggested that those in the field of education take a longer view of scientific progress as they cope with this new technological explosion. “Every time a new wave of technology hits us, someone uses the phrase ‘We are playing with fire,’” he said. “Yes, fire can burn your house down, but where would civilization be without it? We’ve learned to harness it, so I’m not entirely pessimistic about this latest wave.”