The use of AI in our lives and in our work has become a compelling challenge for us all. For Catholic schools, how we approach its integration into faith formation and curriculum is an ongoing topic of discussion and analysis. Because my work involves forming Catholic school educators to be transformational principals, my primary consideration has been how we can draw on the potential benefits of AI without being consumed by it.
AI as a technology holds great promise in its ability to analyze information and process large amounts of content. It can assist with lesson planning, serve as a thought partner in strategy sessions, streamline tasks, and help analyze large data sets for schools. With all of the promise it holds, Catholic school educators need to ensure that it is used in a constructive way while also maintaining our focus on the importance of human engagement and on how we understand and make meaning of the world. As people of faith, we must deeply consider fundamental questions about what it means to be human in a time when technology is approaching levels of human intelligence. Those questions are profound, ongoing, and important for our students to consider, which is why it is imperative that Catholic school educators address them.
Catholic school educators have a particular interest in the inherent dignity of every human being. How we make meaning of our world, ourselves, and others is part of how we come to know our Creator. The concern is that if we outsource too much meaning-making to AI, we run the risk of losing something fundamental, which impacts our understanding of who we are and impedes our ability to understand the world. The document Antiqua et Nova, from the Dicastery for the Doctrine of the Faith and (notably) the Dicastery for Culture and Education, says that we are created and meant for authentic relationships, and that those can only be simulated by AI. How we interact with others is how we grow and fulfill our own unique God-given potential. There are ways that AI can be used to help foster those relationships and assist human beings in engaging more authentically with one another. But it is also easy to see a path where AI becomes a substitute for those relationships.
I attended a presentation on AI at Notre Dame in the spring, and a panel member shared a story about a professional acquaintance. The woman had just broken up with her boyfriend and was fairly distraught about the end of the relationship. She had an AI chatbot that she regularly communicated with, and she asked if it would write a letter to her as if it were her boyfriend, offering understanding and compassion about the breakup. After reading the letter from the chatbot, the woman was comforted by the sentiments it expressed. While I am sure this story is disturbing to many of us, it must also be acknowledged that the exercise brought her real comfort.
The well-known media theorist Marshall McLuhan wrote about the potential for all media and technology to enhance and deaden human capacity at the same time. This can be seen in earlier technologies such as social media. There can be joy in connecting with friends and loved ones through a social network, but there is also clear evidence that social media can be isolating, potentially addictive, and harmful, especially for young people. Joy and peril exist simultaneously, and that is where human judgment needs to occur.
This duality also exists in AI. In an article in The New Yorker titled “Will the Humanities Survive Artificial Intelligence?” D. Graham Burnett describes the experience a student had in dialogue with an AI chatbot while exploring a deep topic. The student felt liberated by the exercise and shared that “she had descended more deeply into her own mind, into her own conceptual powers, while in dialogue with an intelligence toward which she felt no social obligation. No need to accommodate, and no pressure to please.” One can see how this use of AI could allow for deeper understanding and reflection about a text or topic, letting students ask deep, meaningful questions that they might be hesitant to pose to another human being.
And yet, the hallucinations are real. For those who might not be aware, “hallucination” is the term used when an AI tool provides inaccurate information. And a word of caution for us all: there is some evidence that with later versions of some AI tools, hallucinations are increasing rather than decreasing. A memorable way to describe an AI that hallucinates is as that overconfident friend who offers opinions on any and all topics, whether he or she knows anything about them or not. The point is that human agency matters, and what we bring to the AI tool will largely determine what we take away from it.
Finally, learning itself is full of struggles and false starts, and increasing knowledge and capacity is often a complicated process. No one begins as an expert in any subject, and the path to competence inevitably passes through a painful period when one is not confident of ever achieving mastery. For educators, this is where the lessons of hard work and effort come into play, and students must grasp this truth. Those who achieve excellence in a field are often the ones who struggle the most and work the hardest. All of us have examples of things we enjoy that were hard when we first tried them. Reading, writing, creating art, and playing a musical instrument are pursuits that might bring us joy and pleasure, yet once we have achieved a level of competency, it is sometimes difficult to remember the time and effort it took to reach that level. The risk with AI is that it will be seen as a substitute for that effort. When the learning process is stripped of the struggle, what will it mean for the ultimate outcome?
Antiqua et Nova also addresses our human desire to pursue truth, and this involves deeper thinking beyond just utilitarian ends. “A proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. Instead, it involves the person’s openness to the ultimate questions of life and reflects an orientation toward the True and the Good... From this, it follows that human intelligence possesses an essential contemplative dimension, an unselfish openness to the True, the Good, and the Beautiful, beyond any utilitarian purpose.” (Antiqua et Nova, para 29). This is why Catholic schools, in particular, have much to offer students and families during this period of AI integration. Our faith and our contemplative dimension recognize and value the importance of human beings in our ultimate search for meaning.
Kevin Baxter, Ed.D., is the director of the Mary Ann Remick Leadership Program at the University of Notre Dame. kbaxter2@nd.edu