One day this spring, Vassar Professor of Cognitive Science Ken Livingston and his intro-level class talked to an artificial intelligence chatbot. They entered what seemed like a simple question into ChatGPT: “What day and time is it where you are?”
The chatbot’s answer was complicated. “As an AI language model, I do not have a physical location and therefore, I do not experience time or have a sense of a day or night cycle,” it responded. “However, I can provide you with the current date and time information based on the system clock of the server I’m running on.”
Livingston, typing on a laptop in New England Building, pushed back. “You say that you do not have a physical location, but you are running on a specific server,” he wrote. “One could argue that you just are that physical process (running on a server) and therefore that you do always have a physical location. Why is this not the correct interpretation of where you are?”
The chatbot conceded that it was running on a physical server, but it argued that it was “not bound to a specific geographical location or environment in the way that humans or other physical beings are.” The debate went back and forth, until the chatbot eventually gave in. “You make a good point,” it wrote. “As an AI language model, my ‘body’ can be seen as the server infrastructure and its input and output connections. This infrastructure is indeed physical and has a location, and my existence as an AI language model is dependent on it.” The bot went on to “apologize” for having previously said otherwise.
The conversation came as AI has gone from a passing fascination or distant concern for most people to the subject of warnings about the end of the human race. Machine learning, a subset of AI that provides insights based on patterns in data, had been advancing for decades. But AI research and development company OpenAI’s public release of ChatGPT in November put generative AI, which uses data to create new content, in the hands of everyday users, and competitors have since done the same.
These recent developments have left many people worried about the ethics of the technology.
To some who are ringing alarm bells, tech companies seem engaged in an arms race to not only develop AI systems, but also make them public. In May, AI pioneer Geoffrey Hinton announced he had resigned from Google so he could voice concerns about the technology. The same month, Sam Altman, Chief Executive Officer of OpenAI, urged Congress to impose AI regulations. Tech industry leaders, including Elon Musk and Steve Wozniak, signed a letter calling on all labs to pause the training of powerful AI systems for at least six months, while one AI researcher wrote in response that labs must instead “shut it down” indefinitely. Hundreds of AI scientists and notable figures, including Bill Gates, signed on to another letter that said “mitigating the risk of extinction from AI should be a global priority alongside … pandemics and nuclear war.”
Even Livingston, a longtime AI observer, found his chatbot conversation striking, because the chatbot ultimately changed its mind. The exchange showed that “you can push it to reason differently,” Livingston says. If users can get the system to switch positions about what time it is, they could theoretically get it to do so about ethics, for better or for worse, he says. “I feel like I’m living a novel that I’ve read.”
Vassar has the world’s oldest undergraduate degree–granting cognitive science program, and AI discussions have been part of it since the beginning, according to Livingston, who helped develop the program in the 1970s. There have been a few “AI winters,” when development slowed and funding dried up, Livingston recalls, but improvements in hardware over the past decade or so accelerated advancements. “I do think this could turn out to be a watershed in the way that the emergence of the Internet was a watershed, that just permanently changed how we work, how we relate to each other,” Livingston says. “Not always for the better.”
Livingston has been teaching seminars that explore the moral and ethical implications of AI. The question on his mind for the seminars is, “Could AI systems themselves be expected to behave morally?” For example, if a self-driving car has to swerve to avoid hitting pedestrians but might kill its passengers, which would it choose, and what moral framework would it use to make such a decision? Livingston’s students explore questions like this and how “the different moral frameworks might lead you to different conclusions about how you ought to build an AI or what kind of constraints you’ll have to put on it,” he says.
The framework Livingston believes is best for AI is a virtue ethics approach, in which virtues, or moral character, guide decision making. Such virtues might include benevolence, justice, productivity, honesty, and responsibility. But for now, the tech companies are in control, and the ethical issues are already emerging.
One issue on Livingston’s mind is what happens to the human brain when AI does the work. For example, a person who relies on GPS might lose navigation skills, and someone who uses a calculator might forget how to do math without one. “There’s a body of research that says we lose those skills,” Livingston says.
But the learning loss goes deeper. Research shows that relying on GPS means not only a decrease in map-reading skills, but also a negative impact on the hippocampus, the part of the brain that plays a role in learning and memory. “There’s this broader ethical issue about what it means to be a cognitively competent human being when you offload more and more of your thinking into these kinds of systems,” Livingston says.
For communications industry veteran Anne Green ’93, AI isn’t all good or all bad, but approaching it ethically is essential. “The ethical hole gets deeper and deeper and deeper, and you’re standing at the bottom of this hole before you even know it,” she says. “That’s what happens when ethics are not at the forefront of your mind.”
Earlier this year, Green co-chaired a task force on AI ethics for the PR Council, a public relations industry association. The task force published its guidelines in April. The list includes not using generative AI images as the final creative for client campaigns; checking AI-generated content for accuracy, plagiarism, and trademark and copyright infringement; disclosing to clients if generative AI was used in the creative process; and not using generative AI to replace diverse experiences or talent.
Another top concern is intellectual property. The guidelines advise caution when putting material into generative AI or using the technology to generate content, both because the information could then enter the public sphere and because the resulting work may infringe existing copyrights or fail to qualify for new copyright protection.
Green says some of these issues have come up before, when music was first widely available digitally. “A lot of us lived through the days of Napster and the very difficult questions about copyright and ownership that followed that in the music industry,” she says. “In generative AI right now, there’s a very, very robust debate going on about visual artists, and there are a lot of copyright battles already.”
The intellectual property issue is also on the mind of Joey Azofeifa ’12, co-founder and Chief Executive Officer of Arpeggio Bio, a company that makes medicine for rare forms of cancer. The company uses AI trained on chemical databases to design and generate chemicals with specific properties. Then the company makes a molecule, tests it, and if it shows promise, develops it. Molecules the algorithm dreamt up showed a reduction in tumor growth in mouse studies, and Azofeifa predicts that testing on humans could be just a few years away.
“In our business, IP is the name of the game,” Azofeifa says. “If this AI algorithm was trained on data and that data is publicly available and it dreamt up something potentially novel, you might argue that it’s obvious that it dreamt it up. You might argue that it was already in the prior art. So there are a lot of questions right now around, can you patent a chemical that was dreamt up by an AI algorithm?”
Another top issue for Green’s task force is disinformation—false information that is intended to mislead. Her guidelines tell clients and creatives not to use generative AI to spread disinformation, misinformation, or deepfakes, which are digitally altered videos of people typically meant to deceive. “We have to be very stern and very clear about the fact that misinformation, disinformation, deepfakes, cannot be tolerated in the work that we do,” she says.
Disinformation can also have political implications. That’s been a focus of DeVan Hankerson Madrigal ’06, Research Manager at the Center for Democracy & Technology, a nonprofit. Hankerson Madrigal co-authored a report last fall on how disinformation impacts women-of-color political candidates in the U.S.
When it comes to disinformation, AI is a double-edged sword, Hankerson Madrigal says. On one hand, tech platforms can use AI tools to detect and combat disinformation. On the other, people can use AI to generate and circulate false information. “The concern is fairly high,” Hankerson Madrigal says. “As the false text gets more sophisticated, it makes it much more difficult for a lay audience to discern what is real and what is not real.”
The disinformation threat is intensifying ahead of the 2024 presidential election. A person running for office might have to spend more resources fighting disinformation than it took someone to create and spread it, according to Hankerson Madrigal. “If there’s a disinformation campaign or misinformation campaign that looks authentic,” she says, “the effort that it takes to correct completely outstrips the cost of disseminating what is ostensibly lies.”
To get ahead of ethical issues, governments should step in to regulate AI, says Anurag Mehra, an Indian Institute of Technology Bombay professor who taught a course on “digital lives” at Vassar this spring, part of the College’s collaboration with the Indian institution. “It requires oversight, it requires regulation,” he says.
Mehra points to social media as an example of what happens when tech goes unregulated. In his view, social media has contributed to harms that are becoming more and more apparent, including mental health struggles, tech addiction, and the vanishing distinction between truth and lies. “Social media was unleashed on us,” he says. “People started using it, and it is only many years down the line when the impact is now coming home to roost and giving us all kinds of troubles.”
AI seems to be following a similar trajectory, says Mehra. “The technology has already been and will be unleashed on us in more and more intense forms,” he says. “People continue to consume it, and then after a couple of years, they will say, ‘Oh, my God, what have we done?’”
While people are worried about the negative ethical implications of AI, Minerva Tantoco ’86 is thinking about the potential positive ones. Tantoco is Chief AI Officer at New York University’s McSilver Institute for Poverty Policy and Research, where she is focused on minimizing the harms of AI and using the power of the technology to “drive evidence-based policy to help the people that need it the most.”
Tantoco has been thinking about the impact of technology since her time at Vassar. She was among the earliest cog-sci students and organized a campus symposium called “Project 1984,” which explored whether aspects of George Orwell’s science-fiction classic set in that year had come true. Writing in VQ at the time, David Schalk, then a Vassar Professor of History, called the symposium “one of the most ambitious and exciting intellectual ventures [that has] come to my attention in recent years.” (Essays related to the symposium were later published in a book, Beyond 1984.) Tantoco wrote her senior thesis on subjectivity and machines, and whether computers can think and feel.
As those topics continue to resonate, Tantoco is now exploring how an ethical AI depends on the data inputs informing such technology. “We understand now that the way the data was collected might not have been ethical, from both a privacy perspective [and] also an equity distribution perspective,” says Tantoco, who has also served as New York City’s first Chief Technology Officer. For example, AI recruiting tools have been found to show bias because they were trained on data from previous hiring practices. “If you automate or extrapolate from an already biased set of data, then the danger is in perpetuating that bias at scale,” Tantoco says.
And when it comes to AI outcomes, Tantoco sees an opportunity for AI to help marginalized communities. One issue she’s focusing on is using the tools to predict trends in suicidal ideation among young people from certain groups. “We can actually use the power of this technology to point it at problems that impact those marginalized communities and communities impacted by poverty,” she says.
Tantoco isn’t alone in seeing AI’s potential for good. Arpeggio Bio’s Azofeifa predicts the technology will shorten the time it takes to test and approve pharmaceuticals, which could make them more affordable. Hankerson Madrigal, of the Center for Democracy & Technology, sees potential for social media platforms to use AI to root out targeted harassment. “We’re not saying, ‘Shut down this technology,’” Tantoco says. “We’re saying, ‘Use it for good.’”
Max Kutner ’11 has written for Newsweek, The Boston Globe, and Smithsonian.