No one can deny that technology has transformed nearly every aspect of our society over the last couple of decades, and college admission is no exception. Half a century ago, it would have been hard to imagine that artificial intelligence (AI) might someday be used to select applicants to enroll at a university. Perhaps it is no longer surprising that AI figures in many decisions that once would have been unfathomable, including college admission. Yet although AI may be intended to make applicant selection fairer, it could unintentionally do the opposite.
While AI can help admission and enrollment professionals select an incoming class, the technology still needs human guidance, according to proponents and critics alike.
“The No. 1 abuse of AI would be to rely on it exclusively to make decisions,” says Robert J. Massa, a former admission administrator, adjunct professor of higher education at the University of Southern California, and principal and co-founder of Enrollment Intelligence Now.
In an email exchange with the Journal of College Admission, Massa said there will always be a need for each admission and enrollment official to “use their own experience and their own instinct developed over the years to shape the incoming class and to award scholarships and financial aid.”
Those sentiments echo concerns expressed by Alex Engler, a governance studies fellow at the Brookings Institution, a Washington, DC-based research organization, in a September 2021 paper about the use of algorithms in higher education.
“These algorithms are valuable to colleges for institutional planning and financial stability, as well as to help reach their preferred financial, demographic, and scholastic outcomes for the incoming student body,” Engler wrote. “Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment.”
Among other things, Engler argues that colleges’ use of algorithms in enrollment decisions may shortchange students because the algorithms seek to identify applicants who are most likely to enroll while requiring the least scholarship aid.
“Unfortunately for students, if algorithms succeed in their intended goal of effective scholarship allocation, they may also short-change students,” Engler writes.
If colleges seek to minimize the amount of scholarship aid awarded, it could have other repercussions as well, Engler writes. For instance, he argues that such a scholarship allocation strategy “may contribute to pre-existing crises in higher education, such as an increase in student debt burdens, higher dropout rates, and the failure of many colleges to proportionately enroll students of color.”
But AI need not have those effects, says Juan Gilbert, a University of Florida computer science professor and developer of Applications Quest, a software tool that helps expedite the enrollment process.
Gilbert calls the use of AI in admission and enrollment a “double-edged sword.” By that, he means that if AI is designed to emulate human decisions, it will exhibit the same kinds of human biases.
“We have a diversity issue: We have severe underrepresentation in higher ed ... and this precedes AI,” says Gilbert. “So humans have been [making decisions] and guess what, we have a diversity crisis. So from that perspective, create AI that behaves like humans, then yes, you’ll continue to have the same outcomes.”
But Gilbert says one key characteristic gives AI an edge: “AI has an advantage over humans in that AI doesn’t have a conscience.”
By way of example, he said that if a police officer pulls over a motorist and the motorist alleges the stop happened because the motorist is Black, even if it’s true, “there’s no way the officer is gonna say, ‘That’s correct.’”
But if AI were behind the traffic stop, “it cannot lie to you and say, ‘I pulled you over because of something else,’” Gilbert says. “So the AI can be held accountable in a way that humans cannot. So you can get to the root. But it all depends on what AI is trained to do.”
Massa, the co-founder of Enrollment Intelligence Now, says AI today is “typically based on machine learning, where computer software learns to better predict enrollment behavior as it processes more and more cases.”
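To make that description concrete, here is a minimal sketch of such a setup, assuming Python with the open-source scikit-learn library: a model trained on historical records to estimate each admitted applicant’s probability of enrolling. The features, data, and weights are synthetic and purely illustrative, not any vendor’s actual system.

```python
# A minimal sketch of the enrollment-prediction setup Massa describes:
# supervised machine learning that improves as it processes more cases.
# All features, data, and weights here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: [GPA, campus visits, aid offered ($k)]
X = rng.random((1000, 3)) * np.array([4.0, 5.0, 40.0])
# 1 = enrolled, 0 = did not enroll (synthetic labels for the sketch)
y = (0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * X[:, 2]
     + rng.normal(0, 1.0, 1000) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model yields a per-applicant enrollment probability; summing those
# probabilities predicts the aggregate size of the incoming class.
probs = model.predict_proba(X_test)[:, 1]
print(f"Predicted class size: {probs.sum():.0f} of {len(probs)} admits")
```

Summing per-applicant probabilities is what lets such a model answer aggregate questions, how large the class will be and what net revenue it will bring, even when any individual prediction is uncertain.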
But training computers to think like humans when it comes to college admission has been problematic from the start.
For instance, as documented in Untold History of AI: Algorithmic Bias Was Born in the 1980s, a computer program used to make admission decisions at St. George’s Hospital Medical School in London was found to agree with human decisions 90 to 95 percent of the time. However, the program actually took away points from women and nonwhite applicants.
“At a deeper level, the algorithm was sustaining the biases that already existed in the admissions system,” the article states about the program designed by Geoffrey Franglen. “But by codifying the human selectors’ discriminatory practices into a technical system, he was ensuring that these biases would be replayed in perpetuity.”
Could the AI of today commit similar acts of discrimination? According to Engler, the answer is yes.
“Like many other algorithmic applications, such as algorithmic hiring or facial analysis, enrollment algorithms are susceptible to the possibility of biased outcomes—such as against racial minorities, women, people with disabilities, or other protected groups,” Engler wrote. “Some vendors clearly encourage using SAT scores and GPA to help determine levels of scholarship funding, which may further wealth and racial disparities, although the inclusion of families’ ability to pay may have a countervailing influence.”
Massa says there’s no way to get around colleges’ need to generate revenue from families that are able to pay tuition.
Asked about the three best uses of AI in college admission, Massa said:
“First and second, it allows us to predict not so much on an individual level who is going to enroll and what they will require in financial assistance in order to do so, but it will better predict the aggregate size of the incoming class and the total net revenue they will bring. Third, when combined with the admissions officer’s experience and instinct, the use of artificial intelligence in admissions can provide a safety net in this highly competitive and volatile area. Remember, most institutions rely on enrollment for revenue. It is, for many, the largest revenue generator in the institution. So any tool that can help an enrollment manager be confident that they are making the right aggregate decisions is a welcome addition to the admissions toolbox.”
But many colleges may be reluctant to use AI in their admission and enrollment decisions.
“When I talk to a university, the admissions people immediately look at it and say: ‘Will this take my job?’” Gilbert says of his efforts to get more colleges to use Applications Quest, which he says is currently used at the University of Florida and one undisclosed university. “There’s a fear there and I have to go, ‘No. I’m preparing you to do your job.’”
Whatever decisions AI makes, with or without human help, bear scrutiny. In his Brookings paper, Engler points to bias audits as one way to guard against discriminatory AI decisions.
Gilbert suggests such audits would be difficult to carry out. But Engler says much of the work could be accomplished by a moderately competent data scientist using free or open-source software.
“Further, these systems are already saving colleges money,” Engler says. “It is not too much to ask that they check to make sure they aren't worsening discrimination in the process.”
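For a sense of what such an audit involves, here is a minimal sketch of one common check, a comparison of selection rates across demographic groups (the “four-fifths rule” used in disparate-impact analysis), written in Python with the open-source pandas library. The column names, data, and 0.8 threshold are illustrative assumptions, not a method prescribed in Engler’s paper.

```python
# A minimal sketch of the kind of bias audit Engler describes, using
# free, open-source tools. Data, column names, and the four-fifths
# threshold are illustrative assumptions, not a prescribed method.
import pandas as pd

# Hypothetical decision history: one row per applicant, with a
# demographic group and whether the algorithm recommended admission.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "admitted": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group: the share recommended for admission.
rates = df.groupby("group")["admitted"].mean()

# Disparate-impact ratio: each group's rate vs. the most-favored group.
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
ratios = rates / rates.max()
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```

A real audit would run on an institution’s actual decision history and examine more than one outcome, admission recommendations, aid amounts, yield predictions, but the core computation is no more demanding than this.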
Jamaal Abdul-Alim is a journalist living in Washington, DC.
Alex Engler, author of the Brookings Institution report Enrollment Algorithms Are Contributing to the Crises of Higher Education, urges colleges using or thinking about using AI to do three things: