
When Keith Whitfield, PhD, assumed the role of president at the University of Nevada, Las Vegas (UNLV) in 2020, he set an ambitious, if somewhat unrealistic, goal: to speak with every student on campus — all 30,000 of them. Last month, in support of that vision, he commissioned a digital version of himself that is accessible to anyone at any time as a conversational chatbot. It acts as a “one-stop shop” for information on a wide range of topics, including campus resources, enrollment data, and much more.
“I hope people feel a little more comfortable because they get to know me and feel like they are a piece of the UNLV community,” Whitfield explains. “When you feel like you belong and when you know people at your university, your success increases because it doesn’t feel odd to ask questions, and you don’t feel like there is no one you can go to.”
Creating a digital avatar of a president is one of the more innovative ways that colleges and universities have begun using artificial intelligence (AI), but UNLV is not alone in its endeavor to use this technology to support students and build community. Georgia State University (GSU) and Elon University have previously implemented some form of AI to promote student success. This approach, if balanced with effective in-person communication, can be especially beneficial for first-generation and economically disadvantaged students who are less likely to know how to navigate the college experience, Whitfield explains.
“We know that getting a college degree is not easy, so we’re always looking for ways we can try to help,” he says. The avatar’s ultimate goal is to ensure that students and their families can easily access information about vital campus resources; Whitfield describes it as a “digital concierge.” Users can speak or type questions to the chatbot version of Whitfield and get a response on any of more than 500 topics.
“We’re hoping that rather than clicking around [on a website] looking for something, you get to talk through what you’re trying to figure out,” Whitfield says. “We don’t want students to have to go through six people to get an answer — that just means we aren’t being as efficient as we need to be.”
Whitfield, who has a background in psychology, recognized the potential for the avatar when he noticed that students were seeking information about mental health services during the COVID-19 pandemic. “I was worried about mental health issues and people feeling like they weren’t connected,” he explains. Since its launch in February, the most common questions the chatbot has received are related to mental health and financial aid services.
In addition to helping users, this type of AI allows the university to better gauge the needs and concerns of a larger pool of students than traditional communication methods do, says Whitfield. All user data remains anonymous, but the questions asked are collected and relayed back to the UNLV administration. That information can then be deployed to adjust policies and bolster campus programs and initiatives. The university is also considering building kiosks on campus so that the avatar is even more accessible to students and visitors.
Across the country, GSU has seen major success with the chatbot it created in 2016. Known as Pounce, the bot was designed to maintain student enrollment and reduce “summer melt,” the phenomenon in which students — especially those who are first-generation, low-income, or underrepresented — enroll in college but never attend classes in the fall.

Pounce tries to counter this problem by sending text messages to students at risk of melting away. It provides information about financial aid, course registration, and the various placement exams that often serve as barriers to enrollment. In its first year, Pounce reduced summer melt by 22 percent. New findings show that the chatbot also improves academic performance.
“Receiving direct text messages about their class assignments, academic supports, and course content increased the likelihood students would earn a B or higher and, for first-generation students, increased their likelihood of passing the class,” GSU stated in a press release in 2022. “First-generation students receiving the messages earned final grades about 11 points higher than their peers.”
Elana Zeide, a professor at the University of Nebraska College of Law and an expert on the ethical implications of AI, wrote in a 2019 EDUCAUSE Review article that while this technology is becoming more popular in higher education, colleges have only begun to scratch the surface of its potential uses. Other possible applications include automatically scheduling course loads, grading assignments, providing additional resources to help students succeed in class, and approving microloans to support students who are struggling financially, according to Zeide.
She warns, however, that institutions must use this technology ethically. Data privacy, especially regarding students, should be a top priority for schools looking to use AI. Additionally, colleges should be aware that machine learning can, in certain instances, reinforce historical biases. For example, the graduate admissions committee at The University of Texas at Austin stopped using the Graduate Admissions Evaluator algorithm after critics pointed out it was less likely to select applications from underrepresented students for human review.
“Keep in mind that for all the hype and buzz, these AI tools are just computer systems,” Zeide writes. “They can go wrong, they are created by humans, [and] their values are shaped by companies and institutions. Their data is not neutral but is defined by the historical patterns.”
Despite the potential drawbacks, AI’s usage will continue to grow among higher education institutions. The market for this technology in the education sector, valued at $1 billion in 2020, is expected to increase by 40 percent by 2027, according to research and consulting firm Global Market Insights, Inc. Still, if employed effectively, ethically, and with the goal of improving equity, AI can help colleges and universities provide a better experience and education for their students, says Whitfield.
“We used to be afraid of [AI],” he says. “Now what we’re seeing is that, if used correctly, it can really be a useful tool.”●
Erik Cliburn is a senior staff writer for INSIGHT Into Diversity.
This article was published in our May 2022 issue.