The University of California (UC) System recently announced it will adopt policies that promote the responsible use of artificial intelligence (AI). These include the creation of a public database that outlines the system’s specific applications of AI and the establishment of departmental councils that will assess the implementation of the technology. The policies are based on the recommendations of “Responsible Artificial Intelligence,” a recent report by UC’s Presidential Working Group on AI.
The working group focused primarily on how this technology might affect individual rights in various university settings. Its report lists four recommendations based on the potential impact AI could have on policing, health, academics, and human resources through automated decision-making, chatbots, and facial recognition.
“Because of UC’s size and stature as a preeminent public research university as well as California’s third-largest employer, the principles and guidance from the report have the potential to positively inform the development and implementation of AI standards beyond university settings within the spheres of research, business, and government,” a UC statement reads.
Though AI has the potential to reduce human workloads, it also raises serious ethical concerns. According to the report, using AI in the admissions process, in particular, could cause real harm without significant human oversight. Because such systems learn from historical data, AI could reinforce long-standing biases and ultimately undermine equitable outcomes.
“This means that the computation model must be able to take into account difficult-to-quantify criteria such as valuing life experiences as part of a student’s capacity for resilience and persistence needed to complete college-level work,” the report states. “If the computational model does not accommodate criteria such as life experience, a human must remain in the loop on that part of the review.”
As AI’s presence at higher education institutions continues to grow, it is vital that institutions adopt similar policies and that administrators understand the technology’s potential flaws, said Brandie Nonnecke, founding director of UC’s CITRIS Policy Lab, in a news release.
“It’s good we’re setting up these processes now,” Nonnecke said. “Other entities have deployed AI and then realized that it’s producing discriminatory or less efficient outcomes. We’re at a critical point where we can establish governance mechanisms that provide necessary scrutiny and oversight.”●
This article was published in our December 2021 issue.