As artificial intelligence (AI) becomes more commonplace in our lives, many activists and academics have raised concerns about the ethics of the technology, including how to protect privacy and prevent bias and discrimination.
These concerns have spread throughout the AI field, leading even large corporations such as Microsoft to develop internal guidelines for using this technology. In June, the company publicly shared its new “Responsible AI Standard” framework that is aimed at “keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability,” according to a Microsoft blog post. As a result of these standards, the company phased out an emotion recognition tool from its AI facial analysis services following criticism that such software was discriminatory against marginalized groups and not proven to be scientifically accurate.
Businesses are not the only organizations looking to address ethical questions about AI. Multiple colleges and universities are also launching research centers, educational programming, and other efforts to help develop a new generation of scientists and engineers dedicated to using this technology to better society.
One of those institutions is Brown University, home to several innovative projects intended to bolster ethics in AI development and usage.
“The subject of ethics and justice in technology development is incredibly urgent — it’s on fire,” Sydney Skybetter, a senior lecturer in theater arts and performance studies at Brown, explained in a recent university news release. Skybetter is one of three faculty members leading a new course in the computer science department titled Choreorobotics 0101. The class allows students with experience in computer science, engineering, dance, and theater to merge their interests by learning to choreograph a 30-second dance routine for a pair of robots provided by the robotics company Boston Dynamics. The goal of the course is to give these students — most of whom will go on to careers in the tech industry — the opportunity to engage in discussions about the purpose of robotics and AI technology and how they can be used to “minimize harm and make a positive impact on society,” according to the release.
“I feel it’s my job to help students understand the implications of the technology we create now and in the future, because they are the future,” Skybetter said. “I can’t resolve the issues we’re exploring, but my hope is that maybe they can.”
Brown is also home to the Humanity Centered Robotics Initiative (HCRI), a group of faculty, students, and staff who seek to address societal problems through robotics. One of its projects involves developing “moral norms” for AI systems so that they “can become safe and beneficial contributors to human communities if they — like all beneficial human contributors — are able to represent, learn, and follow the norms of the community,” according to the HCRI website. To develop these norms, researchers are examining the processes humans use to understand and adhere to common values within society and applying them to AI. These processes include observing the environment and the behavior of community members, as well as learning from explicit instruction.
Another HCRI project focuses on creating robotic technologies that can provide older adults with emotional support and assistance with daily tasks. The initiative is collaborating with toy manufacturer Hasbro to design a new version of its animatronic Joy for All Companion Pet Cat with advanced cognitive, communicative, and sensory capabilities that will better respond to the needs of older users.
One of the most expansive efforts to apply ethics to AI is taking place at Emory University in Atlanta. In early 2022, the school launched the AI.Humanity Initiative, a campus-wide project designed to create a cross-disciplinary community dedicated to integrating the technology into fields beyond the sciences.
“We want [AI] embedded [across disciplines] just like we would with math,” explains Ravi Bellamkonda, PhD, provost and executive vice president for academic affairs at Emory. “We have math departments, but math shows up in physics, astronomy, chemistry. It shows up in medicine, in economics. We think of AI in that sense, that it is in the service of a number of ideas.”
To achieve this goal, the university plans to hire 60 to 75 new faculty members specializing in AI over the next three to five years. Instead of being siloed in the computer science department, these professors will be placed throughout all nine of Emory’s schools to promote faculty collaboration and increase knowledge of the technology across four general areas:
- Arts and humanities
- Business and free enterprise
- Human health
- Law and social justice
The initiative has already resulted in nine new hires, adding to Emory’s existing expertise in the field. Among its faculty are Kristin Johnson, the Asa Griggs Candler Professor of Law, whose research focuses on how AI can be used to protect legal rights, and Anant Madabhushi, the Donnell Institute Professor of Biomedical Engineering at the Emory School of Medicine, who plans to utilize AI to improve patient outcomes and address health inequities.
Although it can be difficult to attract AI experts to higher education when tech companies often offer significantly higher pay, the university has succeeded in recruiting faculty thus far because many AI scholars have a passion to make a difference, says Bellamkonda.
“All of us, in some way, seek impact and meaning,” he says. “Emory positioning AI in the context of things that [scholars] care about, I think, is resonating in our first year of hiring.”
As part of AI.Humanity, Emory is also conducting an international search for a new endowed position, the James W. Wagner Chair in Ethics; the chair will lead the multidisciplinary conversations and collaborations in AI happening across campus.
A faculty task force is also developing a wide range of educational programming on AI and ethics, including major and minor specialties, co-curricular activities, workshops, and research opportunities, all of which will be open to all students. In addition, faculty have been hosting a lecture series and other seminars featuring AI experts to help raise awareness of the initiative.
The ultimate hope, according to Bellamkonda, is that students will receive enough exposure to AI education and training while at Emory to understand the importance of ethics when using this technology later in their careers. In this way, the university will be able to effect change far beyond campus.
“There is an idea that technology will come in and save the world,” Bellamkonda says. “I disagree. As an engineer myself, I believe strongly that I want people who know the human condition to be in charge of using and deploying technology, and that’s the vision driving the AI.Humanity effort.”●
Lisa O’Malley is the assistant editor of INSIGHT Into Diversity.
This article was published in our September 2022 issue.