U Researchers Address Critical AI Ethics Challenges – @theU


Key Takeaways

  • Researchers at the University of Utah are tackling AI’s ethical challenges from the perspectives of medicine, economics, literature, and political science, showing that technical fixes alone are insufficient.
  • An interdisciplinary workshop used an “open problem session” format to break down silos, prompting concrete collaboration ideas across fields.
  • Participants emphasized that the hardest AI dilemmas are conceptual and political—questions of bias, power, and who gets to define concepts like “repression” or “ground truth.”
  • Organizers plan to sustain the momentum with recurring half‑day workshops and a growing cohort focused on AI and ethics.
  • The initiative underscores the university’s role in subjecting AI enthusiasm to rigorous, critical academic inquiry rather than merely riding the wave of technological hype.

Physician‑Led AI in Clinical Decision‑Making
Physician Ryan A. Metcalf is investigating how generative AI could assist doctors in deciding whether a patient truly needs a blood transfusion—a common, lifesaving yet costly and often overused intervention. “We want AI to flag cases where transfusion is likely unnecessary, but we must ensure it does not override the clinician’s bedside judgment,” Metcalf explained. His work highlights the tension between algorithmic efficiency and the irreplaceable nuance of human expertise, especially in high‑stakes settings where over‑transfusion can lead to adverse outcomes while under‑transfusion jeopardizes patient survival. By integrating AI as a decision‑support tool rather than a replacement, Metcalf aims to preserve clinical autonomy while reducing waste and improving patient safety.

Economic Power and Labor Implications of AI
Economist Ellis Scharfenaker is probing who will control AI’s growing economic influence as it reshapes work, noting the technology’s dual potential to alleviate drudgery and improve safety while simultaneously intensifying surveillance, deskilling, and inequality. “If the gains from AI accrue mainly to owners of capital, we risk deepening the very divides the technology promised to bridge,” Scharfenaker warned. He advocates for policies that democratize access to AI‑driven productivity tools and protect workers from invasive monitoring, arguing that ethical AI deployment must be coupled with robust labor protections and inclusive economic governance to ensure broad societal benefit.

Literature as a Lens for AI Ethics
English professor Elizabeth Callaway argues that literature offers a powerful way to think through ethical dilemmas posed by AI, such as whether companion AI can alleviate loneliness without exploiting it, and whether the best version might actually foster human connection. “Stories force us to confront the messiness of motivation, desire, and unintended consequences—precisely the terrain where AI ethics lives,” Callaway said, referencing her ongoing work on narrative empathy. She suggests that fictional scenarios can serve as thought experiments, helping technologists anticipate societal impacts that pure data‑driven models might overlook, thereby enriching the design of AI systems that respect human values.

Political Science and Global AI Bias
Political scientist Yuree Noh is using AI to analyze a massive global dataset on censorship and surveillance, questioning how to ensure a large language model’s judgments hold up across countries—including authoritarian regimes—without reinforcing biases that could shape policy. “What if these systematic biases are affecting those who have the least power to push back?” Noh asked, citing aid allocation as a concrete concern. She is experimenting with strategies such as explicitly informing models about a nation’s political system to sharpen analysis, or leveraging donated chatbot data and secure platforms like Signal to capture voices that standard surveys might miss, striving for a more equitable grounding of AI‑derived insights.

Workshop Origins and Interdisciplinary Goals
The initiative emerged from a conversation between philosophy fellow Tuan Nguyen and computer‑science professor Jeff Phillips, who sought to create a space where technologists and humanists could confront AI’s ethical dimensions together. “We need to pause and think, ‘Is it ok to do it that way?’” Phillips remarked, framing the workshop as a deliberate interruption of unchecked AI enthusiasm. Their goal was not merely to teach ethical theories but to cultivate the “harder human work of making sound moral judgements,” a skill that arises only through sustained cross‑disciplinary dialogue.

Structure of the Open Problem Session
The workshop’s hallmark was an open problem session: a dozen researchers pitched big questions to the room, then invited interested colleagues into breakout groups to work toward solutions—a format borrowed from computer‑science conferences Phillips has attended. Nguyen cautioned, “The warning: this is highly experimental. We have not seen anyone try to do an open problem session in an interdisciplinary setting before, so we have no idea if this will work.” Despite the uncertainty, the session generated lively exchanges, with participants moving fluidly between topics ranging from algorithmic fairness in healthcare to the governance of AI‑generated content in authoritarian contexts.

Outcomes and New Collaboration Ideas
After the event, Scharfenaker said the sessions did a better job of fostering genuine interdisciplinary conversations than other campus gatherings, leaving him with several concrete ideas for collaborations that would not have emerged from his own department alone. “The most valuable aspect, by far, was seeing what questions other departments are actually working on and where our concerns overlap,” he noted, adding that the visibility revealed not just shared interests but shared blind spots—arguably more useful for anticipating unintended consequences. Noh echoed this sentiment, describing direct, substantive feedback on a problem she felt stuck on, including the novel idea of using political‑system disclosures to refine AI analysis of repression.

The Conceptual and Political Core of AI Ethics
Both Noh and Scharfenaker emphasized that the hardest problems in AI are not merely technical but conceptual and political. “Who decides what ‘repression’ means, for example? What counts as ground truth when even human coders disagree?” Noh asked, underscoring the need for ongoing deliberation about value‑laden categories that shape model outputs. She hopes future iterations of the workshop will dig deeper into these messy, human‑centric questions, allowing scholars to co‑craft frameworks that acknowledge uncertainty and power dynamics rather than pretending AI can be neutral.

Future Plans for a Sustained AI‑Ethics Cohort
Looking ahead, Phillips and Nguyen aim to build a lasting cohort around AI and ethics at the University of Utah, planning to hold one or two half‑day workshops each semester. They invite interested faculty, students, and staff to sign up for initiative emails to stay informed about upcoming events. By institutionalizing these cross‑disciplinary encounters, the university seeks to fulfill its mission of subjecting AI enthusiasm to the kind of critical scrutiny that only genuine academic inquiry can provide—ensuring that innovation serves democratic knowledge production, access, and the broader public good rather than merely advancing technological capability for its own sake.
