Expanding the Design Space for Explainable AI in Human-AI Interactions

The next BostonCHI meeting is Expanding the Design Space for Explainable AI in Human-AI Interactions on Mon, Nov 3 at 6:00 PM.

Register here

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Katelyn Morrison

Expanding the Design Space for Explainable AI in Human-AI Interactions 

Explainable AI (XAI) has largely been designed and evaluated through the lens of four recurring metrics: Trust, Reliance, Acceptance, and Performance (TRAP). While these metrics are essential for developing safe and responsible AI, they can also trap us in a constrained design space for how explanations provide value in human-AI interactions. Furthermore, mixed results on whether XAI actually helps calibrate reliance or foster appropriate trust raise the question of whether we are designing XAI with the right goals in mind. This talk explores how we can expand the design space for XAI by moving beyond the TRAP goals. I will discuss how domain experts appropriate AI explanations for purposes unanticipated by designers, how AI explanations can mediate understanding between physicians and other stakeholders, and how we can repurpose generative AI as an explanation tool to support various goals. By reframing XAI as a practical tool for reasoning and human–human interaction, rather than solely as a transparency mechanism, this talk invites us to consider what’s next for explainable AI.

About our speaker
Katelyn Morrison is a 5th-year Ph.D. candidate in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science, advised by Adam Perer. Her research bridges technical machine learning approaches and human-centered methods to design and evaluate human-centered explainable AI (XAI) systems in high-stakes contexts, such as healthcare. In recognition of her work at the intersection of AI and health, she was awarded a Digital Health Innovations Fellowship from the Center for Machine Learning and Health at Carnegie Mellon University. Her research experience spans industry, government, and non-profit organizations, including the Software Engineering Institute, Microsoft Research, and IBM Research. Before joining Carnegie Mellon University, Katelyn earned her bachelor’s degree in Computer Science with a certificate in Sustainability from the University of Pittsburgh. She is currently on the job market for faculty, postdoc, and research scientist positions.

Navigation: Enter the building through this gate and turn left.

AI-Supported Multitasking in Human-Computer Interaction

The next BostonCHI meeting is AI-Supported Multitasking in Human-Computer Interaction on Wed, Oct 15 at 6:00 PM.

Register here

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Philipp Wintersberger

AI-Supported Multitasking in Human-Computer Interaction

In the future, humans will cooperate with a wide range of AI-based systems in both work (i.e., decision and recommender systems, language models, or industrial robots) and private (i.e., fully- or semi-automated vehicles, smart home applications, or ubiquitous computing systems) environments. Cooperation with these systems involves both shared (i.e., concurrent multitasking) and traded (i.e., task switching) interaction. Since frequent attention switching is known to degrade performance and increase error rates and stress, future systems must treat human attention as a limited resource if they are to be perceived as valuable and trustworthy. This talk addresses the emerging problems that occur when users frequently switch their attention between multiple systems or activities and proposes a new class of AI-based interactive systems that integrally manage user attention. To this end, we designed a software architecture that uses reinforcement learning and principles of computational rationality to optimize task switching. Computational rationality allows the system to simulate and adapt to different types of users, while reinforcement learning requires no labeled training data, so the concept can be applied to a wide range of tasks. The architecture has demonstrated its potential in laboratory studies and is currently being extended to support a variety of multitasking situations. The talk concludes with a critical assessment of the underlying concepts and a research agenda for improving cooperation with computer systems.

About our speaker
Philipp Wintersberger is a Full Professor of Intelligent User Interfaces at IT:U Linz, as well as an external lecturer at TU Wien and FH Hagenberg. He leads an interdisciplinary team of scientists on FWF-, FFG-, and industry-funded research projects focusing on human-machine cooperation in safety-critical AI-based systems. He has (co)authored numerous works published in major journals and conference proceedings (such as ACM CHI, IUI, AutomotiveUI, and Human Factors), and his contributions have won several awards. He is a member of the ACM AutomotiveUI steering committee, has served HCI conferences in various roles (Technical Program Chair of AutomotiveUI ’21, Workshop Chair of MuM ’23, Diversity and Inclusion Chair of MuC ’22), and is one of the main organizers of the CHI workshop on Explainable Artificial Intelligence (XAI).

Navigation: Enter the building through this gate and turn left.

Personal Anecdotes of AI Bias and where we go from here

The next BostonCHI meeting is Personal Anecdotes of AI Bias and where we go from here on Tue, Sep 23 at 6:00 PM.

Register here

BostonCHI presents a hybrid talk by Avijit Ghosh

Personal Anecdotes of AI Bias and where we go from here

Artificial intelligence systems increasingly shape our daily experiences, from voice assistants to image generation tools, yet these technologies often fail to recognize or fairly represent diverse users. This talk combines personal narratives with empirical research to examine how AI bias manifests in real-world interactions and its psychological impact on marginalized communities. Drawing from studies on accent bias in voice cloning services and representation gaps in image search results, we explore how current AI systems perpetuate exclusion through both technical limitations and training data skewed toward Western, predominantly white perspectives. The presentation reveals the human cost of algorithmic bias—from feelings of erasure to impacts on self-esteem—while also highlighting emerging efforts to create more inclusive AI systems. Through case studies ranging from voice recognition failures to beauty standard reinforcement, we demonstrate how personal experiences of bias reflect broader systemic issues in AI development and deployment.

About our speaker
Dr. Avijit Ghosh is an Applied Policy Researcher at Hugging Face. He works at the intersection of machine learning, ethics, and policy, aiming to bring fair ML algorithms into real-world use. He has published and peer-reviewed several research papers in top ML and AI ethics venues, and has organized academic workshops as a member of QueerInAI. His work has been covered in the press, including articles in The New York Times, Forbes, The Guardian, ProPublica, Wired, and the MIT Technology Review. Dr. Ghosh has been an invited speaker, as a Responsible AI expert, at high-impact events such as SXSW, the MIT Sloan AI Conference, and the Summit on State AI Legislation. He has also engaged with policymakers at various levels in the United States, United Kingdom, and Singapore. His research and outreach have led to real-world impact, such as helping shape regulation in New York City and prompting Facebook to remove its biased ad-targeting algorithm.
