CMU CoALA Lab

Co- {Augmentation, Learning, & AI}

The CoALA Lab is an interdisciplinary research group within the Human–Computer Interaction Institute at Carnegie Mellon University. Through partnerships with practitioners and community stakeholders, we create new technologies to complement and bring out the best of human ability in fundamentally human endeavors (such as social, creative, or care-based work). We study how AI systems are being designed and used to augment worker practices today. We develop new methods and tools to support more responsible AI design and deployment in practice.

Research areas

 

Augmenting human intelligence at work

How can we design technologies to bring out the best of human ability in fundamentally human endeavors such as social, creative, or care-based forms of work?


Supporting responsible AI in practice

How can we scaffold more responsible & participatory practices across the AI development lifecycle, from early ideation and problem formulation to system design and evaluation?


Designing for complementarity

How might we design systems to combine complementary strengths of humans and AI systems?

 

Values

  • Taking an iterative, de-risking approach to research. At each stage of a project, we identify the quickest research methods (e.g., interviews, observations, lo-fi prototypes, or minimal deployments) to get the insights we need to resolve the largest uncertainties. We regularly reflect and re-plan together.

  • Giving credit where it is due. We credit each other and other researchers/designers appropriately! For instance, this set of principles borrows from Glassman Lab @ Harvard SEAS and from the OH Lab @ CMU. This website’s header image and lab logo were designed by Natalia Oskiera and Mary Beth Kery, respectively!

  • Understanding the broader contexts in which we work, and addressing meaningful problems. Many of the issues we study in this lab (e.g., in human-AI interaction, futures of work, and algorithmic justice & fairness) are tied to deep-rooted systemic, societal problems. At times, our expertise in HCI will be best suited to making meaningful progress on relatively small pieces of these problems. However, we will frequently reflect on how we might more effectively direct our efforts through new interdisciplinary collaborations, research-practice and community partnerships, or problem reformulation.

  • Taking care of ourselves and each other. We prioritize getting enough sleep and attending to our mental and physical health, so that we can bring our best selves to our work (see “Lab Counterculture” by Jess Hammer, Alexandra To, & Erica P. Cruz). We also actively work to support each other, and lift each other up! …And even if not everyone is the best of friends, we are civil to each other, and show each other respect as colleagues.

  • Making tacit knowledge visible. We believe that each generation of researchers and designers should work to ensure that the next generation faces fewer unnecessary hurdles than they themselves did. With fewer hurdles in their way, more people will have more opportunities to advance our fields. In line with this belief, we value sharing tacit knowledge with newcomers to the lab, and shining a light on “hidden curricula”.

  • Supporting students’ goals, whatever those may be. We recognize that students at all levels may have a wide range of career goals. We support each other’s goals and help each other reach them!

  • Reflecting on our personal and community practices. This set of values will evolve over time. Every lab member can contribute to shaping the kind of lab they want to be a part of, and we will work together to make it happen!


To learn more, check out the CoALA Lab PhD Handbook!

People

 
 

Current CoALAs

Ken Holstein
Lab Director

Julia Chadwick
Administrative Coordinator

Devansh Saxena
Presidential Postdoctoral Fellow (HCI + ML + Design)

co-mentors: Jodi Forlizzi & John Zimmerman

Alicia DeVrio
PhD Researcher
(HCI + Critical Algorithm Studies)

co-mentor: Jess Hammer

Anna Kawakami
PhD Researcher
(HCI + Human-AI Interaction + Policy)

co-advisor: Haiyi Zhu

 

Frederic Gmeiner
PhD Researcher
(HCI + Design + Human-AI Interaction)

co-advisor: Nik Martelaro (Augmenting Designer Capabilities Lab)

Luke Guerdan
PhD Researcher
(ML + HCI + Human-AI Interaction)

co-advisor: Steven Wu

Tzu-Sheng Kuo
PhD Researcher
(HCI + Human-AI Interaction + Social Computing)

co-advisor: Haiyi Zhu


Wesley Deng
PhD Researcher
(HCI + Human-AI Interaction)

co-advisor: Motahhare Eslami

Ming Wang
High School Researcher
(HCI + ML)

co-mentor: Luke Guerdan

 
 

Alums

PhD & Postdoctoral

Alex Ahmed
→ Developer at Sassafras Tech Collective

Charvi Rastogi
→ Research Scientist at Google Research, Data Excellence for Evaluating Responsibly (DEER) group

LuEttaMae Lawrence
→ Asst. Professor of Instructional Technologies & Learning Sciences at Utah State University

Masters

Monica Chang
Yu-Jan Chang
Connie Chau
Yang Cheng
David Contreras
Aditi Dhabalia
Harnoor Dhingra
Yvonne Fang
Anushri Gupta
Howard (Ziyu) Han
Gena Hong
Janice Lyu

Ahana Mukhopadhyay
Yunmin Oh
Will Rutter
Harkiran Kaur Saluja
Andrew Sim
Anita Sun
Sophia Timko
Sonia Wu
Shixian Xie
Linda Xue
Yuchen Yao
Jieyu Zhou

Undergraduate & Postbacc

Erica Chiang
Bill Guo
Madeleine Hagar
Alison Hu
Meijie Hu
Karen Kim
Ankita Kundu
Matthew Ok
Lauren Park
Diana Qing

Vianna Seifi
Free S. Bàssïbét
Donghoon Shin
Elena Swecker
Mera Tegene
Lakshmi Tumati
Mahika Varma
Candace Williams
Zac Yu

Highlighted work

 
 

augmenting intelligence + supporting responsible AI + designing for complementarity

 

Ground(less) truth: A causal framework for proxy labels in human-algorithm decision-making [FAccT’23]
Guerdan, L., Coston, A., Wu, Z. S., & Holstein, K.
[link to paper]

Wikibench: Community-driven data curation for AI evaluation on Wikipedia [CHI’24]
Kuo, T., Halfaker, A. L., Cheng, Z., Kim, J., Wu, M., Wu, T., Holstein, K.**, & Zhu, H.**
[link to paper]

Understanding frontline workers’ and unhoused individuals’ perspectives on AI used in homeless services [CHI’23]
Kuo, T.*, Shen, H.*, Geum, J. S., Jones, N., Hong, J. I., Zhu, H.**, & Holstein, K.**
[link to paper]
Best Paper Award

How child welfare workers reduce racial disparities in algorithmic decisions [CHI’22]
Cheng, H.*, Stapleton, L.*, Kawakami, A., Sivaraman, V., Cheng, Y., Qing, D., Perer, A., Holstein, K., Wu, Z. S., & Zhu, H.
[link to paper] [extended analysis] [blog post]

Improving human-AI partnerships in child welfare: Understanding worker practices, challenges, and desires for algorithmic decision support [CHI’22]
Kawakami, A., Sivaraman, V., Cheng, H., Stapleton, L., Cheng, Y., Qing, D., Perer, A., Wu, Z. S., Zhu, H., & Holstein, K.
[link to paper]
Best Paper Honorable Mention Award

Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity [JLA’19]
Holstein, K., McLaren, B. M., & Aleven, V.
Presented at the ICML’20 Participatory ML workshop.
[link to paper]

Toward supporting perceptual complementarity in human-AI collaboration via reflection on unobservables [CSCW’23]
Holstein, K.*, De-Arteaga, M.*, Tumati, L., & Cheng, Y.
[link to paper]

Training towards critical use: Learning to situate AI predictions relative to human knowledge [CI’23]
Kawakami, A., Guerdan, L., Cheng, Y., Lee, M., Carter, S., Arechiga, N., Glazko, K., Zhu, H.**, & Holstein, K.**
[link to paper]

A taxonomy of human and ML strengths in decision-making to investigate human-ML complementarity [HCOMP’23]
Rastogi, C.*, Liu, L.*, Holstein, K., & Heidari, H.
[link to paper]

“Why do I care what’s similar?” Probing challenges in AI-assisted child welfare decision-making through worker-AI interface design concepts [DIS’22]
Kawakami, A.*, Sivaraman, V.*, Stapleton, L., Cheng, H. F., Perer, A., Wu, Z. S., Zhu, H., & Holstein, K.
[link to paper] [blog post]

Replay Enactments: Exploring possible futures through historical data [DIS’20]
Holstein, K.*, Harpstead, E.*, Gulotta, R., & Forlizzi, J.
[link to paper]

 
 
 
 

augmenting intelligence + designing for complementarity

 

Exploring challenges and opportunities to support designers in learning to co-create with AI-based manufacturing design tools [CHI’23]
Gmeiner, F., Yang, H., Yao, L., Holstein, K., & Martelaro, N.
[link to paper]
Best Paper Honorable Mention Award

Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms [AIED’18]
Holstein, K., McLaren, B. M., & Aleven, V.
[link to paper]
Best Paper Award

“A second voice”: Investigating opportunities and challenges for interactive voice assistants to support home health aides. [CHI’22]
Bartle, V., Lyu, J., El Shabazz-Thompson, F., Oh, Y., Chen, A., Chang, Y., Holstein, K., Dell, N.
[link to paper]

 
 
 

supporting responsible AI + augmenting intelligence

 

Understanding practices, challenges, and opportunities for user-engaged algorithm auditing in industry practice [CHI’23]
Deng, W. H., Guo, B., DeVrio, A., Shen, H., Eslami, M.**, & Holstein, K.**
[link to paper]

Counterfactual prediction under outcome measurement error [FAccT’23]
Guerdan, L., Coston, A., Holstein, K., & Wu, Z. S.
[link to paper]
Best Paper Award

A validity perspective on evaluating the justified use of data-driven decision-making algorithms [SaTML’23]
Coston, A., Kawakami, A., Zhu, H., Holstein, K., & Heidari, H.
[link to paper]
Best Paper Award

Improving fairness in machine learning systems: What do industry practitioners need? [CHI’19]
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H.
[link to paper]  [supplementary materials]

Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [CSCW’21]
Shen, H.*, DeVos, A.*, Eslami, M.**, & Holstein, K.**
[link to paper]

 

Courses

Interested in working with us?

 
 
 

If you are interested in collaborating or joining the group, please get in touch!


Note that:

  • We are currently not accepting new PhD students for the 2025-2026 academic year. However, beyond PhD advising, feel free to reach out if you are interested in exploring potential collaboration opportunities with folks in the lab.