CMU CoALA Lab

Co- {Augmentation, Learning, & AI}

The CoALA Lab is an interdisciplinary research group at Carnegie Mellon University. We study how humans and AI systems can augment each other’s abilities (co-augmentation) and learn from each other (co-learning). We create new technologies to complement and bring out the best of human ability in fundamentally human endeavors. And we develop new methods and tools to broaden who is able to participate in shaping emerging technologies.

Research areas

 

Augmenting human intelligence

How can we design technologies that augment and amplify human cognitive abilities — bringing out the best of human ability in fundamentally human endeavors such as social, creative, or care-based work?


Participatory and expertise-driven AI

How can we scaffold more responsible & participatory AI development practices — incorporating diverse human expertise across the lifecycle from ideation and problem formulation to design and evaluation?


Designing for complementarity

How might we design systems that combine complementary strengths of humans and AI systems — elevating human expertise and on-the-ground knowledge rather than diminishing it?

 

Values

  • Taking an iterative, de-risking approach to research. At each stage of a project, we identify the quickest research methods (e.g., interviews, observations, lo-fi prototypes, or minimal deployments) to get the insights we need to resolve the largest uncertainties. We regularly reflect and re-plan together.

  • Giving credit where it is due. We credit each other and other researchers/designers appropriately! For instance, this set of principles borrows from Glassman Lab @ Harvard SEAS and from the OH Lab @ CMU. This website’s header image and lab logo were designed by Natalia Oskiera and Mary Beth Kery, respectively!

  • Understanding the broader contexts in which we work, and addressing meaningful problems. Many of the issues we study in this lab (e.g., in human-AI interaction, futures of work, and algorithmic justice & fairness) are tied to deep-rooted systemic, societal problems. At times, our expertise in HCI will be best suited towards making meaningful progress on relatively small pieces of these problems. However, we will frequently reflect on how we might more effectively direct our efforts, through new interdisciplinary collaborations, research-practice and community partnerships, or problem reformulation.

  • Taking care of ourselves and each other. We prioritize getting enough sleep and attending to our mental and physical health, so that we can bring our best selves to our work (see “Lab Counterculture” by Jess Hammer, Alexandra To, & Erica P. Cruz). We also actively work to support each other, and lift each other up! …And even if not everyone is the best of friends, we are civil to each other, and show each other respect as colleagues.

  • Making tacit knowledge visible. We believe that each generation of researchers and designers should work to ensure that the next generation faces fewer unnecessary hurdles than they themselves did. With fewer hurdles in their way, more people will have more opportunities to advance our fields. In line with this belief, we value sharing tacit knowledge with newcomers to the lab, and shining a light on “hidden curricula”.

  • Supporting students’ goals, whatever those may be. We recognize that students at all levels may have a wide range of career goals. We support each other’s goals and help each other reach them!

  • Reflecting on our personal and community practices. This set of values will evolve over time. Every lab member can contribute to shaping the kind of lab they want to be a part of, and we will work together to make it happen!


To learn more, check out the
CoALA Lab PhD Handbook!

People

 
 

Current CoALAs

Ken Holstein
Lab Director

Anna Kawakami
PhD Researcher
(HCI + Human-AI Interaction + Policy)

co-advisor: Haiyi Zhu

Frederic Gmeiner
PhD Researcher
(HCI + Design + Human-AI Interaction)

co-advisor: Nik Martelaro

Luke Guerdan
PhD Researcher
(ML + HCI + Human-AI Interaction)

co-advisor: Steven Wu

Tzu-Sheng Kuo
PhD Researcher
(HCI + Human-AI Interaction + Social Computing)

co-advisor: Haiyi Zhu

 

Wesley Deng
PhD Researcher
(HCI + Human-AI Interaction)

co-advisor: Motahhare Eslami

Alicia DeVrio
PhD Researcher
(HCI + Critical Algorithm Studies)

co-mentor: Jess Hammer

Julia Chadwick
Administrative Coordinator

 
 

Alums

PhD & Postdoctoral

Alex Ahmed
→ Developer at Sassafras Tech Collective

Charvi Rastogi
→ Research Scientist at Google DeepMind

Devansh Saxena
→ Asst. Professor of Data & Information Sciences at University of Wisconsin-Madison

LuEttaMae Lawrence
→ Asst. Professor of Instructional Technologies & Learning Sciences at Utah State University

Masters

Monica Chang
Yu-Jan Chang
Connie Chau
Yang Cheng
David Contreras
Aditi Dhabalia
Harnoor Dhingra
Yvonne Fang
Anushri Gupta
Howard (Ziyu) Han
Gena Hong
Janice Lyu

Ahana Mukhopadhyay
Yunmin Oh
Will Rutter
Harkiran Kaur Saluja
Andrew Sim
Anita Sun
Sophia Timko
Sonia Wu
Shixian Xie
Linda Xue
Yuchen Yao
Jieyu Zhou

Undergraduate & Postbacc

Free S. Bàssïbét
Erica Chiang
Bill Guo
Madeleine Hagar
Alison Hu
Meijie Hu
Karen Kim
Ankita Kundu
Matthew Ok
Lauren Park
Diana Qing

Vianna Seifi
Donghoon Shin
Elena Swecker
Mera Tegene
Lakshmi Tumati
Mahika Varma
Ming Wang
Candace Williams
Zac Yu

Highlighted work

 
 

augmenting intelligence + participatory, expertise-driven AI + designing for complementarity

 

Ground(less) truth: A causal framework for proxy labels in human-algorithm decision-making [FAccT’23]
Guerdan, L., Coston, A., Wu, Z. S., & Holstein, K.
[link to paper]

AI Policy Projector: Grounding LLM policy design in iterative mapmaking [pre-print ‘24]
Lam, M. S., Hohman, F., Moritz, D., Bigham, J. P., Holstein, K.**, & Kery, M. B.**
[link to paper]

Wikibench: Community-driven data curation for AI evaluation on Wikipedia [CHI’24]
Kuo, T., Halfaker, A. L., Cheng, Z., Kim, J., Wu, M., Wu, T., Holstein, K.**, & Zhu, H.**
[link to paper]

PolicyCraft: Supporting collaborative and participatory policy design through case-grounded deliberation [pre-print ‘24]
Kuo, T. S., Chen, Q. Z., Zhang, A. X., Hsieh, J., Zhu, H.**, & Holstein, K.**
[link to paper]

Understanding frontline workers’ and unhoused individuals’ perspectives on AI used in homeless services [CHI’23]
Kuo, T.*, Shen, H.*, Geum, J. S., Jones, N., Hong, J. I., Zhu, H.**, & Holstein, K.**
[link to paper]
Best Paper Award

Improving human-AI partnerships in child welfare: Understanding worker practices, challenges, and desires for algorithmic decision support [CHI’22]
Kawakami, A., Sivaraman, V., Cheng, H., Stapleton, L., Cheng, Y., Qing, D., Perer, A., Wu, Z. S., Zhu, H., & Holstein, K.
[link to paper]
Best Paper Honorable Mention Award

How child welfare workers reduce racial disparities in algorithmic decisions [CHI’22]
Cheng, H.*, Stapleton, L.*, Kawakami, A., Sivaraman, V., Cheng, Y., Qing, D., Perer, A., Holstein, K., Wu, Z. S., & Zhu, H.
[link to paper] [extended analysis] [blog post]

Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity [JLA’19]
Holstein, K., McLaren, B. M., & Aleven, V.
Presented at the ICML’20 Participatory ML workshop.
[link to paper] [link to AI Magazine article]

Replay Enactments: Exploring possible futures through historical data [DIS’20]
Holstein, K.*, Harpstead, E.*, Gulotta, R., & Forlizzi, J.
[link to paper]

 
 
 

augmenting intelligence + designing for complementarity

 

Exploring challenges and opportunities to support designers in learning to co-create with AI-based manufacturing design tools [CHI’23]
Gmeiner, F., Yang, H., Yao, L., Holstein, K., & Martelaro, N.
[link to paper]
Best Paper Honorable Mention Award

Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms [AIED’18]
Holstein, K., McLaren, B. M., & Aleven, V.
[link to paper]
Best Paper Award

“Why do I care what’s similar?” Probing challenges in AI-assisted child welfare decision-making through worker-AI interface design concepts [DIS’22]
Kawakami, A.*, Sivaraman, V.*, Stapleton, L., Cheng, H. F., Perer, A., Wu, Z. S., Zhu, H., & Holstein, K.
[link to paper] [blog post]

Toward supporting perceptual complementarity in human-AI collaboration via reflection on unobservables [CSCW’23]
Holstein, K.*, De-Arteaga, M.*, Tumati, L., & Cheng, Y.
[link to paper]

Training towards critical use: Learning to situate AI predictions relative to human knowledge [CI’23]
Kawakami, A., Guerdan, L., Cheng, Y., Lee, M., Carter, S., Arechiga, N., Glazko, K., Zhu, H.**, & Holstein, K.**
[link to paper]

A taxonomy of human and ML strengths in decision-making to investigate human-ML complementarity [HCOMP’23]
Rastogi, C.*, Liu, L.*, Holstein, K., & Heidari, H.
[link to paper]

“A second voice”: Investigating opportunities and challenges for interactive voice assistants to support home health aides [CHI’22]
Bartle, V., Lyu, J., El Shabazz-Thompson, F., Oh, Y., Chen, A., Chang, Y., Holstein, K., & Dell, N.
[link to paper]

 
 
 
 

augmenting intelligence + participatory, expertise-driven AI

 

Understanding practices, challenges, and opportunities for user-engaged algorithm auditing in industry practice [CHI’23]
Deng, W. H., Guo, B., DeVrio, A., Shen, H., Eslami, M.**, & Holstein, K.**
[link to paper]

Counterfactual prediction under outcome measurement error [FAccT’23]
Guerdan, L., Coston, A., Holstein, K., & Wu, Z. S.
[link to paper]
Best Paper Award

The Situate AI Guidebook: Co-designing a toolkit to support multi-stakeholder, early-stage deliberations around public sector AI proposals [CHI’24]
Kawakami, A., Coston, A., Zhu, H.**, Heidari, H.**, & Holstein, K.**
[link to paper]

Improving fairness in machine learning systems: What do industry practitioners need? [CHI’19]
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H.
[link to paper]  [supplementary materials]

Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [CSCW’21]
Shen, H.*, DeVos, A.*, Eslami, M.**, & Holstein, K.**
[link to paper]

A validity perspective on evaluating the justified use of data-driven decision-making algorithms [SaTML’23]
Coston, A., Kawakami, A., Zhu, H., Holstein, K., & Heidari, H.
[link to paper]
Best Paper Award

Interested in working with us?

 
 
 

If you are interested in collaborating or joining the group, please get in touch!