I am an assistant professor at CMU's Language Technologies Institute (LTI), with a courtesy appointment in the Human-Computer Interaction Institute (HCII), and a part-time research scientist and AI safety lead at the Allen Institute for AI (AI2). My research focuses on (1) measuring and improving AI systems' social and interactional intelligence, (2) assessing and combating social inequality, safety risks, and socio-cultural biases in human- or AI-generated language, and (3) building narrative language technologies for prosocial outcomes.
I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi.
[bio for talks]
October 2025 🇨🇦: Excited to be attending COLM 2025 in Montreal this October! I'll be giving a talk at the Social Sim Workshop on Unlocking Social Intelligence in AI agents. I'm also thrilled that five papers I co-authored will be presented by my amazing collaborators at COLM: HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions (led by Xuhui Zhou et al.); ALFA: Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning (co-led by Jimin Mun et al.); PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages; Fluid Language Model Benchmarking; and The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains.
August 2025: Incredibly honored to be one of 7 US recipients of the 2025 Okawa Research Grant from the Okawa Foundation!
August 2025 🧑‍🎓: Welcoming my first postdoc, Vasudha Varadarajan, to the lab!
August 2025 👨🏼‍🏫: Excited to give a (virtual) talk about Responsible AI for Diverse Users and Cultures at the Gender Bias in NLP workshop at ACL 2025!
July 2025 🧠🛡️: Five papers were accepted to COLM 2025! Highlights include HAICOSYSTEM, a framework for sandboxing safety risks in human-AI interaction; ALFA, which aligns LLMs to ask better clinical questions; and PolyGuard, a multilingual moderation tool for unsafe content. Two more papers will be released soon :)
May 2025 🧑‍💻: Super excited to announce that our paper Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance received the Best Paper Runner-Up award at NAACL 2025. Huge congratulations to Kaitlyn!
April 2025 🎙️: Though I will not be attending NAACL 2025, my students and collaborators will be presenting some exciting papers: Joel Mire on Rejected Dialects: Biases Against African American Language in Reward Models; Akhila Yerukola on NormAd: A Framework for Measuring the Cultural Adaptability of Large Language Models; Kaitlyn Zhou on Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance; and Xuhui Zhou on AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents.
LTI PhD student (co-advised with Chrysoula Zerva)
LTI PhD student
LTI PhD student (co-advised with Mona Diab)
LTI PhD student
MIT PhD student (co-advised with Cynthia Breazeal)
HCII PhD student
LTI Postdoc
LTI PhD student
LTI PhD student (co-advised with Carolyn Rosé)
LTI PhD student