Maarten Sap

I am an assistant professor in CMU's LTI department with a courtesy appointment in HCII, and a part-time research scientist and AI safety lead at the Allen Institute for AI (AI2). My research focuses on (1) measuring and improving AI systems' social and interactional intelligence, (2) assessing and combating social inequality, safety risks, and socio-cultural biases in human- or AI-generated language, and (3) building narrative language technologies for prosocial outcomes. I was named a 2025 Packard Fellow and a recipient of the 2025 Okawa Research Award.

I received my PhD from the University of Washington where I was advised by Noah Smith and Yejin Choi.
[bio for talks]

Recent updates:

December 2025 πŸ…πŸ“ƒ: Very excited to have our paper Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond) selected for a Best Paper Award at NeurIPS 2025 (Datasets and Benchmarks Track)!! Huge congrats to the first author Liwei Jiang!!!

November 2025 πŸ’ŽπŸš€: Honored to be a Spring 2025 recipient of the Amazon Research Award for our project on measuring AI agentic safety!

October 2025 πŸ…β­: I’m super excited and grateful to announce that I'm part of the 2025 class of Packard Fellows. The Packard Foundation and this fellowship will allow me to explore exciting research directions towards culturally responsible and safe AI 🌍🌈

October 2025 πŸ”πŸ§‘β€πŸŽ“: Due to my lab being quite full already, I'm not taking looking for any new students in this upcoming PhD application cycle 😟.

October 2025 πŸ‡¨πŸ‡¦πŸŽ‰: Excited to be attending COLM 2025 in Montreal this October! I'll be giving a talk at the Social Sim Workshop on Unlocking Social Intelligence in AI agents. I'm also thrilled that five papers I co-authored will be presented by my amazing collaborators at COLM: HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions (led by Xuhui Zhou et al.), ALFA: Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning (co-led by Jimin Mun et al.), PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages, Fluid Language Model Benchmarking, and The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains.

August 2025 🌟: Incredibly honored to be one of 7 US recipients of the 2025 Okawa Research Grant from the Okawa Foundation!

August 2025 πŸ§‘β€πŸŽ“: Welcoming my first postdoc, Vasudha Varadarajan, to the lab!

[older news]


My research group:

Dan Chechelnitsky

CMU Portugal LTI PhD student
co-advised with Chrysoula Zerva

Joel Mire

LTI PhD student

Karina Halevy

LTI PhD student
co-advised with Mona Diab

Jimin Mun

LTI PhD student

Jocelyn Shen

MIT PhD student
co-advised with Cynthia Breazeal

Kynnedy Smith

HCII PhD student
co-advised with Motahhare Eslami

Vasudha Varadarajan

LTI Postdoc

Akhila Yerukola

LTI PhD student

Mingqian Zheng

LTI PhD student
co-advised with Carolyn RosΓ©

Xuhui Zhou

LTI PhD student


Overarching Research Themes

Themes extracted and images generated with the OpenAI API; there may be inconsistencies.

Responsible AI and Ethical Interactions

My research group explores the ethical considerations surrounding AI interactions, focusing on user perceptions and societal impacts. Our paper, [Black LLMirror: User (Self) Perceptions in Black American English Interactions with LLMs](https://arxiv.org/abs/2401.06730), sheds light on how self-perceptions influence interactions with AI language models, highlighting the need for inclusive design. Additionally, we investigate the effects of contextual factors on user preferences in our work, [Let Them Down Easy! Contextual Effects of LLM Guardrails on User Perceptions and Preferences](https://arxiv.org/abs/2506.00195), which emphasizes the implications of AI safety mechanisms. Our framework, [EVALUESTEER: Measuring Reward Model Steerability Towards Values and Preference](https://arxiv.org/abs/2510.06370), further contributes to understanding how reward models can be aligned with societal values.

Exploring Narratives in AI

My research group explores the intersection of narrative understanding and AI, focusing on how stories shape human experience and interaction with technology. We specifically examine conversational strategies and empathy within narratives, as demonstrated in our paper, [HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs](https://arxiv.org/abs/2405.17633). We also address the complexities of narrative interpretation in the digital age in [The Empirical Variability of Narrative Perceptions of Social Media Texts](https://aclanthology.org/2024.emnlp-main.1113/), which provides insights into how digital narratives are perceived. We further introduce [Social Story Frames: Contextual Reasoning about Narrative Intent and Reception](https://arxiv.org/abs/2512.15925), which explores how narrative structures influence user engagement and comprehension.

AI Agents and Social Intelligence

My research group explores the development of AI agents with enhanced social intelligence and their implications for human-AI interaction. Our recent work, [SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents](https://arxiv.org/abs/2310.11667), offers a framework for assessing AI's ability to navigate social interactions effectively. We also investigate how multi-perspective theory of mind can facilitate this understanding in our research on [SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions](https://arxiv.org/abs/2506.23046). Additionally, the paper [OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety](https://arxiv.org/abs/2507.06134) addresses critical safety concerns in the deployment of intelligent agents.

Bias Mitigation in Language Models

My research group explores strategies for identifying and mitigating biases in language models to enhance fairness and inclusivity. In our work, [Mitigating Bias in RAG: Controlling the Embedder](https://arxiv.org/abs/2502.17390), we present methodologies for controlling bias in retrieval-augmented generation systems. Another important contribution is [Rejected Dialects: Biases Against African American Language in Reward Models](https://arxiv.org/abs/2502.12858), which exposes how reward models penalize African American Language. Moreover, we examine the societal impacts of online hate and the needs of those who confront it in [Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate](https://arxiv.org/abs/2403.00179).