Maarten Sap

I am an assistant professor at CMU's LTI department with a courtesy appointment in HCII, and a part-time research scientist at the Allen Institute for AI (AI2). My research focuses on endowing NLP systems with social intelligence and social commonsense, and understanding social inequality and bias in language.

Before this, I was a Postdoc/Young Investigator at the Allen Institute for AI (AI2), working on project Mosaic. I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi. I interned at AI2 working on social commonsense reasoning, and at Microsoft Research working on deep learning models for understanding human cognition.
[bio for talks]

Recent updates:

January 2025 ๐Ÿ‘จ๐Ÿผโ€๐Ÿซ๐Ÿง : Happy to give a talk in Artificial Social Intelligence at the Cluster of Excellence "Science of Intelligence" (SCIoI) at the Technische Universitรคt Berlin.

January 2025 ๐Ÿ‘จ๐Ÿผโ€๐Ÿซ๐Ÿ“ข: I'm happy to be giving a talk at the First Workshop on Multilingual Counterspeech Generation at COLING 2025 (remotely)!

December 2024 🇨🇦⛰️: Excited to be attending my very first NeurIPS conference in Vancouver, BC! I'll be giving a talk at New in ML at 3pm on Tuesday!

November 2024: I received a Google Academic Research Award for our work on participatory impact assessment of future AI use cases.

November 2024 ๐Ÿซ‚๐Ÿ‘จโ€๐Ÿซ: Very excited that I now have a courtesy appointment in the Human Computer Interaction Institute!

November 2024 ๐Ÿ”๐Ÿง‘โ€๐ŸŽ“: As a reminder, due to my lab being quite full already, I'm not taking any students in this upcoming PhD application cycle ๐Ÿ˜Ÿ.

November 2024 ๐Ÿ–๏ธ๐Ÿ“š: Excited to give a talk at the 6th Workshop on Narrative Understanding on Computational Methods of Social Causes and Effects of Stories.

[older news]


My research group:

Dan Chechelnitsky

LTI PhD student
co-advised with Chrysoula Zerva

Joel Mire

LTI MLT student

Karina Halevy

LTI PhD student
co-advised with Mona Diab

Jimin Mun

LTI PhD student

Jocelyn Shen

MIT PhD student
co-advised with Cynthia Breazeal

Akhila Yerukola

LTI PhD student

Mingqian Zheng

LTI PhD student
co-advised with Carolyn Rosé

Xuhui Zhou

LTI PhD student


Overarching Research Themes

*Extracted by GPT-4; there may be inconsistencies.*

#### *Ethics and Human-Centered AI*

My research group explores the ethical implications of AI technologies and their impact on human interactions. We focus on understanding diverse perspectives on AI through studies such as [Diverse Perspectives on AI](https://arxiv.org/abs/2502.07287), which examines the acceptability and reasoning surrounding potential AI use cases. Additionally, we investigate biases inherent in AI systems, as shown in the paper [Rejected Dialects: Biases Against African American Language in Reward Models](https://arxiv.org/abs/2401.06730), which highlights discrimination against specific dialects. Lastly, we explore user-driven frameworks for addressing biases in AI companions, contributing to more equitable AI deployment.

#### *Narrative Analysis and Empathy*

My research group explores the complexities of narrative structures and their emotional impacts in various contexts. We study how narratives shape perceptions through works like [The Empirical Variability of Narrative Perceptions of Social Media Texts](https://aclanthology.org/2024.emnlp-main.1113/), which examines differences in how social media narratives are understood. We also look at empathy in storytelling, as illustrated in [HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs](https://arxiv.org/abs/2405.17633), which highlights how narrative styles influence empathetic engagement. Furthermore, our research on [Modeling Empathic Similarity in Personal Narratives](https://arxiv.org/abs/2305.14246) combines narrative analysis with AI methodologies to measure empathic responses.

#### *AI Agents and Social Intelligence*

My research group explores the design and evaluation of AI agents equipped with social intelligence capabilities. We focus on how these agents can interact effectively with humans, as demonstrated in the paper [Is This the Real Life? Is This Just Fantasy? The Misleading Success of Simulating Social Interactions With LLMs](http://arxiv.org/abs/2403.05020), which critiques the efficacy of LLMs in mimicking real human interactions. We also investigate methods to improve AI's truthfulness and utility in social contexts through our work on [AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents](https://arxiv.org/abs/2409.09013). Lastly, our research on [SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents](https://arxiv.org/abs/2310.11667) offers insights into assessing and enhancing the social behaviors of AI agents.

#### *Addressing Toxicity in Language*

My research group explores methods for mitigating toxic language generated by AI systems. We investigate multilingual challenges in toxicity through papers like [PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models](https://arxiv.org/abs/2405.09373), which evaluates model behavior across different linguistic contexts. Additionally, we examine the barriers that counter-speakers face in combating online hate in the paper [Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate](https://arxiv.org/abs/2403.00179), focusing on user experiences and challenges. Our work also addresses the implications of AI-generated content, as seen in the framework [COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements](http://arxiv.org/abs/2306.01985), which provides insights into the context and consequences of harmful language.