Maarten Sap

I am an assistant professor at CMU's LTI department with a courtesy appointment in HCII, and a part-time research scientist at the Allen Institute for AI (AI2). My research focuses on endowing NLP systems with social intelligence and social commonsense, and understanding social inequality and bias in language.

Before this, I was a Postdoc/Young Investigator at the Allen Institute for AI (AI2), working on project Mosaic. I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi. During my PhD, I interned at AI2, working on social commonsense reasoning, and at Microsoft Research, working on deep learning models for understanding human cognition.
[bio for talks]

Recent updates:

December 2024 πŸ‡¨πŸ‡¦β›°οΈ: Excited to be attending my very first NeurIPS conference in Vancouver BC! I'll be giving a talk at New in ML at 3pm on Tuesday!

November 2024 πŸ«‚πŸ‘¨β€πŸ«: Very excited that I now have a courtesy appointment in the Human Computer Interaction Institute!

November 2024 πŸ”πŸ§‘β€πŸŽ“: As a reminder, due to my lab being quite full already, I'm not taking any students in this upcoming PhD application cycle 😟.

November 2024 πŸ–οΈπŸ“š: Excited to give a talk at the 6th Workshop on Narrative Understanding on Computational Methods of Social Causes and Effects of Stories.

November 2024 πŸ–οΈπŸŠ: Excited to attend EMNLP in Miami, where my students will be presenting their papers: Joel Mire on The Empirical Variability of Narrative Perceptions of Social Media Texts, Jocelyn Shen on HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs, and Xuhui Zhou on Is This the Real Life? Is This Just Fantasy? The Misleading Success of Simulating Social Interactions With LLMs.

November 2024 πŸ§‘β€πŸŽ“πŸ‘¨πŸΌβ€πŸ«: Giving a talk at the University of Pittsburgh CS colloquium on Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs (Fall 2024 version). Recording is on Youtube.

October 2024 πŸ—½πŸ¦: Giving a talk in the Columbia NLP seminar on Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs (Fall 2024 version).

[older news]


My research group:

Dan Chechelnitsky

LTI PhD student
co-advised with Chrysoula Zerva

Joel Mire

LTI MLT student

Karina Halevy

LTI PhD student
co-advised with Mona Diab

Jimin Mun

LTI PhD student

Jocelyn Shen

MIT PhD student
co-advised with Cynthia Breazeal

Akhila Yerukola

LTI PhD student

Mingqian Zheng

LTI PhD student
co-advised with Carolyn RosΓ©

Xuhui Zhou

LTI PhD student


Overarching Research Themes

*Extracted by GPT-4, there may be inconsistencies.*

#### *Ethics in AI Development*

My research group explores the complex dimensions of ethical AI and its implications for society. One of our pivotal works, [HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions](http://arxiv.org/abs/2409.16427), introduces a framework designed to simulate potential safety risks associated with human-AI interactions. We also investigate the balance between usability and ethical considerations in [AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents](https://arxiv.org/abs/2409.09013), which highlights the dilemmas LLMs face in delivering truthful responses. Our efforts culminate in [Particip-AI: A Democratic Surveying Framework](https://arxiv.org/abs/2403.14791), which aims to engage a broader audience in discussions of future AI applications and their impacts.

#### *Narrative Dynamics in AI*

My research group explores the way narratives are constructed and perceived through AI technologies. We examine emotional resonance in storytelling in [HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs](https://arxiv.org/abs/2405.17633), which studies how narrative styles can evoke empathy. Additionally, our work [Modeling Empathic Similarity in Personal Narratives](https://arxiv.org/abs/2305.14246) quantifies connections between audiences and narratives, enriching our understanding of storytelling mechanics. Another significant contribution, [Quantifying the narrative flow of imagined versus autobiographical stories](https://www.pnas.org/doi/10.1073/pnas.2211715119), provides insights into how different types of personal stories affect listener engagement.

#### *Social Intelligence and Simulation*

My research group explores how social intelligence can be embedded and evaluated in language models. We critically assess the limitations of simulating genuine interpersonal engagements in [Is This the Real Life? Is This Just Fantasy? The Misleading Success of Simulating Social Interactions With LLMs](http://arxiv.org/abs/2403.05020), which reveals the gaps between simulated responses and authentic human communication. To foster more effective models, we study the role of context in understanding social dynamics in [Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models](https://arxiv.org/abs/2305.14763). Our ongoing efforts also led to [SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents](https://arxiv.org/abs/2310.11667), which advances measures of social reasoning capabilities in AI agents.

#### *Addressing Toxic Language*

My research group explores the pressing challenges of mitigating toxic language generation in AI systems. We highlight multilingual aspects of this issue in [PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models](https://arxiv.org/abs/2405.09373), which scrutinizes how toxic content manifests across languages. Another critical perspective is presented in [Counterspeakers’ Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate](https://arxiv.org/abs/2403.00179), offering insights into the requirements for combating hate speech effectively. Additionally, our work [Leftover-Lunch: Advantage-based Offline Reinforcement Learning for Language Models](https://arxiv.org/abs/2305.14718) investigates reinforcement learning strategies to enhance ethical dialogue generation.