Maarten Sap

I am an assistant professor at CMU's LTI department, and a part-time research scientist at the Allen Institute for AI (AI2). My research focuses on endowing NLP systems with social intelligence and social commonsense, and understanding social inequality and bias in language.

Before this, I was a Postdoc/Young Investigator at the Allen Institute for AI (AI2), working on project Mosaic. I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi, and interned at AI2 working on social commonsense reasoning and at Microsoft Research working on deep learning models for understanding human cognition.
[bio for talks]

Recent updates:

November 2022 βœ‰οΈπŸ§‘β€πŸŽ“: PhD recruiting info: I will likely be taking at most one student this year, likely to work in the areas of social biases, content moderation, and fairness/ethics/justice in AI/NLP. If you want to work with me, I encourage y'all to apply to CMU directly instead of emailing me. Please see more information here.

November 2022 πŸ‘¨πŸΌβ€πŸ«: Excited to give a talk at the Minnesota NLP seminar, at Amazon, and at the MIT Media lab: Toward Prosocial NLP: Reasoning About And Responding to Toxicity in Language

October 2022 πŸ’­πŸ‘₯: Two papers accepted to πŸ‡¦πŸ‡ͺ EMNLP 2022 πŸ‡¦πŸ‡ͺ! "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs" πŸ€–πŸ’­ and "ProsocialDialog: A Prosocial Backbone for Conversational Agents" πŸ—£πŸ’¬.

October 2022 βœˆπŸ—½: I'm attending Text as Data (TADA2022) in New York City, where my AI2 intern Julia Mendelsohn will be presenting our work on NLP and dogwhistles.

October 2022 πŸ“„πŸ§ : Super excited to have my first PNAS paper accepted: "Quantifying the narrative flow of imagined versus autobiographical stories" out soon!

September 2022 πŸ“„βš–: Excited to have my first NeurIPS paper accepted, and as an oral presentation too, called "Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment."

August 2022 βœˆπŸ™: I moved to Pittsburgh to officially start at CMU's LTI department as an assistant professorπŸ‘¨πŸΌβ€πŸ«. ‍

[older news]


My research group:

Ji Min Mun (LTI PhD student)

Akhila Yerukola (LTI PhD student)

Xuhui Zhou (LTI PhD student)


Overarching Research Themes

Detecting and Mitigating Social Biases in Language

Language can perpetuate social biases and toxicity against oppressed or marginalized groups. I want to investigate new ways of representing and detecting such harmful content in text (e.g., Social Bias Frames) or in conversations (e.g., with ToxiChat). Additionally, I want to harness NLP systems to combat stereotypical or harmful statements in language, through controllable text generation (e.g., with DExperts) or controllable text debiasing (e.g., with PowerTransformer).

In the future, I want to make this technology more context-aware and human-centric, e.g., by incorporating power differentials between speaker and listener, and studying human-in-the-loop methods for toxicity detection or text debiasing.

Commonsense Reasoning for Socially Aware NLP

Through theory of mind, humans are trivially able to reason about other people's intents and reactions to everyday situations. I am interested in studying how AI systems can do this type of social commonsense reasoning. For example, this requires giving models knowledge of social commonsense (e.g., with Event2Mind or ATOMIC, and methods like COMET) or social acceptability (e.g., Social Chemistry). Additionally, this requires creating benchmarks for measuring models' social commonsense abilities (e.g., with Social IQa or Story Commonsense).

In the future, I want to keep investigating this elusive goal of machine social commonsense. Additionally, I want to explore positive applications of this research, e.g., for therapeutic settings or for helping people with cognitive disabilities.

Analyzing the Ethics and Transparency of AI Models

AI and NLP systems unfortunately encode social biases and stereotypes. I'm passionate about analyzing and diagnosing the potential negative societal impacts of these systems. For example, I've uncovered severe racial bias in hate speech detection datasets and models, analyzed whether robustness methods for NLP can mitigate it, and studied the psychological attitudes that cause over- and under-detection of content as toxic. Additionally, I've scrutinized recent pretrained language models and their training data with respect to biases, toxicity, and fake news (e.g., measuring GPT-2 and GPT-3's neural toxic degeneration, and documenting the English C4 Webtext Crawl).

In the future, I plan to keep diagnosing and mitigating the ethical, fairness, and representation issues in AI systems, especially from a human-centric perspective of end-users and other stakeholders.