November 2024: Very excited that I now have a courtesy appointment in the Human-Computer Interaction Institute!
November 2024: As a reminder, because my lab is quite full already, I'm not taking any students in this upcoming PhD application cycle.
November 2024: Excited to give a talk on Computational Methods of Social Causes and Effects of Stories at the 6th Workshop on Narrative Understanding.
November 2024: Excited to attend EMNLP in Miami, where my students will be presenting their papers: Joel Mire on The Empirical Variability of Narrative Perceptions of Social Media Texts, Jocelyn Shen on HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs, and Xuhui Zhou on Is This the Real Life? Is This Just Fantasy? The Misleading Success of Simulating Social Interactions With LLMs.
November 2024: Giving a talk at the University of Pittsburgh CS colloquium on Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs (Fall 2024 version). A recording is on YouTube.
October 2024: Giving a talk in the Columbia NLP seminar on Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs (Fall 2024 version).
October 2024: Excited to attend COLM at UPenn in Philadelphia and to revisit my old haunts around UPenn (from 10 years ago).
September 2024: Because my lab is quite full already, I'm very likely not taking any students in this upcoming PhD application cycle.
August 2024: Excited to start year three and to welcome two new members to the Sapling lab (Dan, Mingqian)!
July 2024: Our paper Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits was accepted to AIES!
July 2024: We have two workshops accepted to NeurIPS 2024: Socially Responsible Language Modelling Research (SoLaR) and Pluralistic Alignment! Stay tuned for the CFPs!
July 2024: Our paper PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models was accepted to COLM! We'll see you all in Philly!
June 2024: Really happy to participate in the Princeton Workshop for Narrative Possibilities and give a talk on Developing Computational Analyses of the Social Aspects of Narratives.
May 2024: My student Jimin Mun will be presenting our paper Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate at CHI 2024, and Xuhui Zhou our paper SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents at ICLR 2024!
May 2024: Excited to talk about social agents and Sotopia at the CMU Agent Workshop 2024!
April 2024: Honored to give a talk at the UNC Chapel Hill Symposium on AI and Society on Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs, and excited about all the interesting discussions around these topics!
March 2024: Excited for Natalie Shapira to present our paper Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models at EACL 2024 in Malta!
March 2024: Excited to unveil the camera-ready versions of our ICLR- and CHI-accepted papers: SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents, Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory, and Leftover-Lunch: Advantage-based Offline Reinforcement Learning for Language Models at ICLR 2024, and Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate at CHI 2024.
January 2024: I've been prepping and polishing slides for my class 11-830 Ethics, Social Biases, and Positive Impact in Language Technologies, which I'll be teaching alone this semester!
December 2023: Super excited that we won an Outstanding Paper Award at EMNLP 2023 for our paper SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization!
November 2023: Excited to unveil the camera-ready versions of our EMNLP papers! (1) "Don't Take This Out of Context!" On the Need for Contextual Models and Evaluations for Stylistic Rewriting, (2) SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization, (3) FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions, (4) Modeling Empathic Similarity in Personal Narratives, (5) BiasX: "Thinking Slow" in Toxic Language Annotation with Explanations of Implied Social Biases, and (6) Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language.
August 2023: Year two of being a professor has started! I'm excited about this coming year, and about teaching the Data Science Seminar!
August 2023: I was invited to give a remote talk on The Pivotal Role of Social Context in Toxic Language Detection to Spotify's Ethical AI team!
July 2023: Excited to give a (virtual) keynote talk at the first Workshop on Theory of Mind at ICML 2023: Towards Socially Aware AI with Pragmatic Competence.
July 2023: Extremely excited to share that we won an Outstanding Paper award for our ACL 2023 paper NLPositionality: Characterizing Design Biases of Datasets and Models with Sebastin, Jenny, Ronan, and Katharina!
July 2023: Excited to travel to ACL 2023 in Toronto along with my mentees and PhD students! I'll be giving a keynote at the Workshop on Online Abuse and Harms on The Pivotal Role of Social Context in Toxic Language Detection on Thursday at 11:45am (Pier 7 & 8).
June 2023: Super excited that our paper Queer In AI: A Case Study in Community-Led Participatory AI won Best Paper at FAccT 2023!
May 2023: Really excited to unveil the camera-ready versions of our ACL papers: (1) Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts, (2) NLPositionality: Characterizing Design Biases of Datasets and Models, (3) From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models, (4) COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements, and our demo (5) Riveter: Measuring Power and Social Dynamics Between Entities.
April 2023: Given recent discussion around ChatGPT/GPT-4 and neural ToM, we updated our arXiv paper to quantitatively measure ToM abilities in these new closed-source OpenAI models. TL;DR: they still don't have ToM. See Appendix D for the new results.
March 2023: Super excited to have our EMNLP 2022 Neural ToM paper covered by the New York Times, and our EMNLP 2022 Prosocial Dialogues work covered by BBC Science Focus!
January 2023: Been working really hard at teaching my first class (with my wonderful co-instructor Emma Strubell) on Computational Ethics.
December 2022: I am attending EMNLP 2022, where I will be presenting our Neural ToM paper and Hyunwoo Kim will be presenting our Prosocial Dialog paper.
November 2022: PhD recruiting info: I will likely be taking at most one student this year, likely to work in the areas of social biases, content moderation, and fairness/ethics/justice in AI/NLP. If you want to work with me, I encourage y'all to apply to CMU directly instead of emailing me. Please see more information here.
November 2022: Excited to give a talk at the Minnesota NLP seminar, at Amazon, and at the MIT Media Lab: Toward Prosocial NLP: Reasoning About And Responding to Toxicity in Language.
October 2022: Two papers accepted to EMNLP 2022! "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs" and "ProsocialDialog: A Prosocial Backbone for Conversational Agents".
October 2022: I'm attending Text as Data (TADA 2022) in New York City, where my AI2 intern Julia Mendelsohn will be presenting our work on NLP and dogwhistles.
October 2022: Super excited to have my first PNAS paper accepted: "Quantifying the narrative flow of imagined versus autobiographical stories", out soon!
September 2022: Excited to have my first NeurIPS paper accepted, and as an oral presentation too: "Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment."
August 2022: I moved to Pittsburgh to officially start at CMU's LTI department as an assistant professor.
July 2022: I'll be attending NAACL and giving a talk about Annotators with Attitudes during session 5A: "Ethics, Bias, Fairness 1", 14:15–15:45 PST on Tuesday, July 12.
April 2022: Giving a keynote talk on my research at the UserNLP: User-centered Natural Language Processing Workshop, co-located with the WebConf 2022! Video coming soon.
April 2022: I gave a talk at UPenn's Computational Linguistics Lunch (CLunch) on Detecting and Rewriting Social Biases in Language.
April 2022: Excited that we have two papers accepted to NAACL 2022 in Seattle: our preprint on annotator variation in toxicity labeling, Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection, and our new work on steering agents to do the "right thing" in text games with reinforcement learning, Aligning to Social Norms and Values in Interactive Narratives.
February 2022: Got two papers accepted to ACL 2022 in Dublin: our paper on generating hate speech datasets with GPT-3, TOXIGEN: Controlling Language Models to Generate Implied and Adversarial Toxicity, and our paper on distilling reactions to headlines to combat misinformation, Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines.
February 2022: I gave an invited talk at UIUC's Responsible Data Science seminar on my research.
February 2022: I gave guest lectures on Detecting and Rewriting Social Biases in Language at Stanford's Deep Learning for NLP course (CS224N) and at LTI's Computational Ethics course (CS 11-830), and a guest lecture on Positive AI with Social Commonsense Models in UBC's commonsense reasoning course.
December 2021: I will likely be taking students this coming PhD application cycle. If you're interested in working with me on social commonsense, social biases in language, or ethics in AI, please apply to CMU's LTI.
July 2021: I successfully defended my PhD thesis, titled Positive AI with Social Commonsense Models (read the thesis here, or watch the recording here). Thanks to my advisors, committee, and everyone who attended!
May 2021: I will be joining CMU's LTI department as an assistant professor in Fall 2022. If you wish to work with me, see the "contact" page. Before starting there, I will be a postdoc at AI2 on project Mosaic starting Fall 2021.
January 2021: ...started this list, which I probably should have done sooner.