December 2022 ✈🇦🇪: I will be attending EMNLP 2022, where I will be presenting our Neural ToM paper and Hyunwoo Kim will be presenting our Prosocial Dialog paper.
November 2022 ✉️🧑🎓: PhD recruiting info: I will likely take at most one student this year, to work in the areas of social biases, content moderation, and fairness/ethics/justice in AI/NLP. If you want to work with me, I encourage y'all to apply to CMU directly instead of emailing me. Please see more information here.
November 2022 👨🏼🏫: Excited to give a talk at the Minnesota NLP seminar, at Amazon, and at the MIT Media Lab: Toward Prosocial NLP: Reasoning About And Responding to Toxicity in Language
October 2022 💭👥: Two papers accepted to 🇦🇪 EMNLP 2022 🇦🇪! "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs" 🤖💭 and "ProsocialDialog: A Prosocial Backbone for Conversational Agents" 🗣💬.
October 2022 📄🧠: Super excited to have my first PNAS paper accepted: "Quantifying the narrative flow of imagined versus autobiographical stories" out soon!
September 2022 📄⚖: Excited to have my first NeurIPS paper accepted, and as an oral presentation too, called "Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment."
August 2022 ✈🏙: I moved to Pittsburgh to officially start at CMU's LTI department as an assistant professor 👨🏼🏫.
July 2022 👨🏼🏫: I'll be attending NAACL and giving a talk about Annotators with Attitudes during session 5A: "Ethics, Bias, Fairness 1" between 14:15 – 15:45 PDT on Tuesday, July 12.
April 2022: Giving a keynote talk on my research at the UserNLP: User-centered Natural Language Processing Workshop, co-located with The Web Conference 2022! Video coming soon.
April 2022 👨🏼🏫: I gave a talk at UPenn's Computational Linguistics Lunch (CLunch) on Detecting and Rewriting Social Biases in Language.
April 2022 📄: Excited that we have two papers accepted to NAACL 2022 in ☔ Seattle 🏔: our preprint on annotator variation in toxicity labelling: Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection, and our new work on steering agents to do the "right thing" in text games with reinforcement learning: Aligning to Social Norms and Values in Interactive Narratives.
February 2022 📄: Got two papers accepted to ACL 2022 in 🍀 Dublin 🍀: our paper on generating hate speech datasets with GPT-3: TOXIGEN: Controlling Language Models to Generate Implied and Adversarial Toxicity, and our paper on distilling reactions to headlines to combat misinformation: Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines.
February 2022 👨🏼🏫: I gave an invited talk at UIUC's Responsible Data Science seminar on my research.
February 2022 👨🏼🏫: I gave guest lectures on Detecting and Rewriting Social Biases in Language at Stanford's Deep Learning for NLP course (CS224N) and at LTI's Computational Ethics course (CS 11-830), and a guest lecture on Positive AI with Social Commonsense Models in UBC's commonsense reasoning course.
December 2021 🧑🎓: I will likely be taking students this coming PhD application cycle. If you're interested in working with me on social commonsense, social biases in language, or ethics in AI, please apply to CMU's LTI.
July 2021 👨🏼🎓: I successfully defended my PhD thesis titled Positive AI with Social Commonsense Models (read the thesis here, or watch the recording here). Thanks to my advisors, committee, and everyone who attended!
May 2021 🥳: I will be joining CMU's LTI department as an assistant professor 👨🏼🏫 in Fall 2022. If you wish to work with me, see the "contact" page. Before starting there, I will be a postdoc at AI2 on project Mosaic 👨🏼🔬 starting Fall 2021.
January 2021 📰: ...started this list 😁 which I probably should have done sooner 😅