Updates, most recent first:

May 2023 πŸ“°πŸ: Really excited to unveil the camera-ready versions of our ACL papers: Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts, NLPositionality: Characterizing Design Biases of Datasets and Models, From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models, COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements, and our demo Riveter: Measuring Power and Social Dynamics Between Entities.

April 2023 πŸ§ πŸ€–: Given recent discussion around ChatGPT/GPT-4 and neural ToM, we updated our arXiv paper to quantitatively measure ToM abilities in these new closed-source OpenAI models. TL;DR: they still don't have ToM. See Appendix D for new results.

March 2023 πŸ—žπŸ“°: Super excited to have our EMNLP 2022 Neural ToM paper covered by the New York Times, and our EMNLP 2022 Prosocial Dialogues work covered by the BBC Science Focus!

January 2023 πŸ‘¨πŸΌβ€πŸ«: Been working really hard at teaching my first class (with my wonderful co-instructor Emma Strubell) on Computational Ethics.

December 2022 βœˆπŸ‡¦πŸ‡ͺ: I am attending EMNLP 2022, where I will be presenting our Neural ToM paper and Hyunwoo Kim will be presenting our Prosocial Dialog paper.

November 2022 βœ‰οΈπŸ§‘β€πŸŽ“: PhD recruiting info: I will likely be taking at most one student this year, likely to work in the areas of social biases, content moderation, and fairness/ethics/justice in AI/NLP. If you want to work with me, I encourage y'all to apply to CMU directly instead of emailing me. Please see more information here.

November 2022 πŸ‘¨πŸΌβ€πŸ«: Excited to give a talk at the Minnesota NLP seminar, at Amazon, and at the MIT Media lab: Toward Prosocial NLP: Reasoning About And Responding to Toxicity in Language.

October 2022 πŸ’­πŸ‘₯: Two papers accepted to πŸ‡¦πŸ‡ͺ EMNLP 2022 πŸ‡¦πŸ‡ͺ! "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs" πŸ€–πŸ’­ and "ProsocialDialog: A Prosocial Backbone for Conversational Agents" πŸ—£πŸ’¬.

October 2022 βœˆπŸ—½: I'm attending Text as Data (TADA2022) in New York City, where my AI2 intern Julia Mendelsohn will be presenting our work on NLP and dogwhistles.

October 2022 πŸ“„πŸ§ : Super excited to have my first PNAS paper accepted: "Quantifying the narrative flow of imagined versus autobiographical stories" out soon!

September 2022 πŸ“„βš–: Excited to have my first NeurIPS paper accepted, and as an oral presentation too, called "Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment."

August 2022 βœˆπŸ™: I moved to Pittsburgh to officially start at CMU's LTI department as an assistant professorπŸ‘¨πŸΌβ€πŸ«. ‍

July 2022 πŸ‘¨πŸΌβ€πŸ«: I'll be attending NAACL and giving a talk about Annotators with Attitudes during session 5A: "Ethics, Bias, Fairness 1" between 14:15 – 15:45 PST Tuesday July 12.

April 2022: Giving a keynote talk on my research at the UserNLP: User-centered Natural Language Processing Workshop, co-located with the WebConf 2022! Video coming soon.

April 2022 πŸ‘¨πŸΌβ€πŸ«: I gave a talk at UPenn's Computational Linguistics Lunch (CLunch) on Detecting and Rewriting Social Biases in Language.

April 2022 πŸ“„: Excited that we have two papers accepted to NAACL 2022 in β˜” Seattle πŸ”: our preprint on annotator variation in toxicity labeling, Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection, and our new work on steering agents to do the "right thing" in text games with reinforcement learning, Aligning to Social Norms and Values in Interactive Narratives.

February 2022 πŸ“„: Got two papers accepted to ACL 2022 in πŸ€ Dublin πŸ€: our paper on generating hate speech datasets with GPT-3, TOXIGEN: Controlling Language Models to Generate Implied and Adversarial Toxicity, and our paper on distilling reactions to headlines to combat misinformation, Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines.

February 2022 πŸ‘¨πŸΌβ€πŸ«: I gave an invited talk at UIUC's Responsible Data Science seminar on my research.

February 2022 πŸ‘¨πŸΌβ€πŸ«: I gave guest lectures on Detecting and Rewriting Social Biases in Language at Stanford's Deep Learning for NLP course (CS224N) and at LTI's Computational Ethics course (CS 11-830), and a guest lecture on Positive AI with Social Commonsense Models in UBC's commonsense reasoning course.

December 2021 πŸ§‘β€πŸŽ“: I will likely be taking students this coming PhD application cycle. If you're interested in working with me on social commonsense, social biases in language, or ethics in AI, please apply to CMU's LTI.

July 2021 πŸ‘¨πŸΌβ€πŸŽ“: I successfully defended my PhD thesis titled Positive AI with Social Commonsense Models (read the thesis here, or watch the recording here). Thanks to my advisors, committee, and everyone who attended!

May 2021 πŸ₯³: I will be joining CMU's LTI department as an assistant professor πŸ‘¨πŸΌβ€πŸ« in Fall 2022. If you wish to work with me, see the "contact" page. Before starting there, I will be a postdoc at AI2 on project Mosaic πŸ‘¨πŸΌβ€πŸ”¬ starting Fall 2021.

January 2021 πŸ“°: ...started this list 😁 which I probably should have done sooner πŸ˜…