Updates, most recent first:

July 2024 📄🔔: Our paper Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits was accepted to AIES!!

July 2024 πŸβš–οΈ: We have two workshops accepted to NeurIPS 2024: Socially Responsible Language Modelling Research (SoLaR) and Pluralistic Alignment workshops! Stay tuned for the CFPs!

July 2024 📄🔔: Our paper PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models was accepted to COLM!! We'll see you all in Philly!

June 2024 📚🎀: Really happy to participate and give a talk on Developing Computational Analyses of the Social Aspects of Narratives at the Princeton Workshop for Narrative Possibilities!

May 2024 🎻🌺: My student Jimin Mun will be presenting our paper Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate at CHI 2024, and Xuhui Zhou will be presenting our paper SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents at ICLR 2024!

May 2024 πŸ€–πŸ‘¨πŸΌβ€πŸ«: Excited to talk about social agents and Sotopia at the CMU Agent Workshop 2024!

April 2024 πŸπŸ‘¨πŸΌβ€πŸ«: Honored to be giving a talk at UNC Chapel Hill Symposium on AI and Society, on the Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs, and excited about all the interesting discussions around these topics!

March 2024 📄🇲🇹: Excited for Natalie Shapira to present our paper Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models at EACL 2024 in Malta!

March 2024 πŸ“„πŸοΈ: Excited to unveil the camera-ready versions of ICLR and CHI accepted papers: SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents, Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory, and Leftover-Lunch: Advantage-based Offline Reinforcement Learning for Language Models at ICLR 2024, and Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate at CHI 2024

January 2024 πŸ‘¨πŸΌβ€πŸ«βœ¨: I've been prepping and polishing slides for my class 11-830 Ethics, Social Biases, and Positive Impact in Language Technologies which I'll be teaching alone this semester!

December 2023 πŸ’¬πŸ†: Super excited that we won Outstanding Paper Award at EMNLP 2023 for our paper SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization!!!!

November 2023 📰🦁: Excited to unveil the camera-ready versions of our EMNLP papers! (1) "Don't Take This Out of Context!" On the Need for Contextual Models and Evaluations for Stylistic Rewriting, (2) SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization, (3) FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions, (4) Modeling Empathic Similarity in Personal Narratives, (5) BiasX: "Thinking Slow" in Toxic Language Annotation with Explanations of Implied Social Biases, and (6) Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language.

August 2023 πŸ‘¨πŸΌβ€πŸ«: Year two of being a professor has started! I'm excited about this coming year, and teaching the Data Science Seminar!

August 2023 🎶🗽: I was invited to give a remote talk about The Pivotal Role of Social Context in Toxic Language Detection to Spotify's Ethical AI team!

July 2023 💻🌺: Excited to give a (virtual) keynote talk at the first Workshop on Theory of Mind at ICML 2023: Towards Socially Aware AI with Pragmatic Competence!

July 2023 πŸ³οΈβ€πŸŒˆπŸ†: Extremely excited to share that we won an Outstanding Paper award for our ACL 2023 paper NLPositionality: Characterizing Design Biases of Datasets and Models with Sebastin, Jenny, Ronan, and Katharina!

July 2023 ✈🇨🇦: Excited to travel to ACL 2023 in Toronto along with my mentees and PhD students! I'll be giving a keynote at the Workshop on Online Abuse and Harms on The Pivotal Role of Social Context in Toxic Language Detection on Thursday at 11:45am (Pier 7 & 8).

June 2023 πŸ³οΈβ€πŸŒˆπŸ†: Super excited that our paper Queer In AI: A Case Study in Community-Led Participatory AI won Best Paper at FAccT 2023!

May 2023 📰🍁: Really excited to unveil the camera-ready versions of our ACL papers: (1) Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts, (2) NLPositionality: Characterizing Design Biases of Datasets and Models, (3) From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models, (4) COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements, and our demo (5) Riveter: Measuring Power and Social Dynamics Between Entities.

April 2023 🧠🤖: Given recent discussion around ChatGPT/GPT-4 and neural ToM, we updated our arXiv paper to quantitatively measure ToM abilities in these new closed-source OpenAI models. TLDR; they still don't have ToM. See Appendix D for new results.

March 2023 🗞📰: Super excited to have our EMNLP 2022 Neural ToM paper covered by the New York Times, and our EMNLP 2022 Prosocial Dialogues work covered by the BBC Science Focus!

January 2023 πŸ‘¨πŸΌβ€πŸ«: Been working really hard at teaching my first class (with my wonderful co-instructor Emma Strubell) on Computational Ethics.

December 2022 ✈🇦🇪: I am attending EMNLP 2022, where I will be presenting our Neural ToM paper and Hyunwoo Kim will be presenting our Prosocial Dialog paper.

November 2022 βœ‰οΈπŸ§‘β€πŸŽ“: PhD recruiting info: I will likely be taking at most one student this year, likely to work in the areas of social biases, content moderation, and fairness/ethics/justice in AI/NLP. If you want to work with me, I encourage y'all to apply to CMU directly instead of emailing me. Please see more information here.

November 2022 πŸ‘¨πŸΌβ€πŸ«: Excited to give a talk at the Minnesota NLP seminar, at Amazon, and at the MIT Media lab: Toward Prosocial NLP: Reasoning About And Responding to Toxicity in Language.

October 2022 💭👥: Two papers accepted to 🇦🇪 EMNLP 2022 🇦🇪! "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs" 🤖💭 and "ProsocialDialog: A Prosocial Backbone for Conversational Agents" 🗣💬.

October 2022 ✈🗽: I'm attending Text as Data (TADA2022) in New York City, where my AI2 intern Julia Mendelsohn will be presenting our work on NLP and dogwhistles.

October 2022 📄🧠: Super excited to have my first PNAS paper accepted: "Quantifying the narrative flow of imagined versus autobiographical stories" out soon!

September 2022 📄⚖: Excited to have my first NeurIPS paper accepted, and as an oral presentation too, called "Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment."

August 2022 βœˆπŸ™: I moved to Pittsburgh to officially start at CMU's LTI department as an assistant professorπŸ‘¨πŸΌβ€πŸ«. ‍

July 2022 πŸ‘¨πŸΌβ€πŸ«: I'll be attending NAACL and giving a talk about Annotators with Attitudes during session 5A: "Ethics, Bias, Fairness 1" between 14:15 – 15:45 PST Tuesday July 12.

April 2022: Giving a keynote talk at the UserNLP: User-centered Natural Language Processing Workshop co-located with the WebConf 2022 on my research! Video coming soon.

April 2022 πŸ‘¨πŸΌβ€πŸ«: I gave a talk at UPenn's Computational Linguistics Lunch (CLunch) on Detecting and Rewriting Social Biases in Language.

April 2022 📄: Excited that we have two papers accepted to NAACL 2022 in ☔ Seattle 🏔: our preprint on annotator variation in toxicity labelling, Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection, and our new work on steering agents to do the "right thing" in text games with reinforcement learning, Aligning to Social Norms and Values in Interactive Narratives!

February 2022 📄: Got two papers accepted to ACL 2022 in 🍀 Dublin 🍀: our paper on generating hate speech datasets with GPT-3, TOXIGEN: Controlling Language Models to Generate Implied and Adversarial Toxicity, and our paper on distilling reactions to headlines to combat misinformation, Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines!

February 2022 πŸ‘¨πŸΌβ€πŸ«: I gave an invited talk at UIUC's Responsible Data Science seminar on my research.

February 2022 πŸ‘¨πŸΌβ€πŸ«: I gave guest lectures on Detecting and Rewriting Social Biases in Language at Stanford's Deep Learning for NLP course (CS224N) and at LTI's Computational Ethics course (CS 11-830), and a guest lecture on Positive AI with Social Commonsense Models in UBC's commonsense reasoning course.

December 2021 πŸ§‘β€πŸŽ“: I will likely be taking students this coming PhD application cycle. If you're interested in working with me on social commonsense, social biases in language, or ethics in AI, please apply to CMU's LTI.

July 2021 πŸ‘¨πŸΌβ€πŸŽ“: I successfully defended my PhD thesis titled Positive AI with Social Commonsense Models (read the thesis here, or watch the recording here). Thanks to my advisors, committee, and everyone who attended!

May 2021 🥳: I will be joining CMU's LTI department as an assistant professor 👨🏼‍🏫 in Fall 2022. If you wish to work with me, see the "contact" page. Before starting there, I will be a postdoc at AI2 on project Mosaic 👨🏼‍🔬 starting Fall 2021.

January 2021 📰: ...started this list 😁 which I probably should have done sooner 😅