# 11-830 Computational Ethics in NLP
## **Spring 2023**
- **Time**: 11:00-12:20 Tuesdays & Thursdays
- **Place**: WEH 5403 (Wean Hall)
- **Canvas**: [https://canvas.cmu.edu/courses/32412](https://canvas.cmu.edu/courses/32412) (for discussions, assignments, questions, etc.)
- **Zoom**: see [Canvas](https://canvas.cmu.edu/courses/32412)
- For other concerns (e.g., more personal ones), email the instructors: [11-830-instructors@cs.cmu.edu](mailto:11-830-instructors@cs.cmu.edu)
----
### Summary
As language technologies have become increasingly prevalent, there is a growing awareness that decisions we make about our data, methods, and tools are often tied up with their impact on people and societies.
This course introduces students to real-world applications of language technologies and the potential ethical implications associated with them.
We discuss philosophical foundations of ethical research along with advanced state-of-the-art techniques. Discussion topics include:
- **Philosophical foundations:** what is ethics, history, medical and psychological experiments, IRB and human subjects, ethical decision making.
- **Misrepresentation and bias:** algorithms to identify biases in models and data and adversarial approaches to debiasing.
- **Civility in communication:** techniques to monitor trolling, hate speech, abusive language, cyberbullying, toxic comments.
- **Democracy and the language of manipulation:** approaches to identify propaganda and manipulation in news, to identify fake news, political framing.
- **Privacy & security:** algorithms for demographic inference, personality profiling, and anonymization of demographic and personal traits.
- **NLP for Social Good:** Low-resource NLP, applications for disaster response and monitoring diseases, medical applications, psychological counseling, interfaces for accessibility.
- **Multidisciplinary perspective:** invited lectures from experts in behavioral and social sciences, rhetoric, etc.
----
## Syllabus
The syllabus is subject to change; this page will always have the latest version.
| Week | Date | Theme | Topics | Instructor | Readings | Assignments |
|------|-------|-------------------------------|------------------------------------------------------------|-------------------------------------------------------------------------------|----------|----------------------------------------------|
| 1 | 01/17 | Introduction | Motivation, requirements and overview | Emma | | |
| | 01/19 | Introduction | Project expectations and ideas | Emma & Maarten | | |
| 2 | 01/24 | Foundations & History | Overview: History of ethics, moral theories, etc. | Maarten | | |
| | 01/26 | Foundations & History | *Invited Talk* (virtual) | [Sydney Levine](https://sites.google.com/site/sydneymlevine/) | | Pre-proposals & project teams due Friday 1/27 |
| 3 | 01/31 | Foundations & History | Human subjects research, IRB, crowdsourcing | Maarten | | |
| | 02/02 | Foundations & History | Discussion | Emma & Maarten | | Proposal due Monday 2/06 |
| 4 | 02/07 | Objectivity & Bias | Overview: Stereotypes, prejudice, and discrimination | Maarten | | [HW 1](/11-830-Spring2023/hw1.html) released |
| | 02/09 | Objectivity & Bias | Bias detection, debiasing, perspectivism | Maarten | | |
| 5 | 02/14 | Objectivity & Bias | Discussion | Emma & Maarten | | |
| | 02/16 | Objectivity & Bias | *Invited Talk* | [David Widder](https://davidwidder.me/) | | HW 1 due on Friday 2/17 |
| 6 | 02/21 | Civility & Toxicity | Overview: Online Toxicity, Hate Speech, Content Moderation | Maarten | | [HW 2](/11-830-Spring2023/hw2.html) released |
| | 02/23 | Civility & Toxicity | Toxicity in conversations & detoxifying large LMs | Maarten | | |
| 7 | 02/28 | Civility & Toxicity | Detoxifying LMs & toxicity in conversations | Maarten | | |
| | 03/02 | Civility & Toxicity | Discussion | Emma & Maarten | | HW 2 due on Friday 3/03 |
| 8 | 03/07 | No class - spring break | | | | |
| | 03/09 | No class - spring break | | | | |
| 9 | 03/14 | Misinformation & Manipulation | *Invited Talk* (virtual) | [Saadia Gabriel](https://homes.cs.washington.edu/~skgabrie/) | | |
| | 03/16 | Misinformation & Manipulation | Overview: Fake news | Emma | | |
| 10 | 03/21 | Midterm project check-ins | ---- | Emma & Maarten | | Midterm project check-ins |
| | 03/23 | Transparency & Open Science | *Invited Talk* | [Jesse Dodge](https://jessedodge.github.io/) | | |
| 11 | 03/28 | Midterm project check-ins | ---- | Emma & Maarten | | [HW 3](/11-830-Spring2023/hw3.html) released |
| | 03/30 | Privacy, Profiling, Security | Overview: Privacy and profiling | Emma | | |
| 12 | 04/04 | Privacy, Profiling, Security | *Invited Talk* (virtual) | [Oluwaseyi Feyisetan](http://seyi.feyisetan.com/) | | |
| | 04/06 | NLP for Social Good | Overview: Challenges and pitfalls of NLP for social good | Emma | | HW 3 due 4/7 |
| 13 | 04/11 | NLP for Social Good | *Invited Talk* | [Su Lin Blodgett](https://sblodgett.github.io/) | | |
| | 04/13 | No class -- Spring Carnival | | | | |
| 14 | 04/18 | NLP for Social Good | Energy considerations of AI & Climate Change | Emma | | |
| | 04/20 | NLP for Social Good | Discussion | Emma & Maarten | | |
| 15 | 04/25 | Final project presentations | | Emma & Maarten | | Final project presentations |
| | 04/27 | Final project presentations | | Emma & Maarten | | Final project presentations |
| 16 | 05/06 | ---- | ---- | ---- | ---- | Final project report due (Saturday May 6th) |
----
### Invited Speakers
Sydney Levine - Formal Models of Human Moral Flexibility
One of the most remarkable things about the human moral mind is its flexibility: we can make moral judgments about cases we have never seen before. Yet, on its face, morality often seems like a highly rigid system of clearly defined rules. Indeed, the past few decades of research in moral psychology have revealed that human moral judgment often depends on rules. But sometimes, it is morally appropriate to break the rules. And sometimes, new rules need to be created. The field of moral psychology is just now beginning to explore and understand this kind of flexibility. Meanwhile, the flexibility of the human moral mind poses a challenge for AI engineers. Current tools for building AI systems fall short of capturing moral flexibility and thus struggle to predict and produce human-like moral judgments in novel cases that the system hasn’t been trained on. I will present a series of experiments and models (inspired by theories from moral philosophy) that demonstrate and capture the human capacity for rule making and breaking. I propose that AI systems would benefit from formal models of human moral flexibility.
Sydney Levine is a Research Scientist at the Allen Institute for AI. She studies the cognitive science of moral judgment and thinks about how that can help us build artificial intelligence systems that can interpret, predict, and produce human-like moral decisions.
David Gray Widder - Situating "AI for Social Good" and "Ethical AI" in our Future Careers and at our University
In the first part of this guest lecture, I’ll review some of my favorite concepts for thinking critically about technology, and then present my own (new!) research in which we survey software engineers about ethical concerns they’ve faced in their own work and explain what made those concerns hard to resolve. I’ll show how their ability to do anything about their concerns is limited by factors such as personal precarity and their organization's incentives, such as profit.
We’ll then put some of this into practice: we’ll identify and discuss our own “ethical red lines”: things that, if we were asked to do them in our future careers, we would refuse to do. We’ll conclude with a discussion of where we are: how we are situated at Carnegie Mellon, the possible ethical concerns we might encounter here, and the agency and opportunities we might have to face them.
David Gray Widder (he/him) studies how people creating “Artificial Intelligence” systems think about the downstream harms their systems make possible. He is a Doctoral Student in the School of Computer Science at Carnegie Mellon University, and previously worked at Intel Labs, Microsoft Research, and NASA’s Jet Propulsion Laboratory. He was born in Tillamook, Oregon and raised in Berlin and Singapore. He maintains a conceptual-realist artistic practice, advocates against police terror and pervasive surveillance, and enjoys distance running.
Jesse Dodge (he/him) is a Research Scientist at the Allen Institute for AI, on the AllenNLP team, working on natural language processing and machine learning. He is interested in the science of AI, and he works on reproducibility and efficiency in AI research. His research has highlighted the growing computational cost of AI systems, including the environmental impact of AI and inequality in the research community. He has worked extensively on improving transparency in AI research, including open sourcing and documenting datasets, data governance, and measuring bias in data. He has also worked on developing efficient methods, including model compression and improving efficiency of training large language models. His PhD is from the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. He created the NLP Reproducibility Checklist, which has been used by five main NLP conferences, including EMNLP, NAACL, and ACL, totaling more than 10,000 submissions, and he has been an organizer for the ML Reproducibility Challenge since 2020. His research has won awards including a Best Student Paper at NAACL 2015 and a ten-year Test of Time award at ACL 2022, and is regularly covered by the press, including by outlets like The New York Times, Nature, MIT Tech Review, Wired, and others.
Saadia Gabriel - Socially Responsible and Factual Reasoning for Equitable AI Systems
Understanding the implications underlying a text is critical to assessing its impact. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning, for example to infer that the statement "Epidemics and cases of disease in the 21st century are 'staged'" relates to unfounded conspiracy theories. In this talk, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics lead to inequitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures using insights from linguistics.
In the first part of the talk, I describe how adversarial text generation algorithms can be used to improve model robustness. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text. I show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in evaluation of factuality. I conclude with an interdisciplinary study showing how content moderation informed by pragmatics can be used to ensure safe interactions with conversational agents, and my future vision for development of context-aware systems.
Saadia Gabriel is a final-year PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she is advised by Prof. Yejin Choi and Prof. Franziska Roesner. Her work focuses on measuring the factuality and intent of human-written language. Specifically, she is interested in designing generalizable end-to-end modeling frameworks based upon objectives that are directly aligned with the underlying motivations of a task. Two key dimensions of machine reasoning that excite her are social commonsense reasoning and fairness in NLP. Previously, she interned at SRI, in the Mosaic group at AI2, and at MSR.
Large Language Models (LLMs) have been heralded as the next big thing in the field of Natural Language Processing. In this talk we explore privacy questions that have been raised around LLMs, taking a three-part journey: first, we detail some privacy concerns, some of which are common to most generative NLP models (such as the ability to memorize PII) while others are more specific to LLMs (such as the risks associated with hallucination). Next, we take a step back to consider a model for privacy: one that defines what needs to be protected to guarantee privacy, then offers options on how to go about it. Finally, we discuss some technical solutions that might meet these definitions. At the end of the talk, we revisit the risks to probe whether the definitions and solutions considered are indeed sufficient to address them, or whether we are still some way from solving these issues.
Oluwaseyi Feyisetan is a research scientist at Meta working on differential privacy and multiparty computation. Prior to that, he led privacy research at Amazon, working on Alexa. He sits on the research advisory board of the IAPP and is the lead organizer of the PrivateNLP workshop series (WSDM 2020, EMNLP 2020, NAACL 2021).
Su Lin Blodgett - Language, Society, and Language Technologies
Su Lin Blodgett is a researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. Her research focuses on the ethical and social implications of language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. She completed her Ph.D. in computer science at the University of Massachusetts Amherst, where she was supported by the NSF Graduate Research Fellowship, and has been named as one of the 2022 100 Brilliant Women in AI Ethics.
----
### Grading
Grades are based on a combination of homeworks and participation (individual) and a semester-long class project (group). All assignments are due at 11:59pm Eastern on the specified date.
- **Homework assignments.** (3 assignments; 15% each) Each assignment contains a combination of coding, analysis, and discussion. For each assignment, completing the baseline requirements will obtain a passing (B-range) grade. A-range grades can be obtained through completing the open-ended “Advanced Analysis” part of the assignment. Assignments are not necessarily designed to focus on technical solutions, but instead to encourage students to think critically about the course material and understand how to approach ethical problems in NLP, while also allowing for exploration of various methodologies.
- **Project.** (30%) A semester-long 3-person team project (more details below and in class).
- **Participation.** (25%) Classes will include discussions of reading assignments. Students will be expected to read the relevant papers and participate in class discussions. Participation points can also be earned by posting interesting questions and useful answers on Canvas/Slack.
----
### Projects
A major component of this course is a team project. It will be a substantial research effort carried out by each group of students (expected group size = 3; 2-4 is acceptable). You can find some project ideas and resources [here](https://docs.google.com/document/d/1PfUU79IyiQUlbENwthkjbg4pTE-hAhXX9p7PwQLTNhg/edit?usp=sharing). Please use the "Find project groups" discussion in Canvas to find classmates with complementary interests and form groups. There will be a number of project milestones throughout the semester:
- **Pre-proposal:** (2%) Brainstorming phase: pick two or three project ideas and flesh them out in 1-2 paragraphs describing the focus area of each project, along with quick-and-dirty method descriptions and stretch goals. Also, list your team members.
- **Proposal:** (8%) Pick a project: A 2-3 page document ([ACL format](https://github.com/acl-org/acl-style-files)) containing a literature review, concrete problem definition, identification of baseline models, and ideas for final models. Sections should include Introduction, Related Work, Data, Baseline, Proposed Approach. Baselines should be clearly defined but do not need to be implemented yet.
- **Midterm check-ins:** (2%) An in-class presentation of the project and current progress. The presentation should include the problem definition, baseline models and results, and a description of the proposed models.
- **Final Presentations:** (6%) In-class presentations of the project will be held during the last week of classes. See here for [rubric and tips on how to make your final presentation](presentingTips.html).
- **Final Report:** (12%) A final project report will be due the following week. We will use the 8-page ACL Rolling Review [format](https://github.com/acl-org/acl-style-files), [author guidelines](https://aclrollingreview.org/authors), and [Code of Ethics](https://www.aclweb.org/portal/content/acl-code-ethics). See here for [rubric](presentingTips.html).
----
### Discussion format
Some of the lectures will be discussion-based.
To make the discussion more lively and engaging, we will adopt reading roles, inspired by [Alec Jacobson and Colin Raffel's seminar](https://colinraffel.com/blog/role-playing-seminar.html) and Aditi Raghunathan's [15-884 class](https://www.cs.cmu.edu/~aditirag/teaching/15-884F22.html). Each student will play one of the following roles **(updated Feb 2023)**:
- *Reviewer*: who discusses the strengths and weaknesses of the paper and either advocates for the paper to be accepted at a conference (e.g., ACL, EMNLP, FAccT) or argues for its rejection.
- *Archaeologists I & II*: who determine where this paper sits in the context of previous and subsequent work. Keep an eye out for work that contradicts the takeaways of the current paper.
- *Forensic archaeologist*: who finds and reports on at least one older paper cited within the current paper that substantially influenced it, and at least one newer paper that cites the current paper. Do not choose any of the other papers assigned for this week.
- *Community archaeologist*: who chooses two of the other papers assigned for this week and discusses connections or conflicts with respect to the assigned paper. **This role will be responsible for sharing out in class**.
- *Academic researcher*: who proposes potential follow-up projects that are not just based on the current paper, but are only possible due to its existence and success.
- *Visitor from the past*: who is a researcher from the early 2000s. They must discuss how they comprehend the results of the paper, what they like or dislike about the settings and benchmarks considered, and what surprises them most about the presented results.
Discussion grading:
- You will be automatically assigned a role for each discussion class.
- **Before each discussion class**, you will post your talking points on Canvas (due the day before class). These notes should include your role, and 2-4 talking points. Make sure to go into enough depth that others can understand what you're saying (simply adding more shallow talking points will not lead to a better grade).
- **After each discussion class**, you will reply to your notes and briefly discuss one reflection or point that you had not previously thought about. This can be significantly shorter than the initial post.
----
### Policies
**Late policy.** Each student will have 4 total late days that may be used for HW assignments at any point during the semester. Once the 4 days have been used up, you can still submit your assignments late at a 20% penalty (i.e., the maximum grade is 80%), but not after the next assignment is due or after finals week.
*Note: late days may not be used for project benchmarks.*
**Academic honesty.** Homework assignments are to be completed individually. Verbal collaboration on homework assignments is acceptable, as well as re-implementation of relevant algorithms from research papers, but everything you turn in must be your own work, and you must note the names of anyone you collaborated with on each problem and cite resources that you used to learn about the problem. The project is to be completed by a team. You are encouraged to use existing NLP components in your project; you must acknowledge these appropriately in the documentation. Suspected violations of academic integrity rules will be handled in accordance with the [CMU guidelines on collaboration and cheating](http://www.cmu.edu/policies/documents/Cheating.html).
**AI-assisted writing policy.** We will follow the [ACL 2023 policy](https://2023.aclweb.org/blog/ACL-2023-policy/) on the use of AI assistants for writing. This means you may use AI tools for checking your writing and correcting your grammar; any usage beyond that (idea generation, literature review, etc.) is strongly discouraged or forbidden; see the policy for details.
**Accommodations for students with disabilities.** If you have a disability and have an accommodations letter from the Disability Resources office, we encourage you to discuss your accommodations and needs with the instructors as early in the semester as possible. We will work with you to ensure that accommodations are provided as appropriate. If you suspect that you may have a disability and would benefit from accommodations but are not yet registered with the Office of Disability Resources, we encourage you to contact them at [access@andrew.cmu.edu](mailto:access@andrew.cmu.edu).
----
### Note to students
**Take care of yourself!** As a student, you may experience a range of challenges that can interfere with learning, such as strained relationships, increased anxiety, substance use, feeling down, difficulty concentrating and/or lack of motivation. This is normal, and all of us benefit from support during times of struggle. There are many helpful resources available on campus and an important part of a healthy life is learning how to ask for help. Asking for support sooner rather than later is almost always helpful. CMU services are available free to students, and treatment does work. You can learn more about confidential mental health services available on campus through [Counseling and Psychological Services (CaPS)](https://www.cmu.edu/counseling/). Support is always available (24/7) at: 412-268-2922.
**Take care of your classmates and instructors!** In this class, every individual will and must be treated with respect. The ways we are diverse are many and are fundamental to building and maintaining an equitable and inclusive campus community. These include but are not limited to: race, color, national origin, caste, sex, disability (visible or invisible), age, sexual orientation, gender identity, religion, creed, ancestry, belief, veteran status, or genetic information.
Research shows that greater diversity across individuals leads to greater creativity in the group. We at CMU work to promote diversity, equity and inclusion not only because it is necessary for excellence and innovation, but because it is just. Therefore, while we are imperfect, we ask you all to fully commit, both inside and outside of our classrooms, to building and sustaining a campus community that embraces these core values. It is the responsibility of each of us to create a safer and more inclusive environment. Incidents of bias or discrimination, whether intentional or unintentional in their occurrence, contribute to creating an unwelcoming environment for individuals and groups at the university. If you experience or observe unfair or hostile treatment on the basis of identity, we encourage you to speak out for justice and offer support in the moment and/or share your experience using the following resources:
- [Center for Student Diversity and Inclusion](https://www.cmu.edu/student-diversity/): [csdi@andrew.cmu.edu](mailto:csdi@andrew.cmu.edu), (412) 268-2150
- [CMU anonymous reporting hotline](https://secure.ethicspoint.com/domain/media/en/gui/81082/index.html), (844) 587-0793