
CS 1699: AI for Good: Ethics and Implications
 

Instructor: Malihe Alikhani

Time and Place: Tuesday and Thursday 9:25-10:40 am, Slack & Zoom

Office hours: Friday 9-10 am & by appointment

Week 1: Perspectives on the Ethics of AI

 

 


Benjamin Kuipers. 2020. Perspectives on Ethics of AI: Computer Science. In Markus Dubber, Frank Pasquale, and Sunit Das (Eds.), Oxford Handbook of Ethics of AI. Oxford University Press.


Welcome!

I'm very excited to welcome you to this new advanced seminar on the ethics of AI. We will examine the ethical and fairness issues raised by AI systems, look at real-world applications and their potential ethical implications, and discuss the philosophical foundations of ethics research alongside state-of-the-art technical approaches. Discussion topics include:

  • Perspectives on the ethics of AI

  • Ethical dilemmas of AI

  • The case against AI

  • Fairness in machine learning

  • Bias in data

  • Ethical machine learning in health

  • NLP as a tool for detecting stereotypes

  • Black-box algorithms and epistemic opacity

  • Fairness in RL

Students are required to demonstrate AI for good in action with a mini-project and to write a critique of current codes of ethics for the machine learning community. In designing this course, I have drawn on related classes such as Emily Bender's Ethics in NLP.

Week 2: Ethical dilemmas of AI 

 

 

 




 

Artificial Intelligence: examples of ethical dilemmas, UNESCO

Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines.


Week 3: Never mind: the case against artificial intelligence

"Understanding, Orientations, and Objectivity", Winograd, 2002


Week 4: Fairness in machine learning (classification)

Barocas, Solon, Moritz Hardt, and Arvind Narayanan. "Fairness in Machine Learning." NIPS Tutorial, 2017, pp. 49-56.
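Since this week centers on observational fairness criteria for classifiers, the following minimal sketch may help ground the reading: it computes two common group metrics, the demographic parity gap and the equal opportunity (true positive rate) gap, on hypothetical labels, predictions, and group membership. It is an illustration, not code from the tutorial.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups.
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

# Hypothetical data: true labels, classifier predictions, and a binary group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # gap in selection rates
print(equal_opportunity_gap(y_true, y_pred, group))  # gap in true-positive rates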


Week 5: Fairness in machine learning (causality)

Barocas, Solon, Moritz Hardt, and Arvind Narayanan. "Fairness in Machine Learning." NIPS Tutorial, 2017, pp. 79-90.


Week 6: Where does the data come from?




 

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford. 2020.  Datasheets for Datasets.  
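Gebru et al.'s datasheet is essentially a structured questionnaire that travels with a dataset. As a rough illustration (the field names paraphrase the paper's section headings and are not an official schema), the answers could be recorded in a small structured object:

from dataclasses import dataclass

@dataclass
class Datasheet:
    # Sections paraphrased from the datasheet questions in Gebru et al.
    motivation: str          # Why was the dataset created, and by whom?
    composition: str         # What do instances represent? Any sensitive attributes?
    collection_process: str  # How was the data gathered? Was consent obtained?
    preprocessing: str       # Cleaning, labeling, or filtering applied to the raw data.
    uses: str                # Intended uses, and uses the creators advise against.
    distribution: str        # How the dataset is shared and under what license.
    maintenance: str         # Who maintains it and how errata are handled.

# Hypothetical datasheet for a scraped social-media corpus.
sheet = Datasheet(
    motivation="Study stereotype language in public posts.",
    composition="English-language posts with usernames removed.",
    collection_process="Public API crawl in 2020; no direct user consent.",
    preprocessing="Deduplication and language filtering.",
    uses="Research on bias detection; not intended for user profiling.",
    distribution="Shared with researchers on request.",
    maintenance="Errata handled by the dataset authors.",
)
print(sheet.collection_process)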


Week 7: Exclusion/discrimination/bias in data





Casey Fiesler and Nicholas Proferes. 2018. "Participant" Perceptions of Twitter Research Ethics. Social Media + Society, 4(1).


Week 8: Sometimes it's unethical *not* to run an experiment

 

 




Chen, Irene Y., et al. "Ethical machine learning in health." arXiv preprint arXiv:2009.10576 (2020).


Week 9: Value sensitive design 





Friedman, B., Hendry, D. G., and Borning, A. (2017). A Survey of Value Sensitive Design Methods. Foundations and Trends in Human-Computer Interaction. Chapter 3.


Week 10: NLP as a tool for detecting stereotypes 

 

 

 

 

 

Elliott Ash, Daniel L. Chen, Arianna Ornaghi. 2020. Stereotypes in High-Stakes Decisions: Evidence from U.S. Circuit Courts. NBER Manuscript.

Gonen, Hila, and Yoav Goldberg. 2019. "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them." Proceedings of NAACL-HLT 2019.
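As background for the Gonen and Goldberg reading, here is a minimal sketch of the kind of measurement their paper critiques: projecting words onto a "gender direction" in an embedding space. The toy vectors are invented purely for illustration; in practice one would load pretrained embeddings.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings", invented for illustration only.
emb = {
    "he":       np.array([ 1.0, 0.2, 0.0, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.0, 0.1]),
    "engineer": np.array([ 0.6, 0.8, 0.1, 0.0]),
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.0]),
}

# A crude gender direction: the difference between "he" and "she".
gender_direction = emb["he"] - emb["she"]

# The projection of profession words onto that direction is the kind of
# bias score that debiasing methods try to push toward zero.
for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], gender_direction), 3))

Gonen and Goldberg's point is that even after these projections are driven to zero by debiasing, previously biased words remain clustered with each other, so the underlying bias is still recoverable.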


Week 11: Ethical issues in chatbots

Curry, Amanda Cercas, and Verena Rieser. 2018. "#MeToo Alexa: How Conversational Systems Respond to Sexual Harassment." In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, pp. 7-14.


Week 12: Black-box algorithms and epistemic opacity

Carabantes, Manuel. "Black-box artificial intelligence: an epistemological and critical analysis." AI & SOCIETY (2019): 1-9.
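To make the opacity discussion concrete, one common way to probe a model purely from the outside, without access to its internals, is permutation importance: shuffle one input feature at a time and measure how much predictive performance drops. This is a generic sketch with a made-up stand-in black box, not something taken from the Carabantes paper.

import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: we only observe its predictions.
    return (2.0 * X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=20):
    # Average drop in accuracy when each feature is shuffled.
    base = (predict(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops[j] += base - (predict(Xp) == y).mean()
    return drops / n_repeats

X = rng.normal(size=(500, 3))
y = black_box(X)  # labels generated by the opaque model itself
print(permutation_importance(black_box, X, y))  # feature 0 should matter most, feature 2 not at all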


Week 13: Fairness in reinforcement learning



Jabbari, Shahin, et al. "Fairness in reinforcement learning." International Conference on Machine Learning. PMLR, 2017.
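Jabbari et al.'s fairness requirement, roughly, is that the learner should never favor one action over another unless the favored action's long-term value is at least as high; under uncertainty this becomes "do not favor an action you are confident is worse." The sketch below is one loose reading of that constraint for a bandit-style setting using confidence intervals around value estimates; it is an illustration, not the paper's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

def fair_action_probs(q_hat, conf):
    # Keep every action whose upper confidence bound reaches the best lower
    # confidence bound (i.e., never favor an action that is confidently worse),
    # and play the remaining actions with equal probability.
    lower, upper = q_hat - conf, q_hat + conf
    plausible = upper >= lower.max()
    return plausible / plausible.sum()

# Hypothetical value estimates and confidence radii for three actions.
q_hat = np.array([0.50, 0.45, 0.10])
conf  = np.array([0.10, 0.10, 0.05])

probs = fair_action_probs(q_hat, conf)
print(probs)  # actions 0 and 1 overlap and share probability; action 2 is excluded
action = rng.choice(len(q_hat), p=probs)  # sample an action under the constraint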


Week 14: Student Presentations


Course Requirements and Grading
 

Research paper presentations

Students will be required to give one paired research paper presentation, delivered by a team of two students. The class has a Slack workspace that makes it easy to get in touch with your classmates. We will read one paper every week.

A paper presentation should be about 20 minutes long and will be followed by 15 minutes of class discussion led by the presenters. The presentation should first summarize the content of the paper, clearly addressing:

1) What problem or task does the paper study?

2) What is the motivation for this problem or task, i.e., why is it important?

3) What novel algorithm or approach does the paper propose for this problem?

4) How is the method evaluated, i.e., what is the experimental methodology, what data is used, and what performance metrics are reported?

5) What are the basic results and conclusions of the paper?

The presentation should then conclude with the presenters' critique of the paper, including:

1) Are there any reasons to question the motivation and importance of the problem studied?

2) Are there any limitations or weaknesses in the proposed approach?

3) Are there any limitations or weaknesses in the evaluation methodology and/or results?

4) What are some promising future research directions following up on this work?

The two presenters can decide how to divide the work of the presentation between themselves.

Paper critiques

Each student should submit three paper critiques on Slack. Pick three of the papers discussed in class and write a half-page critique of each. Use the critique questions listed above for the oral presentations to guide your discussion, but there is no need to address all of them in a given critique. Submit a nicely formatted PDF in 11 pt font.

Programming assignment

Two programming assignments are posted on Canvas. You are required to turn in one assignment (of your choice) by April 15.

Final research project

Final projects should ideally be done by teams of two students. Projects done by one or three students are possible on rare occasions with the prior approval of the instructor. Each student is responsible for finding a partner for their final project. Feel free to post a message about your interests on Slack to find a partner who shares them.

Submit a one-page project proposal by February 16 that briefly covers the first four questions for research paper presentations (see above). I will provide feedback on these proposals, but they will not be graded. Students are encouraged to discuss project proposals with me during office hours before submitting them. Groups are required to give a 10-minute presentation on their progress in the sixth week of the class. During the last two weeks of class, each team will give a 15-minute presentation on the current state of their project and lead a 10-minute discussion of it. Use the same format as the paired research paper presentations (see above), though you may have only preliminary results to present at that time. Submit the code to the class GitHub repo by midnight on April 23.

Final grade

The final grade will be computed as follows:

30% Class participation

5% Paired research paper presentation

5% Midterm

20% Programming assignment

35% Final project 

5% Critique of current codes of ethics for the machine learning community
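For concreteness, the weighted final grade could be computed as in the sketch below; the component scores are hypothetical, and only the weights come from the breakdown above.

# Weights from the grading breakdown above; the example scores (0-100) are hypothetical.
weights = {
    "class_participation": 0.30,
    "paper_presentation": 0.05,
    "midterm": 0.05,
    "programming_assignment": 0.20,
    "final_project": 0.35,
    "ethics_critique": 0.05,
}
scores = {
    "class_participation": 90,
    "paper_presentation": 85,
    "midterm": 80,
    "programming_assignment": 88,
    "final_project": 92,
    "ethics_critique": 95,
}
final_grade = sum(weights[k] * scores[k] for k in weights)
print(round(final_grade, 1))  # 89.8 for these hypothetical scores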

Academic integrity
All assignment submissions must be the sole work of each individual student. Students may not read or copy another student's solutions or share their own solutions with other students. Students may not review solutions from students who have taken the course in previous years. Submissions that are substantively similar will be considered cheating by all students involved, and as such, students must be mindful not to post their code publicly. The use of books and online resources is allowed, but must be credited in submissions, and material may not be copied verbatim. Any use of electronics or other resources during an examination will be considered cheating.

If you have any doubts about whether a particular action may be construed as cheating, ask the instructor for clarification before you do it. The instructor will make the final determination of what is considered cheating.

Cheating in this course will result in a grade of F for the course and may be subject to further disciplinary action.

Using an open-source codebase is acceptable, but you must explicitly cite the source and follow the owner's citation guidelines if they exist. For any writing involved in the project, plagiarism is strictly prohibited. If you are unclear whether your work would be considered plagiarism, ask the instructor before submitting or presenting it.

Students with disabilities

If you have a disability for which you are or may be requesting an accommodation, you are encouraged to contact both your instructor and Disability Resources and Services, 216 William Pitt Union, 412-648-7890 or 412-383-7355 (TTY) as early as possible in the term. DRS will verify your disability and determine reasonable accommodations for this course. More info at www.drs.pitt.edu.

Audio/video recordings

To ensure the free and open discussion of ideas, students may not record classroom lectures, discussions, and/or activities without the advance written permission of the instructor, and any such recording properly approved in advance can be used solely for the student's own private use. Since this is a seminar class and meetings are all online, students are required to use their cameras during the class.

 

Copyrighted materials

All material provided through this web site is subject to copyright. This applies to class/recitation notes, slides, assignments, solutions, project descriptions, etc. You are allowed (and expected!) to use all the provided material for personal use. However, you are strictly prohibited from sharing the material with others in general and from posting the material on the Web or other file sharing venues in particular.

Diversity and Inclusion

The University of Pittsburgh does not tolerate any form of discrimination, harassment, or retaliation based on disability, race, color, religion, national origin, ancestry, genetic information, marital status, familial status, sex, age, sexual orientation, veteran status, gender identity, or other factors as stated in the University's Title IX policy. The University is committed to taking prompt action to end a hostile environment that interferes with the University's mission. For more information about policies, procedures, and practices, see: http://diversity.pitt.edu/affirmative-action/policies-procedures-and-practices. I ask that everyone in the class strive to help ensure that other members of this class can learn in a supportive and respectful environment. If any of the aforementioned issues arise, please contact the Title IX Coordinator by calling 412-648-7860 or e-mailing titleixcoordinator@pitt.edu. Reports can also be filed online: https://www.diversity.pitt.edu/make-report/report-form. You may also choose to report to a faculty or staff member; they are required to communicate this to the University's Office of Diversity and Inclusion. If you wish to maintain complete confidentiality, you may also contact the University Counseling Center (412-648-7930).

Gender Inclusive Language Statement (from Pitt GSWS)

Language is gender-inclusive and non-sexist when we use words that affirm and respect how people describe, express and experience their gender. Just as sexist language excludes women’s experiences, non-gender-inclusive language excludes the experiences of individuals whose identities may not fit the gender binary, and/or who may not identify with the sex they were assigned at birth. Identities including trans, intersex, and genderqueer reflect personal descriptions, expressions, and experiences. Gender-inclusive/non-sexist language acknowledges people of any gender (for example, first-year student versus freshman, chair versus chairman, humankind versus mankind, etc.). It also affirms non-binary gender identifications and recognizes the difference between biological sex and gender expression. Students may share their preferred pronouns and names, and these gender identities and gender expressions should be honored.
