Fairness in Artificial Intelligence

This module explores fairness in Artificial Intelligence (AI), focusing on identifying and mitigating bias in AI systems. It covers sources of bias, methods for assessing and improving fairness, and the ethical implications of AI-driven decisions for different populations, with the aim of promoting equality and preventing discrimination in AI applications.
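As a concrete illustration of "methods for assessing fairness," the sketch below computes two standard group-fairness metrics from the literature surveyed in the readings: statistical parity difference and equal opportunity difference. The toy predictions and labels are invented for illustration; a real assessment would use a trained model's outputs on held-out data.

```python
# Minimal sketch of two group-fairness metrics: statistical parity
# difference and equal opportunity difference. All data here are toy values.

def positive_rate(preds):
    """Fraction of positive (1) predictions in a list."""
    return sum(preds) / len(preds)

def statistical_parity_difference(preds_a, preds_b):
    """Difference in positive-prediction rates between two groups.
    A value near 0 indicates demographic parity."""
    return positive_rate(preds_a) - positive_rate(preds_b)

def equal_opportunity_difference(preds_a, labels_a, preds_b, labels_b):
    """Difference in true-positive rates between two groups,
    computed only over individuals whose true label is positive."""
    tpr_a = positive_rate([p for p, y in zip(preds_a, labels_a) if y == 1])
    tpr_b = positive_rate([p for p, y in zip(preds_b, labels_b) if y == 1])
    return tpr_a - tpr_b

if __name__ == "__main__":
    # Hypothetical classifier outputs for two demographic groups.
    preds_a, labels_a = [1, 1, 0, 1], [1, 1, 0, 0]
    preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]
    print(statistical_parity_difference(preds_a, preds_b))        # 0.5
    print(equal_opportunity_difference(preds_a, labels_a,
                                       preds_b, labels_b))        # 0.5
```

Both metrics return 0 for a perfectly group-fair classifier under their respective criteria; larger absolute values indicate a larger disparity between the two groups.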



Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54, no. 6 (2021): 1–35. doi:10.1145/3457607.

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. “Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints,” 2017. doi:10.48550/arxiv.1707.09457.

Fortunato, Santo, and Darko Hric. “Community Detection in Networks: A User Guide.” Physics Reports 659 (2016): 1–44. doi:10.1016/j.physrep.2016.09.002.

Mehrabi, Ninareh, Fred Morstatter, Nanyun Peng, and Aram Galstyan. “Debiasing Community Detection: The Importance of Lowly-Connected Nodes,” 2019. doi:10.48550/arxiv.1903.08136.

Olteanu, Alexandra, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. “Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries.” Frontiers in Big Data 2 (2019): 13. doi:10.3389/fdata.2019.00013.

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Grira, Nizar, Michel Crucianu, and Nozha Boujemaa. “Unsupervised and Semi-Supervised Clustering: A Brief Survey.” A Review of Machine Learning Techniques for Processing Multimedia Content 1 (2004): 9–16.