In the second part of this series on Algorithmic Bias and Fairness, we’re looking at how we can make artificial intelligence and algorithms fairer. If you’re interested in learning about the math and statistics behind bias, go to http://brilliant.org/jordan and sign up for free. The first 200 people to sign up will also get 20% off the annual Premium subscription.
Twitter – http://twitter.com/jordanbharrod
Instagram – http://www.instagram.com/jordanbharrod
Bender, E. M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. https://doi.org/10.1162/tacl_a_00041
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Retrieved from http://arxiv.org/abs/1607.06520
Chouldechova, A. (2017). Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. https://doi.org/10.1089/big.2016.0047
DeVries, T., Misra, I., Wang, C., & van der Maaten, L. (2019). Does Object Recognition Work for Everyone? Retrieved from http://arxiv.org/abs/1906.02659
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., & Crawford, K. (2018). Datasheets for Datasets. Retrieved from http://arxiv.org/abs/1803.09010
Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. Retrieved from http://arxiv.org/abs/1706.04599
Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Retrieved from http://arxiv.org/abs/1610.02413
Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. https://doi.org/10.1080/1369118X.2019.1573912
Hovy, D., & Spruit, S. L. (2016). The social impact of natural language processing.
Hu, L., & Chen, Y. (2020). Fair classification and social welfare. https://doi.org/10.1145/3351095.3372857
Jia, S., Meng, T., Zhao, J., & Chang, K.-W. (2020). Mitigating Gender Bias Amplification in Distribution by Posterior Regularization. Retrieved from http://arxiv.org/abs/2005.06251
Jo, E. S., & Gebru, T. (2020). Lessons from archives: Strategies for collecting sociocultural data in machine learning. https://doi.org/10.1145/3351095.3372829
Kasy, M., & Abebe, R. (n.d.). Fairness, equality, and power in algorithmic decision-making, 1–14.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. https://doi.org/10.4230/LIPIcs.ITCS.2017.43
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Mitchell, S., Potash, E., Barocas, S., D’Amour, A., & Lum, K. (2018). Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions, 1–22. Retrieved from http://arxiv.org/abs/1811.07867
Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. https://doi.org/10.3389/fdata.2019.00013
Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. https://doi.org/10.1145/3306618.3314244
Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving Face: Investigating the ethical concerns of facial recognition auditing. https://doi.org/10.1145/3375627.3375820
Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., & Sculley, D. (2017). No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. Retrieved from http://arxiv.org/abs/1711.08536
Stock, P., & Cisse, M. (2018). ConvNets and ImageNet beyond accuracy: Understanding mistakes and uncovering biases. Lecture Notes in Computer Science, 11210, 504–519. https://doi.org/10.1007/978-3-030-01231-1_31
Suresh, H., & Guttag, J. V. (2019). A Framework for Understanding Unintended Consequences of Machine Learning. Retrieved from http://arxiv.org/abs/1901.10002
Verma, S., & Rubin, J. (2018). Fairness definitions explained. Proceedings – International Conference on Software Engineering, 1–7. https://doi.org/10.1145/3194770.3194776
Wang, T., Zhao, J., Yatskar, M., Chang, K. W., & Ordonez, V. (2019). Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. https://doi.org/10.1109/ICCV.2019.00541
Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive Inequity in Object Detection. Retrieved from http://arxiv.org/abs/1902.11097
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2017). Men also like shopping: Reducing gender bias amplification using corpus-level constraints. https://doi.org/10.18653/v1/d17-1323
Some interesting Twitter threads containing these resources and more:
Good papers I like on this:
— Deb Raji (@rajiinio) June 22, 2020
One of my research topics in grad school was fairness, accountability and transparency (FAT) in NLP systems. I've kept up with the literature.
Here's a quick thread of papers I'd recommend reading on the topic if you want to get up to speed.
— Rachael Tatman (@rctatman) June 22, 2020
This is the first thing I also thought of after seeing this thread!
If you're interested in computer vision x fairness, here are some good introductory papers I like: https://t.co/rWE8wDyz9U
— Deb Raji (@rajiinio) June 23, 2020
New paper by @red_abebe & me:
Fairness, Equality, and Power in Algorithmic Decision-Making: https://t.co/wzFjj0wPpR
Standard def of fairness: Absence of discrimination for individuals with the same "merit."
We argue: Such definitions have three key limitations.
— Maximilian Kasy (@maxkasy) June 8, 2020
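If you want the "standard definition" from that last thread in symbols, here is a rough sketch (my notation, not the paper's): write D for the decision, A for group membership, and M for "merit." Fairness as the absence of discrimination then says that two people with the same merit should face the same decision probabilities regardless of group:

P(D = 1 | M = m, A = a) = P(D = 1 | M = m, A = a')   for every merit level m and every pair of groups a, a'.

Equalized odds from Hardt, Price, & Srebro (2016, above) fits this template with the true outcome standing in for merit; Kasy and Abebe's paper asks what definitions of this shape leave out.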