Algorithmic bias in machine learning is a well-known, well-studied, and fundamental problem [4]. The emerging field of AI fairness aims to ensure that decisions driven by algorithms are fair. Because AI algorithms are now ubiquitous and can be found in virtually all areas of life and work (agriculture, banking, criminal justice, medicine, etc.), fairness is becoming an increasingly important attribute for ensuring user trust in current and future AI-based systems. Please take part in our survey, which we are conducting together with our colleagues at the Human-Centered AI Lab in Sydney.

Participants in this survey are asked to assess whether an output provided by an AI-based application is justified or not.

Disclaimer: Participation in this experimental survey is voluntary and anonymous; the responses are used for scientific study only, and the survey does not raise any ethical issues.

Note: If you used an identifying access code to access this survey, rest assured that this code will not be stored with your responses. It is maintained in a separate database and updated only to indicate whether or not you have completed this survey. There is no way to match access codes with survey responses.

Please take the following short survey; it will take about 10 minutes:

https://experiments.human-centered.ai/survey/143791?lang=en


Background information:

Description: Within our FWF project, we also investigate a timely and relevant question: do different machine learning (ML) explanations [1], [2] affect users' perception of fairness in ML-informed decision making? And if so, how do ML explanations affect users' perception of fairness [3]?
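To make concrete what an "ML explanation" can look like, here is a minimal, purely illustrative sketch in Python. It uses scikit-learn rather than the AIX360 toolkit from [2], and the dataset, model, and feature-importance method are our own assumptions, not the actual materials shown in the survey:

# Minimal sketch: derive a feature-importance "explanation" of the kind
# that could be presented to a user alongside a model's decision.
# Dataset, model, and method are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a standard benchmark dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")

Whether presenting such importance scores (or other explanation styles from the taxonomy in [1]) changes how fair users judge the resulting decisions to be is exactly the question our survey addresses.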

References:

[1] Vijay Arya, Rachel Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic et al. (2019). One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv:1909.03012. https://arxiv.org/abs/1909.03012

[2] AI Explainability 360: an extensible open-source toolkit that helps users comprehend, by various means, how machine learning models predict labels throughout the AI application lifecycle: http://aix360.mybluemix.net/

[3] Jason A. Colquitt & Jessica B. Rodell (2015). Measuring Justice and Fairness. In: R. S. Cropanzano & M. L. Ambrose (Eds.), The Oxford Handbook of Justice in the Workplace. Oxford University Press. Available online: https://www.academia.edu/39980020/Measuring_Justice_and_Fairness

[4] James Zou & Londa Schiebinger (2018). AI can be sexist and racist—it’s time to make it fair. Nature, 559, doi:10.1038/d41586-018-05707-8

Note: This topic is now being taken seriously in the AI/ML community; an encouraging sign is that it is being taught in a regular machine learning class at MIT: https://www.youtube.com/watch?v=wmyVODy_WD8