West Virginia Law Review

Document Type

Article

Abstract

Artificial intelligence is based, in part, on learning algorithms that continually monitor and embed new data from large numbers of sources to produce ever "improved" decisions, whether those decisions are applied to the physical world, as in the operation of smart cars, or to matters such as the extension of credit to a loan applicant. Yet it is well known that algorithms and the resulting decision-making AI models are plagued by unintended bias and discriminatory results. Data science scholars have attempted to address bias and discrimination by proposing multiple mathematical measures of fairness. However, these mathematical measures can conflict with one another, and they can be incompatible with legal concepts of fairness, especially those found in non-discrimination laws. This article provides an introduction to the multiple meanings of fairness in both the data science and legal disciplines and uses a case study of AI and alternative-data credit scoring to illustrate how the choice among these disciplinary meanings of fairness will significantly affect societal outcomes. In conclusion, it proposes that a socio-technical approach to AI fairness, one that incorporates legal concepts, will increase the acceptance, legitimacy, and trust of those systems.
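To make the conflict among mathematical fairness measures concrete, the following Python sketch (an illustration added here, not drawn from the article) computes two widely used metrics, demographic parity and equal opportunity, on a hypothetical credit-approval dataset. When repayment base rates differ across groups, a model can satisfy one metric while violating the other; this is the kind of inter-metric conflict the abstract describes.

# Illustrative sketch only: a hypothetical credit-approval dataset with two
# groups whose repayment base rates differ (80% vs. 40%). The names and data
# are assumptions for demonstration, not the article's method or data.

def demographic_parity_gap(approved, group):
    """Difference in approval rates between group A and group B."""
    rate = {}
    for g in ("A", "B"):
        idx = [i for i, gr in enumerate(group) if gr == g]
        rate[g] = sum(approved[i] for i in idx) / len(idx)
    return rate["A"] - rate["B"]

def equal_opportunity_gap(approved, repaid, group):
    """Difference in approval rates among applicants who would repay."""
    rate = {}
    for g in ("A", "B"):
        idx = [i for i, gr in enumerate(group) if gr == g and repaid[i] == 1]
        rate[g] = sum(approved[i] for i in idx) / len(idx)
    return rate["A"] - rate["B"]

# Hypothetical applicants: group label, whether they would repay,
# and whether the model approved them (here, exactly the repayers).
group    = ["A"] * 10 + ["B"] * 10
repaid   = [1,1,1,1,1,1,1,1,0,0] + [1,1,1,1,0,0,0,0,0,0]
approved = list(repaid)

print(demographic_parity_gap(approved, group))         # 0.4: parity violated
print(equal_opportunity_gap(approved, repaid, group))  # 0.0: equal opportunity satisfied

Here a perfectly accurate model approves every applicant who would repay, so it satisfies equal opportunity exactly, yet its approval rates differ by 40 percentage points across groups, violating demographic parity. Forcing the parity gap to zero would instead require approving or rejecting some applicants against their true repayment outcomes.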
