New Research about Using Automated Decision Making to Mitigate Bias in Employment Decisions - Labor and Employment Law News

Posted on: Dec 16, 2019

By Allison Martin, Indiana University Robert H. McKinney School of Law

An interesting article, Stretching Human Laws to Apply to Machines: The Dangers of a 'Colorblind' Computer (forthcoming in the Florida Law Review), has been posted on SSRN. The authors, Zach Harned, a Stanford student, and Hanna Wallach, with Microsoft Research, use antidiscrimination law under Title VII as a case study.

The authors state, “A common strategy for mitigating bias in employment decisions is to ‘blind’ human decision makers to the sensitive attributes of the applicants, such as race.” According to their research, however, this strategy does not work well for machine learning systems because such systems “are adroit at using proxies for race if available.” A second strategy, leaving the machine learning system unblinded to race, would violate Title VII. The authors therefore offer a third strategy: partially blinding the machine learning system, which, they contend, would mitigate bias in employment decisions while complying with Title VII.
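The proxy problem the authors describe can be illustrated with a small, purely hypothetical sketch. The data below is synthetic and the zip-code proxy is an invented example (it does not come from the article): even when the sensitive attribute is removed from the inputs, a correlated feature can recover it almost perfectly.

```python
import random

random.seed(0)

# Hypothetical synthetic applicant pool: the sensitive attribute "group"
# is "blinded" (excluded from the features a model would see), but the
# invented feature "zip_code" correlates strongly with it.
applicants = []
for _ in range(1000):
    group = random.random() < 0.5
    # With 90% probability, zip code "A" goes with group members
    # and zip code "B" with non-members; 10% of the time it is flipped.
    matches = random.random() < 0.9
    zip_code = "A" if (group if matches else not group) else "B"
    applicants.append({"zip_code": zip_code, "group": group})

# A rule that never sees `group` and uses only the proxy still
# reconstructs it with high accuracy.
guess = {"A": True, "B": False}
accuracy = sum(
    guess[a["zip_code"]] == a["group"] for a in applicants
) / len(applicants)
print(f"group recovered from zip code alone: {accuracy:.0%} accuracy")
```

This is what the authors mean by machine learning systems being “adroit at using proxies”: blinding removes the attribute itself, but not the statistical signal left behind in correlated features.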

If you would like to submit content or write an article for the Labor & Employment Law Section, please email Kara Sikorski at


Indianapolis Bar Association (IndyBar) est. 1878 | 4,536 Members (as of 2.11.21)