By Meghan Lehner, Cleveland Lehner Cassidy
The use of artificial intelligence (AI) can be an efficient and cost-effective way for employers to handle tasks such as hiring, compensation analysis, and administrative and clerical functions, but it is not without problems, chief among them the risk of bias and discrimination.
AI software for hiring is often marketed as eliminating or reducing biased human decision-making. That is not always the case. In practice, AI can introduce more bias because these systems depend on large collections of data, which can themselves be biased. Specifically, when companies train computer programs to select the best candidates for interviews, the programs often learn from prior resumes or the attributes of previously hired, successful candidates. Given existing gender and racial disparities in certain professions, using past data will only perpetuate the problem, as the algorithms are taught to favor specific characteristics or experiences.
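To see the mechanism concretely, consider a toy screening model. This is a hypothetical sketch, not any vendor's actual system: the "model" simply scores candidates by the historical hire rate for each attribute value, and the attribute here is an invented stand-in for any resume feature that happens to proxy for gender. If the history is skewed, the scores are too.

```python
# Toy illustration: a screening model "trained" on past hiring outcomes
# learns whatever correlates with being hired -- including proxies for a
# protected class -- and reproduces that skew when scoring new candidates.
# The attribute name is hypothetical; any gender-correlated resume
# feature would behave the same way.

# Historical records: (has_gender_correlated_attribute, was_hired).
# In this synthetic history, most past hires lacked the attribute.
history = [
    (False, True), (False, True), (False, True), (False, True),
    (True,  False), (True,  False), (True,  True), (False, False),
]

def hire_rate(records, flag):
    """Estimate P(hired | attribute == flag) from past outcomes."""
    subset = [hired for attr, hired in records if attr == flag]
    return sum(subset) / len(subset)

# The "model" is just the historical hire rate for each attribute value.
score_without_attr = hire_rate(history, False)  # 4 of 5 hired -> 0.8
score_with_attr    = hire_rate(history, True)   # 1 of 3 hired -> ~0.33

# Two otherwise identical candidates now receive different scores purely
# because of an attribute that proxies for gender in the training data.
print(score_without_attr > score_with_attr)  # True
```

Nothing in the model refers to gender directly; the disparity comes entirely from the historical labels, which is why simply removing a "gender" column does not make such a system neutral.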
As more people turn to the internet to find jobs, it is almost certain that AI, and ad targeting in particular, will replicate and even aggravate enduring biases. Older laws, such as Title VII of the Civil Rights Act of 1964, which prohibits race and sex discrimination, and the Americans with Disabilities Act, are not well equipped to protect against the types of discrimination that result from AI.
For example, an employer may choose to display ads for engineering jobs only to men. In that case, not only is a man likely to be selected for the position, but prospective female applicants will not even know that the job existed and will therefore be unable to pursue legal action for being excluded from the hiring pool.
ProPublica reported in 2016 that Facebook allowed advertisers to exclude users with black, Hispanic, and other “ethnic affinities” from seeing housing ads. In September 2018, the ACLU filed a charge with the Equal Employment Opportunity Commission, alleging that Facebook violated labor and civil rights laws by allowing employers to target ads to (mostly younger) men, to the exclusion of (mostly older) women and gender-nonbinary job seekers. And Amazon recently abandoned a multiyear project to develop an AI hiring program because it kept discriminating against women: the AI “taught itself that male candidates were preferable” to female candidates. To make matters worse, it rewarded the use of “macho verbs” on resumes rather than looking at actual technical qualifications.
Employers Use AI to Monitor Employees’ Web Searches
Even after the hiring process, AI can cause problems for employees. Companies employ AI to track, in detail, their employees’ movements on the web. Facebook “likes” are used to infer sexual orientation, political affiliation, and religious beliefs. And Facebook is not employees’ only problem; employers are also using AI to mine employees’ Google searches for data that may be used to inform employment decisions. For example, one healthcare analytics firm, whose clients include some of the biggest employers in the country, already uses workers’ internet search histories and medical insurance claims to predict who is at risk of developing diabetes or considering becoming pregnant.
Employers Use Games to Compile AI Data
Some employers are bringing games into the hiring process. These specially designed video games are supposed to give employers a better indication of how well candidates think on their feet than a resume or other list of accomplishments would.
For example, Unilever asks some of its candidates to spend roughly a half-hour playing games on a computer or mobile device. The 13 games are designed to reveal the candidates’ personalities, problem-solving skills and communication style. Applicants who do well advance to the next step in the hiring process.
According to a report from the consulting firm Deloitte Insights, games are one of the approaches recruiters are using to attract young candidates.
If you would like to submit content or write an article for the Labor & Employment Law Section, please email Kara Sikorski at firstname.lastname@example.org.