Artificial intelligence used to evaluate job candidates must not become a tool that exacerbates discrimination.

American democracy depends on everyone having equal access to work. But in reality, people of color, women, those with disabilities and other marginalized groups experience unemployment or underemployment at disproportionately high rates, especially amid the economic fallout of the Covid-19 pandemic. Now the use of artificial intelligence technology for hiring may exacerbate those problems and further bake bias into the hiring process.

At the moment, the New York City Council is debating a proposed new law that would regulate automated tools used to evaluate job candidates and employees. If done right, the law could make a real difference in the city and have wide influence nationally: In the absence of federal regulation, states and cities have used models from other localities to regulate emerging technologies.

Over the past few years, a growing number of employers have started using artificial intelligence and other automated tools to speed up hiring, save money and screen job applicants without in-person interaction, features that have become all the more attractive during the pandemic. These technologies include screeners that scan résumés for key words, games that claim to assess attributes such as generosity and appetite for risk, and even emotion analyzers that claim to read facial and vocal cues to predict whether candidates will be engaged and team players.

In most cases, vendors train these tools to analyze workers who are deemed successful by their employer and to measure whether job applicants have similar traits. This approach can worsen underrepresentation and social divides if, for example, Latino men or Black women are inadequately represented in the pool of employees. Similarly, a résumé-screening tool could identify Ivy League schools on successful employees’ résumés and then downgrade résumés from historically Black or women’s colleges.
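To make that failure mode concrete, here is a minimal, hypothetical sketch of a résumé screener trained only on which incumbent employees were labeled successful. The résumés, labels and school names are invented for illustration and do not reflect any real vendor’s system.

```python
# Hypothetical toy résumé screener: trained only on which current employees were
# rated "successful", it can learn school names as proxies. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "Harvard economics analyst internship",
    "Yale statistics consulting award",
    "Spelman College mathematics research award",
    "Howard University economics analyst internship",
]
successful = [1, 1, 0, 0]  # mirrors a workforce whose incumbents skew toward Ivy graduates

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, successful)

# The model now rewards "harvard" and "yale" and penalizes "spelman" and "howard",
# even though school name says nothing about ability to do the job.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for school in ["harvard", "yale", "spelman", "howard"]:
    print(school, round(weights[school], 2))
```

Because school names separate the two classes in this toy data, the model treats them as strong signals, which is exactly the kind of proxy effect that audit requirements are meant to surface.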

In its current form, the council’s bill would require vendors that sell automated assessment tools to audit them for bias and discrimination, checking whether, for example, a tool selects male candidates at a higher rate than female candidates. It would also require vendors to tell job applicants the characteristics the test claims to measure. This approach could be helpful: It would shed light on how job applicants are screened and force vendors to think critically about potential discriminatory effects. But for the law to have teeth, we recommend several important additional protections.
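As a rough illustration of the kind of check such an audit might involve, the sketch below compares selection rates by gender and computes the ratio between them. The data, column names and the 0.8 flag threshold are illustrative assumptions, not anything the bill specifies.

```python
# Hypothetical sketch of a selection-rate comparison a bias audit might run.
# Column names, data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

applicants = pd.DataFrame({
    "gender":   ["male"] * 4 + ["female"] * 4,
    "selected": [1, 1, 1, 0,   1, 0, 0, 0],
})

# Selection rate per group: the share of applicants the tool advanced.
rates = applicants.groupby("gender")["selected"].mean()
print(rates)  # female: 0.25, male: 0.75

# Impact ratio: lowest group rate divided by highest group rate.
# Many audits flag a ratio below 0.8 (the informal "four-fifths" guideline).
impact_ratio = rates.min() / rates.max()
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.33 here, a clear red flag
```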

The measure must require companies to publicly disclose what they find when they audit their tech for bias. Despite pressure to limit its scope, the City Council must ensure that the bill would address discrimination in all forms — on the basis of not only race or gender but also disability, sexual orientation and other protected characteristics.

These audits should consider the circumstances of people who are multiply marginalized — for example, Black women, who may be discriminated against because they are both Black and women. Bias audits conducted by companies typically don’t do this.
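A minimal way to make such an audit intersectional is to compute selection rates over combined groups rather than over single attributes, as in this sketch (again with invented data and column names):

```python
# Hypothetical sketch: auditing selection rates for intersectional groups
# (race x gender) rather than race or gender alone. Data is invented.
import pandas as pd

applicants = pd.DataFrame({
    "race":     ["Black"] * 6 + ["white"] * 6,
    "gender":   ["female", "female", "female", "male", "male", "male"] * 2,
    "selected": [0, 0, 0, 1, 1, 1,   1, 1, 1, 1, 1, 1],
})

print(applicants.groupby("race")["selected"].mean())              # Black: 0.5, white: 1.0
print(applicants.groupby("gender")["selected"].mean())            # female: 0.5, male: 1.0
print(applicants.groupby(["race", "gender"])["selected"].mean())  # Black female: 0.0

# Each single-attribute view shows a gap, but only the combined view reveals
# that Black women specifically were never advanced by the tool.
```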

The bill should also require validity testing, to ensure that the tools actually measure what they claim to, and it must make certain that they measure characteristics that are relevant for the job. Such testing would interrogate whether, for example, candidates’ efforts to blow up a balloon in an online game really indicate their appetite for risk in the real world — and whether risk-taking is necessary for the job. Mandatory validity testing would also eliminate bad actors whose hiring tools do arbitrary things like assess job applicants’ personalities differently based on subtle changes in the background of their video interviews.
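One simple form of validity testing is to check whether a tool’s score actually tracks the outcome it claims to predict, for example whether a balloon-game “risk appetite” score bears any relationship to performance on the job. A hypothetical sketch, with invented scores and ratings:

```python
# Hypothetical sketch of a validity check: does the assessment score relate to
# actual job performance? Scores and ratings below are invented for illustration.
from scipy.stats import spearmanr

game_risk_scores   = [0.2, 0.5, 0.9, 0.4, 0.7, 0.3, 0.8]   # tool's "risk appetite" output
performance_rating = [3.1, 3.4, 3.0, 3.6, 3.2, 3.5, 2.9]   # supervisor ratings on the job

corr, p_value = spearmanr(game_risk_scores, performance_rating)
print(f"Spearman correlation: {corr:.2f} (p = {p_value:.2f})")
# A near-zero or negative correlation would suggest the score does not measure
# anything relevant to the job, however plausible the game seems.
```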

In addition, the City Council must require vendors to tell candidates how they will be screened by an automated tool before the screening, so candidates know what to expect. People who are blind, for example, may not suspect that their video interview could score poorly if they fail to make eye contact with the camera. If they know what is being tested, they can engage with the employer to seek a fairer test. The proposed legislation currently before the City Council would require companies to alert candidates within 30 days if they have been evaluated using A.I., but only after they have taken the test.

Finally, the bill must cover not only the sale of automated hiring tools in New York City but also their use. Without that stipulation, hiring-tool vendors could escape the obligations of this bill by simply locating sales outside the city. The council should close this loophole.

With this bill, the city has the chance to combat new forms of employment discrimination and get closer to the ideal of what America stands for: making access to opportunity more equitable for all. Unemployed New Yorkers are watching.

Resource:

Givens, Alexandra Reeve, Hilke Schellmann, and Julia Stoyanovich. “We Need Laws to Take On Racism and Sexism in Hiring Technology.” The New York Times, March 17, 2021, sec. Opinion. https://www.nytimes.com/2021/03/17/opinion/ai-employment-bias-nyc.html.

Personal Analysis:

Numerous studies and observations demonstrate that AI operates with bias in many areas. People may assume that machines are neutral, but they are built by people who consciously or unconsciously hold a variety of biases, so it is not surprising that the outcomes reflect those biases. This motivated me to investigate further examples of the phenomenon, which I found in search engines, human figure detection, beauty contests, population health management, police facial recognition and, most importantly for my purposes, in many social media algorithms. Since my study relates to technology and social media, this is one of the factors I should keep an eye on, as it might provide me with fresh ideas.

Related Articles:

– Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It

https://time.com/5520558/artificial-intelligence-racial-gender-bias/

– A social media app just for ‘females’ intentionally excludes trans women — and some say its face-recognition AI discriminates against women of color, too

https://www.businessinsider.com/giggle-app-uses-ai-to-exclude-trans-women-ceo-says-2022-1