A study has found that New York City’s law requiring companies to disclose how their use of artificial intelligence tools affects their hiring decisions has not been successful so far, The Wall Street Journal reported.
According to the Journal, the rule, issued by the Department of Consumer and Worker Protection to implement the law, requires employers that use software to assist with hiring and promotion decisions to audit those tools annually for potential race and gender bias, and then to publish the results on the career sections of their websites.
But only 18 of 391 companies analyzed by researchers at Cornell University—many of which have headquarters or large operations in New York—disclosed information. And even in instances when audit results were posted, finding the notices was “challenging, time-consuming and frustrating,” the researchers wrote.
The results demonstrated that the law has very limited value for job seekers and, at best, might offer a road map for how to improve future regulations, according to the researchers. “We now know more of the failure points,” said Jacob Metcalf, a co-author of the Cornell study and leader of an AI initiative for the nonprofit technology research group Data & Society, in an interview with the Journal.
The law gives employers “almost unlimited discretion” to decide whether they are within the scope of the rule, Metcalf said, because it applies only to companies whose tools “substantially assist or replace human decision-making.” The Journal noted that the rule as originally proposed would have required employers using most automated employment decision tools to audit them, but the final version narrowed the scope.
Many employers are saying, “ ‘We use this tool but it doesn’t entirely replace human decision-making so it doesn’t fall under the definition of an AEDT [automated employment decision tool],’ ” said Ben Fichter, co-founder of ConductorAI, a firm that has completed audits for a few dozen technology vendors, in an interview with the Journal.
A 2022 report by SHRM, a trade group for human-resources executives, found that 64 percent of companies were using artificial intelligence or other forms of automation to review or screen applicant résumés.
Employers that hire for New York City-based roles and are subject to the rule have to publish, on their websites, adverse-impact ratios, which show whether a process has a disparate effect on any race or gender. They must also include a notice that they are using the tools and “instructions for how an individual can request an alternative selection process or a reasonable accommodation under other laws, if available.” The law does not, however, require an employer or employment agency to provide an alternative selection process.
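An adverse-impact ratio is conventionally computed by dividing each group's selection rate by the highest selection rate among the groups being compared; a ratio well below 1.0 for a group suggests a disparate effect. As a minimal sketch of that arithmetic (the function names and the screening numbers here are hypothetical, for illustration only, and do not come from any audit discussed in the article):

```python
def impact_ratios(group_stats):
    """Adverse-impact ratio per group: each group's selection rate
    divided by the highest selection rate among all groups.

    group_stats maps a group label to (selected, applicants)."""
    rates = {g: selected / applicants
             for g, (selected, applicants) in group_stats.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: 40 of 100 applicants selected
# in one group, 24 of 100 in another.
stats = {"group_a": (40, 100), "group_b": (24, 100)}
print(impact_ratios(stats))  # group_a: 1.0, group_b: 0.6
```

Under the common "four-fifths" rule of thumb used in U.S. employment-selection analysis, a ratio under 0.8 (as in the hypothetical group_b above) would typically flag a process for closer scrutiny.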
The Journal reported that companies that fail to comply are subject to penalties of up to $1,500 a day per violation. It noted that the law is a disclosure law, so it doesn’t require employers to change their hiring procedures.
A spokesman for New York City’s Department of Consumer and Worker Protection, which oversees the law, said enforcement is driven by complaints, and the agency has received no complaints since the rule took effect.
“Unless there’s some enforcement action, employers will probably ignore it,” said Adam Klein, managing partner of Outten & Golden, a law firm that represents workers in employment litigation, in an interview with the Journal. If companies do conduct and disclose their bias audits, he added, “there’s no way to verify the information.”
Last February, a Morehouse College graduate sued Workday, one of the largest vendors of HR technology, alleging that its software discriminates on the basis of race, disability and age. The plaintiff said he had applied for 80 to 100 jobs with employers that used Workday’s hiring tools and had been rejected by all of them. A federal judge granted Workday’s motion to dismiss the suit but gave the plaintiff until Feb. 20 to file a new complaint with additional evidence.
“We believe this lawsuit is without merit and completely devoid of factual allegations and assertions that support the claim,” Workday said in a statement. Workday also said that it hasn’t audited its tools for the purposes of the New York City law, something other HR technology vendors have done.
Lawyers and technology companies interviewed by the Journal said that employers will have to get used to greater regulation of their systems. “This is just the beginning of what will be a larger swath of obligations that’s coming,” said Shea Brown, founder of BABL AI, which audits AI risk and systems for companies, in an interview with the Journal. The European Union will soon release a comprehensive set of rules of its own governing artificial intelligence, Brown said.