I’ve seen several people mention this article about a new NYC law that holds the developers of artificial intelligence (AI) algorithms accountable for how their algorithms work. According to an article in Ars Technica, New York’s City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion.
According to the article (“The movement to hold AI accountable gains more steam,” written by Khari Johnson), the NYC law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers also must tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted.
And there’s probably more to come. In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission; three of the FTC’s five members support stronger regulation of algorithms. An AI Bill of Rights proposed last month by the White House calls for disclosing when AI makes decisions that impact a person’s civil rights, and it says AI systems should be “carefully audited” for accuracy and bias, among other things.
Elsewhere, European Union lawmakers are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems. Countries including China, Canada, Germany, and the UK have also taken steps to regulate AI in recent years.
Julia Stoyanovich, an associate professor at New York University who served on the New York City Automated Decision Systems Task Force, says she and students recently examined a hiring tool and found it assigned people different personality scores based on the software program with which they created their résumé. Other studies have found that hiring algorithms favor applicants based on where they went to school, their accent, whether they wear glasses, or whether there’s a bookshelf in the background.
Stoyanovich supports the disclosure requirement in the NYC law, but she says the auditing requirement is flawed because it only applies to discrimination based on gender or race. She says the algorithm that rated people based on the font in their résumé would pass muster under the law because it didn’t discriminate on those grounds.
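For a sense of what a bias audit might actually test (the article doesn’t specify a methodology), here’s a minimal Python sketch of the EEOC’s “four-fifths rule,” a common disparate-impact measure used in employment discrimination analysis. The group names and numbers are hypothetical, and this is an illustration of the concept, not the NYC law’s required audit procedure.

# A minimal, hypothetical sketch of a disparate-impact check an algorithm
# audit might include. It applies the EEOC "four-fifths rule": each group's
# selection rate should be at least 80% of the highest group's rate.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected_count, applicant_count)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the top rate.
    return {group: rate / top >= threshold for group, rate in rates.items()}

# Hypothetical numbers for illustration only.
by_sex = {"women": (30, 100), "men": (50, 100)}
print(four_fifths_check(by_sex))  # {'women': False, 'men': True}

Note that a check like this, run only across sex or race categories, would never surface the résumé-font quirk Stoyanovich describes, which is exactly her point about the narrow scope of the auditing requirement.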
Stoyanovich says she hopes the disclosure provision in the NYC law starts a movement toward meaningful empowerment of individuals, especially in instances when a person’s livelihood or freedom is at stake. She also advocates for public input in audits of algorithms.
The article goes on to discuss much more about AI, bias, and auditing algorithms, and there’s plenty more to say about this topic, especially as it relates to our industry and to AI as evidence. That’s a topic for tomorrow! Stay tuned! 🙂
So, what do you think of the new NYC law? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.