
3 things businesses need to know as NYC begins enforcing its AI hiring law




In July, New York City officially began cracking down on companies that run afoul of its first-in-the-nation law, Local Law 144, which governs the use of artificial intelligence in employment decisions.

Even companies that are not based in New York City but have operations and employees there, particularly global enterprises, must comply with the new regulation. The law doesn’t explicitly prohibit AI; rather, it provides guidelines for how the technology should be used when making hiring decisions.

That’s an important distinction. Organizations across industries (healthcare, manufacturing, retail and countless others) already use intelligent technology in a multitude of ways: oncologists use AI to help diagnose cancer with a high degree of precision, manufacturers and retailers predict buying patterns to improve logistics and the consumer experience, and nearly all music recorded today runs through auto-tune to correct or enhance a singer’s pitch.

When it comes to personnel matters, companies currently use AI to match relevant candidates with the right jobs, and this is Local Law 144’s focus. After multiple delays, the new law has left many companies jittery at a time when job openings remain elevated and unemployment is near historic lows.


Regulate, yes

Boldface tech names such as Microsoft President Brad Smith and Google CEO Sundar Pichai have endorsed a regulatory framework, and transparency is always a good thing. “I still believe A.I. is too important not to regulate and too important not to regulate well,” Pichai wrote in the Financial Times.

Conversely, if not done well, regulations could negatively impact job seekers and hiring managers by restricting the insightful information and tailored experiences that form the crux of a positive employment process. 

Thirty years ago, recruiters sifted through stacks of resumes sitting on their desks. Candidates were often selected based on inconsistent criteria: an Ivy League education, or plain luck over how high in the pile their resume happened to sit, something over which they had no control. And when technology isn’t involved, humans’ unconscious biases add another untraceable filter.

AI delivered scalability and accuracy to help level the playing field, matching individuals with the required skills and experience to the right roles regardless of where they sit in the proverbial pile of resumes. AI also helps recruiters see the whole person, including skills the individual may not have thought to highlight on their resume. AI can’t prevent a recruiter or hiring manager from taking shortcuts, but it can make them less necessary by surfacing relevant resumes that might otherwise be lost in the pile.


The combination of human control and AI support counters bias in two ways. First, one cause of bias in human decision-making is that people look for shortcuts to solving problems, like focusing only on candidates from Ivy League schools rather than investing the time and effort to source and evaluate candidates from non-traditional backgrounds. AI support removes much of the need for that shortcut by doing the sourcing legwork.

Second, bias detection with adverse-impact reporting can expose that bias in real time, allowing the organization to intervene before biased decisions are made.
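
To make “adverse-impact reporting” concrete: at its core, it compares selection rates across demographic groups and flags any group whose rate falls too far below the benchmark. Here is a minimal sketch in Python; the group names and counts are hypothetical, and the four-fifths threshold is EEOC guidance used for illustration, not a number spelled out in the NYC law.

```python
# Minimal sketch of an adverse-impact report. Each group's selection
# rate is compared against the most-selected group, and impact ratios
# below the EEOC's four-fifths rule of thumb are flagged for review.
# All group names and counts below are hypothetical.

FOUR_FIFTHS = 0.8  # illustrative threshold from EEOC guidance

def adverse_impact_report(outcomes: dict[str, tuple[int, int]]) -> None:
    """outcomes maps group name -> (candidates advanced, candidates screened)."""
    rates = {group: advanced / screened
             for group, (advanced, screened) in outcomes.items()}
    benchmark = max(rates.values())  # most-selected group sets the benchmark
    for group, rate in sorted(rates.items()):
        ratio = rate / benchmark
        flag = "  <-- review" if ratio < FOUR_FIFTHS else ""
        print(f"{group:10s} rate={rate:.1%}  impact_ratio={ratio:.2f}{flag}")

adverse_impact_report({
    "Group A": (48, 120),  # 40.0% selection rate, the benchmark
    "Group B": (36, 100),  # 36.0%, ratio 0.90, passes
    "Group C": (12, 60),   # 20.0%, ratio 0.50, flagged
})
```

The reporting the law actually requires is more involved (intersectional categories, score-based tools, an independent auditor), but the arithmetic of an impact ratio is this simple, which is what makes real-time flagging feasible.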

Potential laws being debated in Europe could restrict the use of any personalization in the talent acquisition lifecycle. That could hamper employment prospects not only for external candidates, but also for employees already in the company who are looking to move into a new role.

Pulling back hard on the reins of these technologies could actually lead to more bias, because an imperfect human would then be solely in charge of the decision-making process. That could mean a fine under the New York law plus additional federal penalties: the Equal Employment Opportunity Commission has warned companies that they are on the hook for any discrimination in hiring, firing or promotions, even if it is unintentional and regardless of whether AI is involved.

Looking past the fear

No law is perfect, and NYC’s new legislation is no different. One requirement is to notify candidates that AI is being used, a notice that risks becoming like cookie banners on websites or end-user license agreements (EULAs), which most people click through without reading or truly understanding.

Words matter. When reading AI-use notifications, individuals could easily conjure the doomsday images of technology overtaking humanity portrayed in movies. There are countless examples of new technology evoking fear: electricity was thought to be unsafe in the 1800s, and when bicycles were first introduced, they were perceived as reckless, unsightly and unsafe.


Explainability is a key requirement of this regulation, and it is simply good practice. There are ways to minimize fear and improve notifications: make them clear and succinct, and keep legal jargon to a minimum so the intended audience can actually understand the AI that’s in use.

Get compliant now with AI regulation

No one wants to run afoul of New York’s law. So here are three recommendations for business leaders as you work with your legal counsel:

  1. Examine your notification content and user experience. How well are you explaining, in plain English, the use of these technologies to job seekers? As the adage often attributed to Einstein goes, “If you can’t explain it simply, you don’t understand it well enough.” Let people know you’re using an algorithm on the career site, along the lines of: “Here’s what we’re collecting, here’s how we’re going to use it (and how we’re not) and here’s how you can control its use.”
  2. Participate in the regulatory process and engage immediately. The only way to stay ahead of regulation and ensure compliance is to know what’s coming. This was a challenge with the General Data Protection Regulation (GDPR) in Europe: when its compliance period began in May 2018, most businesses were not ready, and the penalties were significant (up to 4% of global annual revenue or €20 million, whichever is greater). Apply those lessons to New York’s law by engaging with like-minded organizations and government bodies at a leadership and executive level. This not only opens your organization to the conversation, but allows for input and alignment on policy, procedures and practices.
  3. Be audit-ready. Look at your entire process, work with your technology providers to identify where these tools are making recommendations and ensure that fairness and responsibility are being applied. New York requires companies to have independent AI auditors. Audits have long been part of the business landscape, such as in accounting, IT security and federal health information privacy. The next question is: Who’s auditing the auditors? That will likely come down to whether a body made up of not just government, but also private and public entities with expertise in these fields, should set reasonable guidelines.

So know your process, have an internal audit ready to go and train your employees on all of this.
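
Audit-readiness is partly a record-keeping exercise: for every AI-assisted recommendation, you want to be able to show which tool produced it, for which candidate and requisition, and what a human ultimately decided. Below is a minimal sketch of such a record; the field names and schema are hypothetical illustrations, not anything mandated by the law.

```python
# Hypothetical audit-trail record for an AI-assisted hiring recommendation.
# Field names are illustrative, not a legally mandated schema.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class AedtDecisionRecord:
    tool_name: str        # which automated tool made the recommendation
    tool_version: str     # versions matter once models are retrained
    candidate_id: str     # internal identifier, not raw PII
    job_req_id: str
    recommendation: str   # e.g. "advance" or "reject"
    human_decision: str   # what the recruiter actually did
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AedtDecisionRecord(
    tool_name="resume-matcher",
    tool_version="2.3.1",
    candidate_id="cand-00042",
    job_req_id="req-0917",
    recommendation="advance",
    human_decision="advance",
)
print(json.dumps(asdict(record), indent=2))  # destined for your audit log store
```

Records like these are what let an independent auditor reconstruct, group by group, how often the tool’s recommendations were followed.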

One country, one law

My final word of caution to business leaders: watch your state lawmakers, who may follow New York’s lead with regulations of their own. We can’t have 50 different versions of AI anti-bias legislation; the federal government needs to step in and bring states together. There are already differences between New York and California. What is going to happen in Nevada, Colorado and other states? If state lawmakers create a patchwork of laws, businesses will find it difficult not just to comply but to operate at all.


State legislators and regulators would be wise to connect with colleagues in bordering states and ask how they’re handling AI in HR. States that share a border share job seekers, so they had better be aligned with one another.

Capitol Hill lawmakers have signaled an interest in working on an AI law, though what that would look like and whether it would include language about employment is not known at this time. 

Disruptive technologies move lightning-fast in comparison to the legislative process. The concern is that by the time the House and Senate act, the technology will have far surpassed the requirements of whatever bill is passed. Then it becomes a hamster wheel of legislation. “It’s a very difficult issue, AI, because it’s moving so quickly,” said New York Senator Chuck Schumer. He’s exactly right. All the more reason why federal lawmakers need to get ahead of the states.

The hiring and promotion process will only improve if there is more, not less, data and user input for AI systems. Why would we ever go back?

Cliff Jurkiewicz is the vice president of global strategy at Phenom.




