Companies work to navigate operational, legal challenges associated with AI in HR systems

As AI deployment continues to accelerate, companies must address challenges associated with the use of AI in HR systems.


Contributors:

Lexie White

Staff Writer

IAPP

Organizations continue to explore the many ways artificial intelligence can streamline and automate existing processes, with human resources being a particular area of focus. Hiring, performance management and employment analytics are a few examples of where employers are trying to use AI to simplify operations.

With most HR tasks requiring a data governance element, stakeholders are wary of additional compliance obligations and challenges stemming from the introduction of AI. Despite the business opportunities they present, AI-driven HR tools may prompt a rethink of organizational safeguards, vendor management and internal governance processes as global regulators begin to examine how these systems affect employees and job candidates.

At a breakout session during the IAPP AI Governance Global North America 2025, Workday Responsible AI Principal Program Manager Veena Calambur noted, "HR is a very people-driven system, so responsible AI can’t just be about the technology. It has to involve the people implementing it and the people affected by it."

The regulatory environment

The main concerns with AI in HR focus on the potential impact tools can have on employees when organizations rely on automated decision-making without meaningful human oversight.

The EU AI Act imposes transparency and human oversight obligations intended to combat the use of these tools for employee tracking and to prevent potential bias. The EU General Data Protection Regulation also provides individuals the right to "obtain human intervention on the part of the controller, to express his or her point of view and to contest" automated decisions.

Last May at the IAPP AI Governance Global Europe 2025, Irish Data Protection Commission Deputy Commissioner Cian O'Brien described how GDPR compliance experiences should ultimately help organizations work toward compliant AI-powered HR tools. He said GDPR data protection impact assessments and subsequent documentation on why the assessment was necessary are applicable in the AI context.

"I think that from the first seven years of GDPR enforcement, you're probably going to see the value of that contemporaneous documentation that assesses risk, that assesses how you responded to risk, regardless of whether it's formally under Article 35 or other formal documentation that is required," he said.

U.S. states, including California, Colorado and Illinois, have also made similar efforts to prevent organizations from making decisions about individuals based solely on AI.

Hintze co-Managing Partner Jennifer Ruehr, CIPP/US, said the U.S. compliance stakes for organizations have increased significantly as regulatory frameworks develop and litigation accelerates. She said while there are many "great use cases" for AI-powered HR tools, "nuanced guardrails" can help businesses move faster while dealing with increased regulation.

"I think the risks are definitely much higher than they've ever been," Ruehr added, noting companies that haven't been working on a global scale are left to "retrofit U.S. programs and educate U.S. HR and IT teams."

Organizational considerations and vendor concerns

At the forefront of compliance considerations, according to Workday's Calambur, is ensuring full adoption of a given AI framework.

"Principles are great in theory, but we have to actually operationalize them," she said. "We have to implement AI review processes that allow us to systematically identify and mitigate risk in our AI development."

Organizations also face practical governance challenges when AI procurement decisions occur without early legal involvement. Littler Mendelson Shareholder Zoe Argento, CIPP/US, said the legal department is "often brought in too late" to matters, meaning the company "may lose the opportunity to add important protections and assurances in the contract."

Compliance is not limited to in-house HR practices. Vendor oversight is particularly critical for companies that outsource recruiting, screening and workforce analytics.

"I think a key point for companies to understand is that HR tends to outsource many functions to vendors, but the employer still retains responsibility for many decisions and functions executed by those vendors, and the employer could be liable for how the AI is used," Argento said.

She also pointed to an insufficient understanding of algorithms and how they arrive at their outputs as a "red flag." That need for human understanding applies to internal HR practices as much as it does to outsourced services.

"They should understand the limitations of the technology, including what information it does and doesn't have and what other factors might figure into its decision-making capabilities, so that they can provide meaningful oversight," Argento said.



Tags:

AI and machine learning, Employment and HR, Risk management, AI governance
