Three out of five people managers now depend on AI for significant decisions regarding their direct reports, extending beyond basic administrative tasks.
According to Resume Builder, 78% of those managers use AI to determine raises, 77% to decide promotions, 66% for layoffs, and 64% for terminations. About 20% regularly let AI make final decisions without human input.
The rapid adoption of AI in people management marks a major shift in workplace decision-making, but it is largely happening without adequate safeguards. Resume Builder’s survey of 1,342 U.S. people managers found that two-thirds of those using AI to manage employees had received no formal AI training, even as nearly half were assessing whether AI could replace their direct reports.
“It’s essential not to lose the ‘people’ in people management,” says Stacie Haller, chief career advisor at Resume Builder. “AI can aid with data-driven insights, but it lacks context, empathy, and judgment. AI outcomes are based on its input data, which can be flawed, biased, or manipulated.”
Companies have encouraged people managers to use AI to boost efficiency, speed up decision-making, cut overhead, and surface data-driven insights that improve productivity and scalability. Yet this rush to automation may be exposing risks that organizations haven’t fully anticipated.
Cleo Valeroso, VP of people at AI Squared, a company that helps organizations integrate AI, has seen the problem firsthand. “When managers use AI without understanding its workings or potential errors, they tend to trust the process,” she says.
One problem Valeroso sees frequently is unwarranted trust in resume screening and ranking. An AI tool might generate a list of the top 10 candidates, and that list quietly becomes the shortlist, yet “no one questions how the list was generated or what data it prioritized,” she explains.
Practices like these can perpetuate bias in subtle ways. Valeroso notes that hiring algorithms often favor candidates with particular job titles or employers on their resumes, screening out people who took less conventional but equally valid career paths. “These tools are only as effective as the data they’re trained on,” she says. “If the historical data defines a strong performer, the system will replicate that, including its flaws.”
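To make that dynamic concrete, here is a minimal, hypothetical Python sketch (all data, names, and numbers are invented for illustration, using scikit-learn) of how a ranking model trained on past hiring decisions reproduces the preferences baked into those decisions:

```python
# Hypothetical illustration: a resume-ranking model trained on biased
# historical decisions reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 1: candidate skill score (what we'd want the model to use).
skill = rng.normal(0, 1, n)
# Feature 2: whether the candidate's last employer is on a "prestige" list,
# irrelevant to ability but correlated with past hiring decisions.
prestige_employer = rng.integers(0, 2, n)

# Historical labels: past recruiters hired mostly on skill but gave a
# sizable bump to prestige employers. The bias is baked into the labels.
hired = (skill + 1.5 * prestige_employer + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(
    np.column_stack([skill, prestige_employer]), hired
)

# Two equally skilled candidates; only the employer pedigree differs.
candidates = np.array([[0.5, 1],   # prestige employer
                       [0.5, 0]])  # nontraditional background
scores = model.predict_proba(candidates)[:, 1]
print(f"prestige candidate score:   {scores[0]:.2f}")
print(f"equal-skill outsider score: {scores[1]:.2f}")
# The model ranks the prestige candidate higher despite identical skill,
# because the training labels encoded that preference.
```

Nothing in the code tells the model to prefer pedigree; it simply learns whatever pattern separated past hires from past rejections, which is exactly the failure mode Valeroso describes.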
Experts say the answer isn’t to abandon AI but to apply the same critical thinking to it as to any other business tool. “You wouldn’t give someone a financial model to approve a budget without understanding its assumptions,” Valeroso says. “The same logic applies. Without investing in training, we outsource judgment, which rarely ends well.”
Employers worried about legal risk or employee pushback may be tempted to stay quiet about how they’re using AI, but Valeroso warns that approach is likely to backfire. “Greater risks arise from silence,” she says. “Silence breeds confusion, leading to mistrust. Once trust is lost, rebuilding it is tough and costly.”
Communicating effectively means explaining, at least in broad strokes, how AI is being used and assuring employees that humans still oversee consequential decisions. “If companies don’t communicate proactively, employees will fill in the gaps, likely assuming worst-case scenarios,” Valeroso cautions.
Haller explains, “Organizations are obliged to implement AI ethically to avoid legal issues, protect their culture, and maintain employee trust.”