A new survey reveals that 60% of managers are now using AI to make critical decisions about their employees, including promotions and terminations. The findings highlight growing concerns about workplace AI implementation, as two-thirds of these managers lack formal AI training and 43% have already replaced human roles with AI technology.
Key findings: The Resume Builder survey of 1,342 US managers shows widespread AI adoption in human resources decisions across multiple areas.
- 78% use AI to determine salary raises, while 77% rely on it for promotion decisions.
- 66% use AI for layoff decisions and 64% for termination choices.
- More than 20% “frequently let AI make final decisions without human input.”
- 91% reported using AI to assess employee performance, though the survey didn’t clarify what these assessments entail.
The replacement factor: Nearly half of managers surveyed were tasked with evaluating whether AI could replace their direct reports.
- 46% of managers assessed if AI could take over positions on their teams.
- Of those who conducted assessments, 57% found AI could replace a human role.
- 43% went on to replace a human position with AI.
- The survey didn’t specify which types of positions were being replaced.
Popular tools: ChatGPT dominates AI usage among managers, followed by Microsoft’s enterprise offerings.
- 53% use ChatGPT most frequently for management decisions.
- 29% prefer Microsoft Copilot.
- 16% use Google’s Gemini.
- The remaining respondents rely on other AI tools.
Training gap concerns: The majority of managers using AI lack proper preparation for such consequential decisions.
- Two-thirds of managers using AI to manage employees have no formal AI training.
- No agreed-upon standards exist for adequate AI training in management contexts.
- The lack of regulation compounds the problem of untrained AI usage in workplace decisions.
What experts are saying: AI researchers warn about the risks of blind AI implementation in sensitive HR functions.
- “While AI can support data-driven insights, it lacks context, empathy, and judgment,” said Resume Builder’s chief career advisor Stacie Haller.
- “Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees.”
- AI expert Hilke Schellmann, author of “The Algorithm,” told ZDNET that “ethical usage of AI in management would need radical transparency for the employees, giving them a voice in the decision-making of what system should be used.”
Regulatory landscape: Current protections for workers remain limited, with New York City leading early efforts to address AI bias.
- New York’s Local Law 144 requires automated employment decision tools to be audited for bias annually.
- The law has been criticized for defining these tools too narrowly, allowing companies to avoid compliance.
- Without explicit worker protections, AI use in personnel decisions remains largely at company discretion.
Privacy implications: The survey raises questions about data handling and employee consent that weren’t fully explored.
- Managers may be sharing performance details, salary information, and sensitive employee data with chatbots.
- It’s unclear how many managers have informed their reports about AI usage in evaluations.
- The Society for Human Resource Management notes employees should have the right to know when AI is being used and be able to opt out where applicable.
How employees can protect themselves: Workers have limited but meaningful options for pushing back against unchecked AI surveillance and decision-making.
- Schellmann recommends workers “band together and work with their unions and write in their bargaining agreements that surveillance technology has to be disclosed.”
- Employees should ask managers for transparency about how AI tools are being used in their evaluations.
- Workers should push for co-decision making with union representatives when surveillance technology is implemented.