1,722 HR professionals. One major survey. The results are more nuanced than the headlines suggest. Here’s what SHRM actually found, and what it means if you’re leading a people team this year.
SHRM’s 2026 State of AI in HR report found that while 92% of CHROs expect more AI adoption, over half of organisations haven’t implemented any AI in HR yet. The biggest use cases are in recruiting and L&D. Governance is a mess at most organisations. And the gap between what executives expect and what front-line HR teams are actually doing is significant. This article breaks down the findings and tells you what to actually do with them.
The Society for Human Resource Management (SHRM) surveyed 1,722 HR professionals between December 5 and 23, 2025, for their State of AI in HR 2026 report. [1] It’s one of the most comprehensive data sets on AI adoption in people functions available right now, and the headline numbers are more sobering than you might expect given all the coverage AI has been getting.
Here’s the core finding: the majority of organisations are not using AI in HR in any meaningful way. More than half (54%) have not adopted any form of AI in their HR function and have no plans to do so in 2026. [1] That’s not a fringe minority. That’s most companies.
At the same time, among organisations that are using AI, the results are concentrated in specific, well-defined areas. It’s not scattered experimentation; it’s targeted adoption. And the gap between what CHROs are expecting to happen and what’s actually being implemented is significant enough to warrant its own section.
When AI is in use, it’s most commonly found in recruiting (27%), HR technology management (21%), learning and development (17%), and employee experience (14%). [1] These four areas account for the bulk of current AI deployment in people teams.
Recruiting makes sense as the lead use case. It’s the part of HR with the highest volume of repetitive tasks, the most data to process, and the clearest efficiency gains. Writing job descriptions, screening CVs against criteria, scheduling interviews, drafting outreach messages. All of these are areas where AI provides genuine time savings without introducing significant risk at the task level.
Learning and development is the other interesting one. AI is being used to personalise training content, generate quiz and assessment materials, and summarise long-form learning resources into digestible formats for busy employees. If you’re in L&D and not using AI to build learning materials yet, you’re operating at a real disadvantage compared to teams that are.
The areas with virtually no AI adoption are telling too. Inclusion and diversity work, C-suite advisory functions, ESG, and ethics functions are all at 2% or below. [1] This partly reflects appropriate caution (you probably shouldn’t be using AI to make decisions about inclusion strategy), but it also reflects a lack of clear use cases and governance frameworks in those areas.
Here’s the tension at the heart of the SHRM findings: 92% of CHROs expect more AI integration in the workforce this year, and 87% expect greater adoption within HR processes. [2] That’s near-universal executive optimism. But more than half of organisations have implemented nothing.
That gap doesn’t resolve on its own. It usually means at least one of three things: the investment and resource allocation hasn’t followed the expectation, HR teams don’t know where to start, or the governance and approval processes are too slow. Often, it’s all three at once.
If you’re a CHRO or HR director, the gap itself is a useful data point. The expectation is set. The tools are available. The question is execution, and the data suggests most organisations are stuck between intention and action.
The SHRM data is a useful counter to the breathless coverage AI gets. Yes, the tools exist. Yes, they’re capable. But the majority of HR teams aren’t using them yet. You’re not behind if you haven’t transformed your entire function. You are behind if you haven’t started experimenting in at least one specific area.
This is where it gets uncomfortable. SHRM asked organisations with AI policies how effective those policies are. Only a quarter said they feel their policies are clear and future-proof. More than half (54%) reported that their policies are too restrictive and too tied to specific currently available tools. Another 23% said their policies are too broad to be useful. [1]
So you’ve got organisations that either have no AI policy, have a policy that’s already out of date because it was written for last year’s tools, or have something so vague it doesn’t actually guide behaviour. That’s a governance gap that creates real risk, particularly in HR, where AI decisions can touch employment law, privacy, and discrimination.
What does a better AI governance framework for HR look like? At minimum, it should cover: which use cases are approved for AI assistance, which decisions must remain human-made, how AI-assisted outputs are reviewed before acting on them, how employee data is handled, and who’s accountable when something goes wrong. That’s not a policy document that takes months to write. Most of it can be a two-page internal memo. The point is to have it.
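To make the checklist above concrete, here is a minimal, illustrative sketch of that two-page memo expressed as structured data an HR operations team could keep in version control. The field names and example entries are hypothetical, not from the SHRM report; the point is only that each of the five areas has an explicit answer.

```python
# Illustrative stub of a basic AI governance policy for HR.
# Field names and entries are hypothetical examples, not a standard.
AI_GOVERNANCE_POLICY = {
    "approved_use_cases": [
        "drafting job descriptions",
        "summarising anonymised survey feedback",
    ],
    "human_only_decisions": [
        "final hiring decisions",
        "performance ratings",
    ],
    "review_process": "a named reviewer checks every AI-assisted "
                      "output before anyone acts on it",
    "data_handling": "no identifiable employee data in external tools",
    "accountable_owner": "HR Director",  # who answers when something goes wrong
}

# Sanity check: all five areas named in the article are covered.
required = {"approved_use_cases", "human_only_decisions",
            "review_process", "data_handling", "accountable_owner"}
assert required <= AI_GOVERNANCE_POLICY.keys()
```

Even this level of structure is enough to answer the questions that matter when something goes wrong: was the use case approved, who reviewed the output, and who owns the decision.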
If you’re in an organisation that’s in the 54% with no AI implementation, here’s the honest advice: don’t try to do everything at once. Pick one use case, the one with the most repetitive time cost and the lowest risk, and start there.
For most HR teams, that’s writing. Job descriptions, interview question banks, offer letter templates, onboarding documents, training materials, manager communication guides. All of these are high-volume, time-consuming writing tasks where AI can do a strong first draft in seconds. You review, you edit, you publish. That alone can save a medium-sized HR team several hours a week.
The second easiest starting point is summarisation. Performance review summaries, exit interview transcripts, engagement survey free-text responses. Pasting these into a tool like Claude or ChatGPT, with names and identifying details removed first, and asking for a structured summary of themes is exactly the kind of task these tools handle reliably. It doesn’t replace your analysis; it gives you a starting point.
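As a concrete illustration of that summarisation workflow, here is a short Python sketch that assembles anonymised free-text responses into a single structured-summary prompt you could paste into a general-purpose AI tool. The function name and prompt wording are illustrative assumptions, not part of the SHRM report or any particular product.

```python
def build_summary_prompt(responses):
    """Assemble anonymised free-text survey responses into one
    structured-summary prompt for a general-purpose AI assistant.

    `responses` is a list of strings; names and identifying details
    should already be removed before this step.
    """
    numbered = "\n".join(
        f"{i}. {text.strip()}" for i, text in enumerate(responses, start=1)
    )
    return (
        "Summarise the recurring themes in these survey responses.\n"
        "Group them under 3-5 headings, note roughly how many responses\n"
        "support each theme, and flag anything that needs HR follow-up.\n\n"
        f"Responses:\n{numbered}"
    )

# Example usage with placeholder responses
prompt = build_summary_prompt([
    "More flexibility around remote work would help.",
    "Onboarding felt rushed; I didn't know who to ask for help.",
])
print(prompt)
```

Keeping the anonymisation step before the prompt-building step, rather than inside the AI tool, is the design choice that matters here: the sensitive data never leaves your systems.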
For a broader framework on how to structure AI into your work processes, the guide on building an AI workflow for 2026 covers how to think about integration in practical terms.
Since recruiting is the most mature AI use case in HR, it’s worth going deeper on what the leading teams are doing. Gartner identified the AI revolution and cost pressures as the two main forces shaping talent acquisition trends in 2026, with AI being applied at every stage of the funnel. [3]
The most common applications in progressive recruiting teams right now mirror the funnel stages mentioned earlier: drafting job descriptions and candidate outreach, screening CVs against defined criteria, scheduling interviews, and generating interview question banks.
Korn Ferry’s 2026 talent acquisition research notes that 73% of talent acquisition leaders say critical thinking and problem-solving is the most important skill they need in their teams this year. [4] That’s partly a response to AI: as tools handle more of the volume work, human judgment and strategic thinking become more valuable, not less.
There are areas where HR teams are either already making mistakes or where the risk of mistakes is high enough that it’s worth naming clearly.
Fully automated hiring decisions. AI screening tools that make or heavily influence hiring decisions without human review create serious legal and ethical risk around discrimination. The tools aren’t accurate enough, and the regulatory landscape isn’t settled. A human should always make the final call.
Using generic AI tools for sensitive employee conversations. A manager shouldn’t be pasting an employee’s performance concerns into a public AI tool to generate a conversation script. The privacy implications are significant, and it’s the kind of thing that erodes trust in HR if it comes to light.
Skipping governance because it’s slower. The temptation when a useful tool exists is to just start using it and figure out the rules later. In most functions that’s manageable. In HR, where you’re dealing with employee data and employment decisions, the rules matter. A couple of hours building a basic framework before you start is worth weeks of remediation if something goes wrong.
If you want to understand how AI capabilities are actually structured and what they can and can’t do reliably, the guide on what AI agents actually are is a useful grounding read before you go further with any automation.
How many HR teams are using AI in 2026?
According to SHRM’s 2026 State of AI in HR report, which surveyed 1,722 HR professionals, more than half of organisations (54%) have not adopted any form of AI in HR and have no plans to do so in 2026. AI adoption is concentrated in specific practice areas, most commonly recruiting, HR technology, and learning and development.
What HR tasks is AI most used for?
According to SHRM, AI is used most frequently in recruiting (27% of organisations), HR technology (21%), learning and development (17%), and employee experience (14%). It is least common in inclusion and diversity, C-suite relations, and ESG or compliance functions.
What do CHROs think about AI in 2026?
92% of CHROs surveyed by SHRM anticipate that AI will be further integrated into the workforce in 2026, and 87% forecast greater adoption of AI within HR processes specifically. Despite this optimism, most organisations are still in early stages of adoption, with significant gaps between executive expectation and front-line implementation.
What are the biggest AI risks in HR?
The main risks flagged in the 2026 SHRM report include unclear or outdated AI governance policies, potential bias in AI-assisted hiring decisions, privacy concerns with employee data, and over-reliance on AI tools that have not been validated for specific HR use cases. SHRM recommends that HR teams build explicit governance frameworks before scaling AI adoption.
How should HR teams start using AI practically?
Start with the tasks that are high-volume and low-risk: writing job descriptions, drafting interview questions, summarising feedback, and creating onboarding documents. These are areas where AI saves significant time without introducing major risk. Avoid using AI for final hiring decisions or performance evaluations until your organisation has clear governance in place.
This article draws primarily on the SHRM State of AI in HR 2026 report, which surveyed 1,722 HR professionals in December 2025. All data cited is sourced directly from named research organisations. This guide is written for HR managers, CHROs, and people leaders who are non-technical and want a clear-eyed view of where AI adoption actually stands.
Sources