AI Literacy · Enterprise AI

McKinsey Now Has 25,000 AI Agents. Here’s What That Means for the Rest of Us.

In 18 months, McKinsey went from 3,000 to 25,000 AI agents working alongside 40,000 humans. They’ve added AI to job interviews. They’re tying fees to outcomes. This is the preview.


By Sana Mian, Co-Founder of Future Factors AI

25,000 AI agents at McKinsey
Growth from 3,000 in 18 months
40,000 human employees
2026 target: agent parity

TL;DR

McKinsey CEO Bob Sternfels confirmed the firm went from 3,000 to 25,000 AI agents in 18 months, with a target of matching its 40,000 human headcount before end of 2026. They’ve added an AI collaboration stage to job interviews. They’re moving to outcomes-based fees. None of this is unique to McKinsey. It’s a blueprint that any knowledge work business can adapt. [1,2]

Eighteen months ago, McKinsey had 3,000 AI agents and 40,000 human consultants. The humans outnumbered the agents more than 13 to 1. Today, McKinsey CEO Bob Sternfels says the firm has 25,000 agents, and the stated goal is to reach parity with the human headcount before 2026 is out. [1]

That’s not incremental. That’s a fundamental reorganization of how one of the world’s most influential professional services firms does its work. And if you think this is just a McKinsey story, you’re looking at it wrong.

The numbers that should get your attention

Let’s be concrete about what “25,000 AI agents” actually means. These aren’t chatbots sitting in a corner waiting to be asked a question. McKinsey’s agents, powered largely by an internal platform called Lilli, handle research, document synthesis, comparative analysis, and first drafts of client reports. [2,3] They run continuously. They pull from proprietary databases. They produce structured outputs that human consultants then review, refine, and present.

The growth rate is what’s striking. McKinsey went from 3,000 agents 18 months ago to 20,000 agents by January 2026, then to 25,000 by the time of Sternfels’ most recent Harvard Business Review interview. [1,4] That’s more than an eightfold increase in a year and a half, at one of the most cautious, reputation-sensitive firms on the planet.

Worth noting: McKinsey’s agent count isn’t the same as replacing humans. They’ve been careful to frame this as augmenting 40,000 human consultants, not replacing them. But the skill profile they’re hiring for is already shifting. More on that below.

What Lilli actually does day to day

Lilli is McKinsey’s internal AI platform. Think of it as what happens when you give every consultant a brilliant, always-available research assistant that has read every client report, case study, and industry analysis the firm has ever produced, plus live access to external research databases.

In practice, Lilli handles the front-end grunt work of knowledge consulting: reading and synthesizing large volumes of documents, pulling relevant precedents from past client work, running comparative analyses across industries, and producing structured first drafts. [2] The human consultant’s role then shifts toward judgment: what questions to ask, which findings matter, how to frame a recommendation for a specific client’s context.

This is not magic. It’s a very sophisticated version of what you can do right now with ChatGPT’s Agent Mode or Claude’s Projects feature. The difference is scale, proprietary data, and institutional tuning. The underlying principle is identical: use AI for the information-intensive first pass, use humans for the contextual, relational, judgment-intensive layers.

The new AI job interview stage

This is the part that caught a lot of people off guard. McKinsey has added a stage to its graduate recruitment process that specifically tests candidates’ ability to work with Lilli. Candidates are given exercises that resemble real client scenarios and asked to complete them using the AI tool. [3]

What McKinsey says it’s measuring: reasoning and communication, not specialist AI knowledge. Fortune reported that this shift has made liberal arts majors suddenly relevant again, because the skill that matters most isn’t coding or prompt engineering. It’s the ability to think clearly, ask the right questions, direct AI toward a useful output, and critically evaluate what comes back. [5]

That’s a significant signal for anyone hiring or being hired in the next two to three years. The firms setting the hiring standard are no longer testing for information retrieval or raw analytical horsepower. They’re testing for the ability to collaborate with AI effectively.

What this means for your team (not McKinsey)

Here’s the translation. McKinsey has 40,000 people and a proprietary AI platform with 18 months of development behind it. You probably don’t. But the underlying logic scales to any size:

  1. Identify your repetitive knowledge tasks. What does your team spend time on that is information-heavy but not judgment-heavy? Research synthesis, first-draft reports, data compilation, meeting prep, email drafting. These are the agent-addressable tasks.
  2. Deploy AI for first-pass work, not final output. McKinsey doesn’t publish Lilli’s drafts directly to clients. Humans review, refine, and apply contextual judgment. The same should be true for your team. AI handles the time-consuming starting point. Humans handle the parts that require relationships, nuance, and accountability.
  3. Measure the shift in where your team’s time goes. If AI is working, your team should be spending less time on information gathering and more time on interpretation and decision-making. If the ratio hasn’t changed, the integration isn’t working yet.
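Point 3 can be made concrete with a simple before/after time audit. Here’s a minimal sketch; the task categories and hours are illustrative numbers, not McKinsey data:

```python
# Rough time audit: what share of the week goes to information
# gathering versus interpretation, before and after introducing AI.
def gathering_share(hours_by_task):
    """Fraction of total hours spent on information-gathering tasks."""
    gathering = {"research", "data_compilation", "first_drafts"}
    total = sum(hours_by_task.values())
    spent = sum(h for task, h in hours_by_task.items() if task in gathering)
    return spent / total

before = {"research": 12, "data_compilation": 8, "first_drafts": 10,
          "interpretation": 6, "client_meetings": 4}
after = {"research": 4, "data_compilation": 2, "first_drafts": 4,
         "interpretation": 18, "client_meetings": 12}

print(f"Before AI: {gathering_share(before):.0%} of time on gathering")
print(f"After AI:  {gathering_share(after):.0%} of time on gathering")
# If these two numbers look the same, the integration isn't working yet.
```

A spreadsheet does the same job; the point is to track the ratio, not to build tooling.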

A 10-person team can run this playbook with ChatGPT Business or Claude for Work, a shared project space, and consistent prompt templates. It doesn’t require a custom platform. It requires a change in how work gets started.
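As a sketch of what “consistent prompt templates” can look like in practice (the template text and field names below are illustrative assumptions, not McKinsey’s Lilli prompts):

```python
# A shared first-pass prompt template: AI drafts, a human reviews.
SYNTHESIS_TEMPLATE = """You are preparing a first-draft research synthesis.
Client context: {context}
Source material: {sources}
Produce: key findings, open questions, and a draft recommendation.
Flag anything you are uncertain about for human review."""

def build_first_pass_prompt(context, sources):
    """Fill the shared template. The filled prompt goes to ChatGPT or
    Claude; the model's draft then goes to a human reviewer, never
    straight to a client."""
    return SYNTHESIS_TEMPLATE.format(context=context,
                                     sources="; ".join(sources))

prompt = build_first_pass_prompt(
    context="Mid-size retailer evaluating loyalty-program options",
    sources=["2025 loyalty benchmark report", "internal sales data summary"],
)
print(prompt)
```

Keeping templates like this in a shared project space is what makes the quality of first-pass output consistent across a team, rather than dependent on whoever wrote the prompt that day.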

The business model shift no one talks about enough

McKinsey’s CEO has also confirmed that the firm is migrating away from pure fee-for-service consulting toward an outcomes-based model. Fees are increasingly tied to the measurable impact of their work, not the hours delivered. [4]

This matters because it reflects a broader shift that AI is accelerating: when AI can handle volume and speed at near-zero marginal cost, the value is no longer in the delivery of information or analysis. It’s in the judgment, the relationship, and the accountability for results. Firms that charge by the hour for knowledge work are already facing pressure. The ones that charge for outcomes have a more durable model.

If you’re in consulting, professional services, or any knowledge-intensive business, this is worth thinking about now. Not because you need to match McKinsey’s AI infrastructure, but because the question of where you create genuine value (versus where AI can replicate it) is one that clients will start asking more directly.

Three questions to answer this week

Rather than telling you to “embrace AI” in a vague and useless way, here are three specific questions worth sitting with:

  1. Which 3 tasks in your team’s week are information-intensive but not judgment-intensive? Research compilation, data entry, first-draft documents, meeting summaries. These are your starting points for AI delegation.
  2. Are you testing AI ability in your hiring? You don’t need to run a Lilli-style assessment. But asking candidates to complete a work sample using an AI tool (with full transparency that this is the exercise) tells you something important about how they’ll actually function in an AI-augmented environment.
  3. What do your clients pay you for that AI can’t replicate? Your judgment, your relationships, your accountability, your network. These are the parts of your value proposition to protect and emphasize. Everything else is worth automating.

McKinsey’s 25,000 agents aren’t the point. The point is that one of the world’s most conservative professional services firms moved that fast, that deliberately. The question isn’t whether this is coming for your industry. It already has.

Frequently Asked Questions

What is Lilli, McKinsey’s internal AI tool?

Lilli is McKinsey’s internal AI platform that powers the majority of its AI agents. It handles research, document synthesis, comparative analysis, and first drafts of client reports. Candidates applying to McKinsey are now tested on their ability to work with it.

How many AI agents does McKinsey have in 2026?

As of early 2026, McKinsey has approximately 25,000 AI agents, up from 20,000 in January 2026 and just 3,000 eighteen months prior. The stated goal is to reach parity with its 40,000 human employees before end of 2026.

Is McKinsey replacing staff with AI agents?

McKinsey has not announced staff cuts tied to its AI agent deployment. The agents are framed as augmenting human consultants. However, the firm’s hiring criteria are shifting toward candidates who can direct AI effectively, which signals a change in the skills it values.

What does McKinsey’s AI job interview test?

The AI interview stage asks candidates to complete applied exercises using Lilli, McKinsey’s internal AI tool. The exercises resemble real client scenarios. McKinsey says it’s measuring reasoning and communication ability, not specialist AI knowledge.

Should other businesses follow McKinsey’s AI agent approach?

The principle scales to any size. Identify repetitive knowledge tasks, deploy AI to handle them first-pass, and free humans for higher-judgment work. A 10-person team can apply the same logic using ChatGPT, Claude, or similar tools without building custom infrastructure.

Sources

  [1] BusinessHonor. McKinsey Integrates 25,000 AI Agents into Global Workforce. January 2026.
  [2] Final Round AI. 20,000 McKinsey Workforce is Actually AI Agents. 2026.
  [3] HRD Asia. McKinsey trials AI-led job interviews as AI agents reshape its workforce. 2026.
  [4] HR Grapevine. McKinsey goes all-in on AI with interview testing, workforce of 20,000 agents. January 2026.
  [5] Fortune. How to get hired at McKinsey: AI tools, liberal arts, creativity. January 2026.
Sana Mian – Co-Founder, Future Factors AI

Sana is an AI educator and learning designer specialising in making complex ideas stick for non-technical professionals. She has trained 2,000+ learners across corporate teams, bootcamps, and keynote stages. Future Factors offers AI Bootcamps, Corporate Workshops, and Speaking & Consulting for businesses ready to adopt AI without the overwhelm.

More about Sana →
