
AI Is Now Part of the Job Interview: What McKinsey’s Hiring Test Means for You

One of the world’s most prestigious consulting firms just added an AI test to its final-round interviews. That’s not a fluke. Here’s what it means for your career, and what to do about it.


By Sana Mian, Co-Founder of Future Factors AI

  • 56%: AI skills wage premium in 2026
  • 20,000: AI agents at McKinsey
  • 1.5M: hours McKinsey saved with AI
  • Jan 2026: when AI interviews launched
TL;DR

McKinsey added an AI test to final-round interviews in January 2026, requiring candidates to work alongside its AI tool, Lilli. The skills being assessed aren’t technical: they’re judgment, critical thinking, and the ability to improve AI output. Workers with AI fluency now earn 56% more than peers without it. If you’re not building these skills actively, you’re falling behind in the job market right now.

What McKinsey is actually doing (and why the world noticed)

In January 2026, McKinsey announced something that sent ripples through the professional world: a new component in its final-round interviews, where candidates must collaborate with Lilli, the firm’s proprietary AI platform, to complete a structured problem-solving exercise. [1]

Let’s be clear about what this is. It’s not a technical coding test. There are no algorithms to write, no software to configure. What McKinsey is doing is putting candidates in a room (virtually or physically) with an AI tool and watching how they use it. Can you give it good instructions? Can you look at what it produces and tell whether it’s actually right? Can you take the output, critique it, and turn it into something a client would trust?

Those are professional skills. And until now, most hiring processes didn’t have a systematic way to test for them.

McKinsey CEO Bob Sternfels framed it clearly: the firm isn’t looking for people who defer to AI or people who ignore it. They want people who can work with it the way a sharp analyst works with a junior colleague: trusting the output up to a point, verifying where it matters, and applying judgment that the AI genuinely can’t replicate. [3]

What the McKinsey AI interview looks like in practice: Candidates receive a scenario, prompt Lilli to help analyze it, review the AI’s output, and then produce a structured recommendation. Evaluators focus on whether the candidate questioned the AI’s assumptions, caught any errors, and communicated the final answer clearly. No coding. No technical background required.

It’s not about being technical (here’s what they actually test)

Here’s the thing that most coverage of this story misses: McKinsey isn’t testing whether you can build AI tools. They’re testing whether you can use them like a senior professional, not like a curious beginner.

The four skills they assess are: [2]

1. Judgment

Can you look at what the AI produced and decide whether it’s good enough, whether it needs work, or whether it’s fundamentally off? This is about calibrated skepticism, not blind trust or blanket rejection.

2. Structure

Can you give the AI a well-formed prompt that produces useful output on the first or second try? Vague instructions produce vague results. The ability to structure a request clearly is a skill in itself.

3. Iteration

When the AI’s first response isn’t quite right, can you diagnose what went wrong and push it toward better output? This is the difference between people who get useful results from AI and people who give up after one try.

4. Communication

Can you take whatever the AI produced and shape it into something clear, accurate, and appropriate for the actual audience? This is still a fundamentally human skill, and it’s not going anywhere.

Notice what’s absent from that list. No mention of model architectures, token limits, fine-tuning, or API calls. If you can do those four things, you can pass this interview. And you can learn them without a single technical course.

The practical test you can try today: Open ChatGPT or Claude, give it a real work task you’re actually dealing with, and evaluate the output as if you were going to send it to your most demanding colleague. What’s right? What’s wrong? What would you change before trusting it? That’s the skill McKinsey is hiring for.

The 20,000-agent workforce behind the decision

McKinsey’s decision to test AI skills in hiring didn’t come out of nowhere. It’s the natural consequence of an internal transformation that’s already well underway. [3]

Sternfels revealed in early 2026 that McKinsey now runs a workforce of approximately 20,000 AI agents alongside its 40,000 human employees. These agents handle tasks like initial research synthesis, document drafting, data structuring, and client communication prep. They don’t replace the humans. They support them, at scale, around the clock.

In another 18 months, Sternfels expects every single McKinsey employee to be enabled by at least one dedicated AI agent. That’s not a prediction about some distant future. That’s a transition that’s already in motion, and the hiring criteria are changing to reflect it.

McKinsey’s AI by the numbers

  • 20,000 AI agents running alongside 40,000 human employees [3]
  • 1.5 million hours saved using AI tools across the firm [4]
  • Every employee expected to be AI-agent-enabled within 18 months
  • AI interview now part of final-round assessments for US graduate candidates

Why does this matter for people who don’t work at McKinsey? Because McKinsey often signals where the broader professional world is heading, particularly in consulting, finance, and corporate strategy. When a firm of their status formalizes AI fluency as a hiring criterion, others follow. Quickly.

The 56% wage premium: what the data actually says

Let’s talk about money for a moment, because the numbers here are striking. Workers with documented AI skills now command a 56% wage premium over peers without those skills. That’s up from 25% just one year ago. [5]

This isn’t just for engineers and data scientists. The premium shows up across non-technical roles in marketing, HR, operations, and consulting. The pattern is consistent: people who can use AI tools to do their job better are worth meaningfully more to employers than people who can’t.

That 56% figure also suggests we’re still early in the adoption curve. When AI fluency becomes universally expected (the same way basic spreadsheet skills became universal in the 1990s), the premium will compress. Right now, it exists because the supply of AI-capable professionals is still limited relative to demand. That window won’t stay open forever.

The analogy that might sharpen this: In the mid-1990s, being proficient in Excel was a genuine competitive advantage in most professional roles. By 2005, it was a baseline expectation. AI fluency is at the Excel-in-1995 moment right now. The people who learn it now will benefit from the premium while it lasts and won’t be scrambling when it becomes table stakes.

How this trend is spreading fast

McKinsey made headlines because they’re a brand-name employer with a formal, documented hiring change. But the underlying shift is happening at companies across every sector, just less visibly.

Job postings requiring AI skills grew 7.5% in 2026, even as total job postings fell by more than 11%. [5] That’s not a coincidence. Companies are rebuilding roles around AI capability, and the roles that don’t require it are shrinking or disappearing.

In practical terms, this shows up in a few ways you might already be noticing:

  • Job descriptions that previously listed “proficiency in Microsoft Office” now list specific AI tools (ChatGPT, Copilot, Gemini, Midjourney)
  • Interview questions like “tell me about a time you used AI to solve a problem” are becoming standard in sectors like consulting, marketing, and finance
  • Performance reviews at some firms now include an “AI adoption” component
  • Liberal arts and humanities graduates are being reconsidered by firms like McKinsey, precisely because their critical thinking and communication skills complement AI tools well [1]

That last point is worth dwelling on. McKinsey’s head of recruiting noted that candidates with strong reasoning, writing, and analytical judgment are actually well-positioned for AI-augmented roles, even if they have no technical background. The technical part of working with AI is increasingly easy. The judgment part is still hard.

The AI skills you actually need before your next interview

I’ve taught more than 2,000 non-technical professionals how to work with AI, and the pattern I see repeatedly is this: people overthink the technical side and underprepare the practical side. You don’t need to know how GPT-4 works. You do need to be able to use it competently on real tasks.

Here’s what I’d prioritize:

1. Build a daily AI habit now, not before your next interview

Use ChatGPT or Claude for at least three real work tasks per week. Drafting emails, summarizing documents, researching a topic, outlining a report. The goal is building fluency through repetition, not theory. You can’t fake this in an interview if you’ve never done it.

2. Practice structured prompting

The Role + Task + Format structure works for most professional tasks. Example: “You are a senior consultant. Summarize the following research findings in 3 bullet points suitable for a C-suite audience. [paste text].” Practice this until it’s instinctive. For a deeper guide, see our piece on building AI workflows.
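If it helps to see the structure laid out explicitly, here is a minimal sketch in Python of the Role + Task + Format idea as a reusable template. The function name and fields are illustrative, not part of any tool’s actual API; the point is simply that the three parts are filled in deliberately rather than typed ad hoc.

```python
def build_prompt(role: str, task: str, output_format: str, source_text: str) -> str:
    """Assemble a structured prompt: who the AI should act as (role),
    what it should do (task), and what shape the answer should take (format)."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Format: {output_format}\n\n"
        f"{source_text}"
    )

# Example: the senior-consultant summary prompt, built from its three parts.
prompt = build_prompt(
    role="a senior consultant",
    task="summarize the following research findings for a C-suite audience",
    output_format="exactly 3 bullet points, one sentence each",
    source_text="[paste your research text here]",
)
print(prompt)
```

You could keep a handful of these templates for your most common tasks (summaries, emails, research briefs) and paste the filled-in result into ChatGPT or Claude directly.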

3. Develop your critical review process

Every time AI produces something for you, go through it with the same eye you’d use on work from a smart but junior colleague. What’s accurate? What needs verification? What’s missing? This habit is exactly what interviewers are watching for.

4. Keep an AI wins log

Document concrete examples of how AI helped you work better: saved two hours on a research task, produced a first draft that needed minimal editing, surfaced an insight you’d have missed. These are your interview stories. You need at least three before your next application.

5. Understand the limits as well as the capabilities

Knowing when NOT to use AI, and being able to explain why, is itself a signal of sophistication. Interviewers are more impressed by “I tested it for X and it wasn’t reliable for that specific task, so I verified it manually” than by breathless claims about AI doing everything. Our Anti-Hallucination Toolkit is a good place to start on this.

The skills you’re building here are transferable across any AI tool, any employer, any industry. They’re also skills that compound: the earlier you start, the further ahead you get.

What you can put on your CV right now: prompt engineering, AI-assisted research, ChatGPT / Claude, AI output review, Microsoft Copilot, workflow automation with AI.

Be specific rather than generic. “Used Claude to reduce first-draft time for client reports by 60%” is far more useful than “familiar with AI tools.”

Your Monday morning action plan

Let’s make this concrete. If you read this on a Sunday night and want to do something useful with it tomorrow, here’s the specific plan:

  1. Pick one real task from your work week and complete it with AI assistance. A report, an email, a research summary, anything substantial. Don’t pick something trivial.
  2. Write one paragraph describing what you asked the AI to do, what it produced, and how you improved the output. This is the start of your AI wins log.
  3. Review your CV or LinkedIn for any mention of AI. If there’s none, add at least one concrete example this week.
  4. Practice one structured prompt using the Role + Task + Format structure on a real problem you’re currently working on.
  5. Read one article about a limitation or failure mode of AI tools (our why AI hallucinates guide is a solid start). Understanding the risks signals professional maturity, not fear of the technology.

One honest caveat: Building AI fluency isn’t a one-week project. The professionals who stand out in AI-integrated hiring processes in 2026 are the ones who started using these tools for real work six months ago. But today is a better starting point than next month. Start with one task. Build from there.

Frequently asked questions

What is McKinsey’s AI interview?

McKinsey’s AI interview is a new final-round assessment where candidates must collaborate with Lilli, McKinsey’s proprietary AI platform, to complete a structured problem-solving exercise. Candidates are evaluated on their judgment, reasoning, and ability to review and improve AI-generated output, not on technical AI knowledge. It was introduced for US graduate candidates in January 2026 and is expected to expand.

Do I need to be technical to pass an AI job interview?

No. McKinsey and most companies testing AI skills are assessing judgment, critical thinking, and communication, not coding or technical expertise. The ability to give AI good instructions, evaluate its output skeptically, and adapt results for a specific situation matters far more than any technical knowledge. Strong liberal arts and humanities graduates are reportedly performing well in McKinsey’s new assessment.

Which companies besides McKinsey are testing AI skills in hiring?

McKinsey is the most visible and documented example, but the trend is spreading across consulting, finance, marketing, and operations roles broadly. Job postings requiring AI skills grew 7.5% in 2026 even as total postings fell 11.3%. Most companies are incorporating AI task components into interviews or weighting AI experience heavily in job descriptions, even if they haven’t formalised it as a named assessment.

How can I build AI skills for job interviews?

Focus on practical use, not theory. Use ChatGPT, Claude, or Gemini daily for real work tasks: drafting emails, summarizing documents, analyzing data, writing reports. Practice giving structured prompts using the Role + Task + Format approach, evaluate AI output critically as you would from a junior colleague, and keep a log of specific examples where AI improved your work. Those examples become your interview stories.

How much does AI fluency affect salary in 2026?

Workers with documented AI skills command a 56% wage premium in 2026, up from 25% the previous year. This premium applies across non-technical roles including marketing, HR, consulting, and operations, not just engineering positions. The premium exists because demand for AI-capable professionals currently outpaces supply, but experts expect it to compress as fluency becomes a universal baseline expectation.

About this article

This article was written by Sana Mian, Co-Founder of Future Factors AI. Sana has trained 2,000+ non-technical professionals in practical AI skills across corporate workshops, bootcamps, and online courses. Future Factors AI helps managers, executives, and business professionals use AI confidently and effectively in their work. Explore our AI courses and bootcamps.

Sources

  1. Fortune. McKinsey challenges graduates to master AI tools as it shifts hiring hunt toward liberal arts majors. January 2026.
  2. Management Consulted. McKinsey AI Interview Being Piloted As A Part of Final Round Interviews. 2026.
  3. HR Grapevine USA. McKinsey goes all in on AI with interview testing, workforce of 20,000 agents. January 2026.
  4. Inc. McKinsey Says It Saved 1.5 Million Hours With AI. 2026.
  5. McKinsey.org. The Human Skills You’ll Need to Thrive in 2026’s AI-Driven Workplace. 2026.
  6. CFO.com. McKinsey’s New AI-Powered Interview: The Future of Consulting Recruiting?. 2026.
Sana Mian

Co-Founder, Future Factors AI

Sana is a co-founder of Future Factors AI and has trained 2,000+ non-technical professionals in practical AI skills. She runs AI bootcamps, corporate workshops, and online courses designed for people in HR, marketing, finance, and consulting roles. Future Factors offers AI courses and bootcamps for non-technical teams.
