One of the world’s most prestigious consulting firms just added an AI test to its final-round interviews. That’s not a fluke. Here’s what it means for your career, and what to do about it.
McKinsey added an AI test to final-round interviews in January 2026, requiring candidates to work alongside its AI tool, Lilli. The skills being assessed aren’t technical: they’re judgment, critical thinking, and the ability to improve AI output. Workers with AI fluency now earn 56% more than peers without it. If you’re not building these skills actively, you’re falling behind in the job market right now.
In January 2026, McKinsey announced something that sent ripples through the professional world: a new component in its final-round interviews, where candidates must collaborate with Lilli, the firm’s proprietary AI platform, to complete a structured problem-solving exercise. [1]
Let’s be clear about what this is. It’s not a technical coding test. There are no algorithms to write, no software to configure. What McKinsey is doing is putting candidates in a room (virtually or physically) with an AI tool and watching how they use it. Can you give it good instructions? Can you look at what it produces and tell whether it’s actually right? Can you take the output, critique it, and turn it into something a client would trust?
Those are professional skills. And until now, most hiring processes didn’t have a systematic way to test for them.
McKinsey CEO Bob Sternfels framed it clearly: the firm isn’t looking for people who defer to AI or people who ignore it. They want people who can work with it the way a sharp analyst works with a junior colleague: trusting the output up to a point, verifying where it matters, and applying judgment that the AI genuinely can’t replicate. [3]
Here’s the thing that most coverage of this story misses: McKinsey isn’t testing whether you can build AI tools. They’re testing whether you can use them like a senior professional, not like a curious beginner.
The four skills they assess are: [2]
1. Evaluating output. Can you look at what the AI produced and decide whether it’s good enough, whether it needs work, or whether it’s fundamentally off? This is about calibrated skepticism, not blind trust or blanket rejection.

2. Giving clear instructions. Can you give the AI a well-formed prompt that produces useful output on the first or second try? Vague instructions produce vague results. The ability to structure a request clearly is a skill in itself.

3. Iterating toward better results. When the AI’s first response isn’t quite right, can you diagnose what went wrong and push it toward better output? This is the difference between people who get useful results from AI and people who give up after one try.

4. Adapting output for an audience. Can you take whatever the AI produced and shape it into something clear, accurate, and appropriate for the actual audience? This is still a fundamentally human skill, and it’s not going anywhere.
Notice what’s absent from that list. No mention of model architectures, token limits, fine-tuning, or API calls. If you can do those four things, you can pass this interview. And you can learn them without a single technical course.
McKinsey’s decision to test AI skills in hiring didn’t come out of nowhere. It’s the natural consequence of an internal transformation that’s already well underway. [3]
Sternfels revealed in early 2026 that McKinsey now runs a workforce of approximately 20,000 AI agents alongside its 40,000 human employees. These agents handle tasks like initial research synthesis, document drafting, data structuring, and client communication prep. They don’t replace the humans. They support them, at scale, around the clock.
Within the next 18 months, Sternfels expects every McKinsey employee to be supported by at least one dedicated AI agent. That’s not a prediction about some distant future. It’s a transition already in motion, and the hiring criteria are changing to reflect it.
Why does this matter for people who don’t work at McKinsey? Because McKinsey often signals where the broader professional world is heading, particularly in consulting, finance, and corporate strategy. When a firm of their status formalizes AI fluency as a hiring criterion, others follow. Quickly.
McKinsey made headlines because they’re a brand-name employer with a formal, documented hiring change. But the underlying shift is happening at companies across every sector, just less visibly.
Job postings requiring AI skills grew 7.5% in 2026, even as total job postings fell by more than 11%. [5] That’s not a coincidence. Companies are rebuilding roles around AI capability, and the roles that don’t require it are being consolidated or eliminated.
In practical terms, this shows up in ways you might already be noticing: AI task components appearing in interviews, job descriptions weighting hands-on AI experience heavily, and hiring criteria that favor judgment over technical pedigree.

That last point is worth dwelling on. McKinsey’s head of recruiting noted that candidates with strong reasoning, writing, and analytical judgment are actually well-positioned for AI-augmented roles, even if they have no technical background. The technical part of working with AI is increasingly easy. The judgment part is still hard.
I’ve taught more than 2,000 non-technical professionals how to work with AI, and the pattern I see repeatedly is this: people overthink the technical side and underprepare the practical side. You don’t need to know how GPT-4 works. You do need to be able to use it competently on real tasks.
Here’s what I’d prioritize:
1. Use AI on real work. Use ChatGPT or Claude for at least three real work tasks per week: drafting emails, summarizing documents, researching a topic, outlining a report. The goal is building fluency through repetition, not theory. You can’t fake this in an interview if you’ve never done it.

2. Learn one prompt structure. The Role + Task + Format structure works for most professional tasks. Example: “You are a senior consultant. Summarize the following research findings in 3 bullet points suitable for a C-suite audience. [paste text].” Practice this until it’s instinctive. For a deeper guide, see our piece on building AI workflows.

3. Critique every output. Every time AI produces something for you, go through it with the same eye you’d use on work from a smart but junior colleague. What’s accurate? What needs verification? What’s missing? This habit is exactly what interviewers are watching for.

4. Keep a results log. Document concrete examples of how AI helped you work better: saved two hours on a research task, produced a first draft that needed minimal editing, surfaced an insight you’d have missed. These are your interview stories. You need at least three before your next application.

5. Know the limits. Knowing when NOT to use AI, and being able to explain why, is itself a signal of sophistication. Interviewers are more impressed by “I tested it for X and it wasn’t reliable for that specific task, so I verified it manually” than by breathless claims about AI doing everything. Our Anti-Hallucination Toolkit is a good place to start on this.
The skills you’re building here are transferable across any AI tool, any employer, any industry. They’re also skills that compound: the earlier you start, the further ahead you get.
On your résumé and in interviews, be specific rather than generic. “Used Claude to reduce first-draft time for client reports by 60%” is far more useful than “familiar with AI tools.”
Let’s make this concrete. If you’re reading this on a Sunday night and want to do something useful with it tomorrow, start with the first habit above: pick three real work tasks and run them through ChatGPT or Claude this week, then log what worked.
What is McKinsey’s AI interview?
McKinsey’s AI interview is a new final-round assessment where candidates must collaborate with Lilli, McKinsey’s proprietary AI platform, to complete a structured problem-solving exercise. Candidates are evaluated on their judgment, reasoning, and ability to review and improve AI-generated output, not on technical AI knowledge. It was introduced for US graduate candidates in January 2026 and is expected to expand.
Do I need to be technical to pass an AI job interview?
No. McKinsey and most companies testing AI skills are assessing judgment, critical thinking, and communication, not coding or technical expertise. The ability to give AI good instructions, evaluate its output skeptically, and adapt results for a specific situation matters far more than any technical knowledge. Strong liberal arts and humanities graduates are reportedly performing well in McKinsey’s new assessment.
Which companies besides McKinsey are testing AI skills in hiring?
McKinsey is the most visible and documented example, but the trend is spreading across consulting, finance, marketing, and operations roles. Job postings requiring AI skills grew 7.5% in 2026 even as total postings fell 11.3%. Many companies are incorporating AI task components into interviews or weighting AI experience heavily in job descriptions, even if they haven’t formalized it as a named assessment.
How can I build AI skills for job interviews?
Focus on practical use, not theory. Use ChatGPT, Claude, or Gemini daily for real work tasks: drafting emails, summarizing documents, analyzing data, writing reports. Practice giving structured prompts using the Role + Task + Format approach, evaluate AI output critically as you would from a junior colleague, and keep a log of specific examples where AI improved your work. Those examples become your interview stories.
How much does AI fluency affect salary in 2026?
Workers with documented AI skills command a 56% wage premium in 2026, up from 25% the previous year. This premium applies across non-technical roles including marketing, HR, consulting, and operations, not just engineering positions. The premium exists because demand for AI-capable professionals currently outpaces supply, but experts expect it to compress as fluency becomes a universal baseline expectation.
This article was written by Sana Mian, Co-Founder of Future Factors AI. Sana has trained 2,000+ non-technical professionals in practical AI skills across corporate workshops, bootcamps, and online courses. Future Factors AI helps managers, executives, and business professionals use AI confidently and effectively in their work. Explore our AI courses and bootcamps.
Sources