McKinsey is requiring final-round candidates to collaborate with its AI tool Lilli as part of the hiring process. Here’s what that assessment actually looks like and how to walk in prepared.
TL;DR
McKinsey added an AI-collaboration component to its final-round interviews in early 2026. Candidates are given access to Lilli, McKinsey’s internal AI tool, and assessed on how well they prompt it, interpret its output, and apply professional judgment. The bar isn’t “AI expert.” It’s “competent professional who can work with AI.” A broader rollout across interview cycles is expected for Spring and Summer 2026. The practical preparation: spend 30 minutes a week working through business scenarios with ChatGPT or Claude using a structured prompting approach.
In January 2026, McKinsey CEO Bob Sternfels made two announcements in quick succession. First: the firm now runs a “virtual workforce” of 20,000 AI agents alongside its 40,000 human consultants. [1] Second: final-round job interviews would include an AI-collaboration component where candidates are assessed on their ability to work with Lilli, McKinsey’s internal AI platform. [2]
These weren’t separate stories. They were the same story told twice. McKinsey has restructured how it works around AI. Naturally, it wants to hire people who can function effectively in that environment.
The reaction was swift. LinkedIn posts about “the McKinsey AI interview” circulated across consulting and MBA forums. Recruiting coaches added AI prep to their packages. Business schools quietly updated their career guidance. Because everyone in consulting understood the implication: if McKinsey is doing this, others will follow.
And they will. But let’s start with what McKinsey’s version actually looks like, because the reality is considerably less intimidating than the hype.
Lilli is McKinsey’s proprietary AI platform. It’s built on large language model foundations and integrated with McKinsey’s internal knowledge base: decades of case documents, research reports, client frameworks, and industry analysis. When a consultant asks Lilli a question, it doesn’t just generate a generic response. It draws from proprietary McKinsey knowledge as well as its underlying model’s training. [3]
In practice, Lilli works similarly to how ChatGPT or Claude works: you type a question or task, it responds, you refine, it responds again. The main difference is the proprietary knowledge layer. The interface itself isn’t exotic.
This matters for how you think about preparation. You don’t need to learn Lilli specifically. You need to develop the underlying skill: working with a conversational AI to analyse a complex business situation. That skill is completely transferable from tools you already have access to.
McKinsey’s final round typically includes three separate interviews; the AI component is the third. What follows is based on reports from candidates who’ve been through it. [4]
The key detail: the interviewer is watching how you interact with the AI, not just what you produce. Candidates who prompt once, copy-paste the output, and present it as their own analysis perform poorly. Candidates who treat Lilli as a thinking partner, pushing back on weak analysis, asking follow-up questions, and applying their own judgment to structure a coherent response, perform well.
This is a crucial distinction. The assessment isn’t “can you get AI to write your answer.” It’s “can you collaborate with AI effectively and still demonstrate your own expertise.”
Who this applies to right now: The AI interview is currently being piloted primarily for Business Analyst roles (McKinsey’s entry-level position) in the US market. Candidates at senior levels or outside the US may not encounter it yet, but the broader Spring/Summer 2026 rollout is expected to expand the scope significantly.
McKinsey has been clear that this isn’t a test of advanced prompt engineering. The bar is professional competence, not technical expertise. [4] Here’s what the assessment is actually looking for:
Can you ask clear, specific questions? Vague prompts produce vague outputs. The ability to frame a precise question, give the AI relevant context, and specify the format of the response you need is a real skill. It’s also directly analogous to the skill of running a good client interview or delegating work to a junior team member. Both require you to be specific about what you want.
Can you evaluate AI output critically? AI produces plausible-sounding responses that aren’t always accurate or complete. Can you read an AI output and identify what’s solid, what needs verification, and what’s missing? This is arguably the most important skill. A candidate who takes everything Lilli says at face value and builds their analysis on it will produce a weaker response than one who interrogates the output.
If you want to understand why AI sometimes gets things wrong, our anti-hallucination toolkit covers exactly how to spot and correct those errors in practice.
Can you apply your own judgment? The AI provides information and analysis. You provide the judgment. The strongest candidates use Lilli to accelerate their research and stress-test their thinking, then build a structured, opinionated response that reflects their own expertise. The AI is a tool. The consultant is still responsible for the quality of the answer.
Can you iterate under time pressure? You’re working in a timed environment. Candidates who spend ten minutes crafting the perfect prompt and then panic when the output isn’t what they expected don’t perform well. The skill here is confident, iterative use: try something, assess it quickly, adjust, move on.
The good news is that you can prepare for this using tools you already have access to. ChatGPT, Claude, and Gemini all work on the same underlying principles as Lilli. The proprietary knowledge layer in Lilli doesn’t change the fundamental skill being tested.
Pick a business case scenario (McKinsey publishes examples on its website; there are also hundreds on case interview prep sites). Open ChatGPT or Claude. Work through the case using a three-step loop: prompt, evaluate, refine.
Your goal isn’t to get a perfect answer from the AI. It’s to practise that loop until it feels natural, because the loop itself is the skill being tested.
For business case analysis, this four-part prompt structure works reliably:
Example Prompt Structure for Business Case Analysis

Role: Act as a management consultant analysing this business problem.

Context: [Describe the scenario: company, industry, challenge, relevant facts]

Task: Identify the 3 most likely root causes of [the specific problem]. For each, explain the evidence that would confirm it and the questions I should investigate further.

Format: Use numbered points with a one-sentence summary followed by supporting reasoning.
This structure works because it gives the AI a clear role, full context, a specific task, and a defined output format. It also builds in the expectation that you’ll evaluate and investigate further, which mirrors exactly what a good consultant would do.
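If you find yourself reusing the template across many practice cases, it can help to fill it in systematically rather than retyping it. The sketch below is purely illustrative, not anything McKinsey or Lilli provides: the function name, fields, and the example grocery-chain case are all invented for demonstration.

```python
def build_case_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured case-analysis prompt from the four parts
    described above: role, context, task, and output format."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# Hypothetical example case, invented for illustration.
prompt = build_case_prompt(
    role="Act as a management consultant analysing this business problem.",
    context="A mid-size grocery chain has seen operating margins decline "
            "for two consecutive years while revenue stayed flat.",
    task="Identify the 3 most likely root causes of the margin decline. "
         "For each, explain the evidence that would confirm it and the "
         "questions I should investigate further.",
    output_format="Use numbered points with a one-sentence summary "
                  "followed by supporting reasoning.",
)
print(prompt)
```

Pasting the resulting text into ChatGPT or Claude gives you the same structured starting point every time, which makes it easier to compare how different phrasings of the context or task change the quality of the output.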
After you get an AI response, spend 60 seconds verbally narrating your assessment: “The first point is solid. The second is plausible but I’d want data to confirm it. The third is missing a key consideration.” This verbal habit builds the skill that the interviewer is looking for: demonstrating that you’re using AI as a thinking partner, not a crutch.
The AI interview is a component of a three-interview final round. It doesn’t replace the case interview or the personal experience interview. McKinsey is explicit: master those foundations first, then layer in AI preparation. Don’t spend 80% of your prep time on AI fluency at the expense of case structuring.
93% of recruiters plan to increase AI usage in their hiring processes in 2026. [5] 52% of talent acquisition leaders say they’re planning to add AI agents to their teams. [6] And workers with demonstrable AI skills now command a 56% wage premium over comparable peers without them. [7]
McKinsey is the most visible firm to formalise AI collaboration as an interview criterion, but it won’t be the last. Financial services firms are already testing similar approaches. Technology companies have been evaluating AI tool use in technical interviews for a while. Consulting firms that compete with McKinsey are watching closely.
The pattern is clear enough: hiring for AI fluency is moving from a nice-to-have to a tested requirement. The firms doing it first are the ones signalling where the market is going. By the time AI collaboration assessments are standard across consulting and finance, the candidates who started building the skill now will be years ahead.
We’ve covered the broader implications of this shift in our AI skills jobs piece, which goes into the wage premium data in more detail.
If you’re actively interviewing, or planning to within the next 12 months, here’s the concrete action: spend 30 minutes this week working through a business scenario with ChatGPT or Claude using the prompt structure above. Not as a prep exercise you might do one day. Actually do it this week. Track where you felt confident and where you felt lost. That gap is your preparation target.
If you’re not interviewing but manage a team: pay attention to how you’re assessing AI fluency in your own hiring. McKinsey’s formalisation of this assessment is going to accelerate the conversation across industries. The firms that build AI collaboration skills into their hiring criteria now will attract candidates who are already ahead of the curve.
What is the McKinsey AI interview?
McKinsey added an AI-collaboration component to its final-round interview process in 2026. Candidates are asked to work with Lilli, McKinsey’s internal AI platform, to analyse a business case. The assessment evaluates how well candidates prompt the AI, review its output, and apply professional judgment to produce a structured response.
What is Lilli, McKinsey’s AI tool?
Lilli is McKinsey’s proprietary AI platform built on large language models and integrated with McKinsey’s internal knowledge base. It functions similarly to ChatGPT or Claude, allowing conversational back-and-forth, but draws from McKinsey’s proprietary research and case documents to provide more firm-specific context.
Is the McKinsey AI interview already live?
The AI interview component is currently being piloted for Business Analyst roles in the United States. A broader rollout across other seniority levels and geographies is expected in Spring and Summer 2026. Candidates at other levels or outside the US may not encounter it in every cycle yet.
Do I need advanced prompt engineering skills for the McKinsey AI interview?
No. McKinsey has indicated that the bar is baseline competence, not technical expertise. The assessment focuses on whether you can ask clear questions, interpret AI output critically, and apply judgment to produce a structured professional answer. You don’t need to know what “prompt engineering” means technically to perform well.
Are other companies doing something similar?
McKinsey is the most prominent example, but the trend is broader. 93% of recruiters plan to increase AI usage in hiring in 2026, and firms in consulting, finance, and technology are incorporating AI collaboration assessments into their processes. McKinsey’s AI interview is the leading edge of a wider shift that will likely reach most major employers within the next 12 to 18 months.
Sources