OpenAI crossed $25 billion in annualized revenue. Anthropic is testing new pricing structures. Here is what professionals who rely on these tools daily actually need to know.
OpenAI hit $25 billion in annualized revenue in early 2026, and Anthropic is approaching $19 billion. The AI tool market is maturing fast, which brings real changes: Anthropic is already shifting enterprise customers to usage-based billing, OpenAI is reportedly testing ads in ChatGPT, and there are signs that free tiers will become increasingly restricted. This article explains what these financial shifts mean for professionals who depend on these tools, and what to do about it.
OpenAI crossed $25 billion in annualized run rate in February 2026. [1] To put that in perspective: Salesforce, one of the most successful software companies in history, took about 18 years to reach $25 billion in annual revenue. OpenAI did it in 39 months. [2]
Anthropic is not far behind. The company reached approximately $19 billion in annualized revenue as of early March 2026, representing close to a tenfold increase in a single year. [3]
These are not venture-backed startups burning cash on promises anymore. These are real businesses generating real revenue at scale. And that changes how they will behave toward the people paying for their tools.
The specific number that caught my attention: Claude Code, Anthropic’s coding assistant, crossed $2.5 billion in annualized revenue on its own. That is a single product line generating more revenue than most mid-size software companies in their entirety.
When a tech company reaches this kind of revenue at this kind of speed, a few things tend to happen. The startup-phase generosity starts to narrow. Free tiers become less generous. Enterprise pricing gets more structured. And companies start looking for additional revenue streams.
This is not speculation. We are already seeing it play out.
OpenAI is reportedly preparing for an IPO. [4] Public markets have a different patience level than private investors. A public OpenAI will face pressure to demonstrate sustainable margins, which almost certainly means tighter restrictions on free usage and more aggressive expansion of paid tiers.
Anthropic, meanwhile, has been even more direct. The company has moved away from flat-rate enterprise contracts and toward per-token usage billing, meaning enterprise clients now pay based on how much they actually use the tools. [5] For light users, this can be a saving. For teams that have embedded AI deeply into daily workflows, usage bills can be significant.
What this means in practical terms: If your team is on a flat enterprise AI contract negotiated in 2024 or early 2025, the renewal conversation will likely look different. Usage-based pricing is coming for most enterprise customers, and it rewards teams who are intentional about when and how they use AI rather than using it for every task.
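To make the flat-rate vs usage-based tradeoff concrete, here is a minimal sketch of the comparison a team might run before a renewal. All prices in it are illustrative assumptions, not any vendor's published rates.

```python
# Sketch: compare a flat-rate seat against per-token usage billing.
# All prices below are illustrative assumptions, not published rates.

FLAT_SEAT_PER_MONTH = 60.00   # assumed flat enterprise seat price (USD)
PRICE_PER_1M_INPUT = 3.00     # assumed input-token price (USD per million)
PRICE_PER_1M_OUTPUT = 15.00   # assumed output-token price (USD per million)

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Monthly cost for one user under per-token billing."""
    return (input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT)

# A light user: a handful of short sessions per week.
light = usage_cost(input_tokens=2_000_000, output_tokens=500_000)
# A heavy user: AI embedded in daily document and coding workflows.
heavy = usage_cost(input_tokens=40_000_000, output_tokens=10_000_000)

print(f"light user: ${light:.2f}/mo vs flat ${FLAT_SEAT_PER_MONTH:.2f}")
print(f"heavy user: ${heavy:.2f}/mo vs flat ${FLAT_SEAT_PER_MONTH:.2f}")
```

Under these assumed numbers, the light user comes out well below the flat seat while the heavy user costs several times more, which is exactly why usage-based pricing rewards intentional use.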
Anthropic surprised a lot of people in April 2026 by testing whether Claude Code would remain included in the $20/month Pro plan. [6] The short version: the company was weighing moving Claude Code to a higher tier or a standalone product, reflecting the uncomfortable truth that the most powerful AI features cost significantly more to run than flat subscriptions cover.
This is not unique to Anthropic. OpenAI launched ChatGPT Pro at $200 per month for its most demanding users, and the gap in capability between free, Plus ($20), and Pro has widened significantly. The features that matter most for professional work are increasingly concentrated in the higher tiers.
Two specific shifts are worth watching: the narrowing of free tiers, and the possible arrival of advertising.
Free tiers will probably survive, but in a more restricted form than today. They serve a real purpose: they lower the barrier to adoption and fill the top of the pipeline with future paying customers. Neither OpenAI nor Anthropic has indicated it will eliminate free access entirely.
What we are more likely to see is a gradual narrowing of what free users can actually do: slower model access, lower daily usage caps, no access to the most capable reasoning models. Think of Spotify: its free tier still exists, but the gap between free and paid is noticeably wider than it was five years ago.
For professionals who have been relying on free ChatGPT for work tasks, this is the nudge to honestly evaluate whether a paid subscription is worth it. At $20 a month, ChatGPT Plus almost certainly is if you are using it more than a few times a week for real work.
Advertising is the harder shift to talk about because it has not been confirmed, but the financial math is clear. OpenAI is generating $25 billion a year yet reportedly still not profitable at that scale because of infrastructure costs. [7] One of the most obvious ways to bridge that gap is ad revenue.
Reports in early 2026 suggest OpenAI has been exploring ad models for ChatGPT, potentially in the form of sponsored responses or branded context. Anthropic faces similar pressures.
For professionals, this raises a legitimate concern: if AI answers are influenced by sponsored content, how do you know whether a recommendation for a particular software tool or vendor is driven by your prompt or by a paid relationship? The conversation about measuring AI ROI becomes more complicated if the tools themselves have commercial incentives that are not fully disclosed.
For now, neither company has rolled out advertising. But it is worth watching, and worth forming a view on whether ad-supported AI is something you are comfortable embedding in your professional workflow.
None of this means AI tools are about to become unusable or unaffordable. But it does mean that the freewheeling early-adopter phase, where everything was generous and pricing was almost an afterthought, is ending.
Here is how to get ahead of it:
Evaluate what you actually need. You probably do not need ChatGPT Pro ($200/month) and a Claude Pro subscription ($20/month) and Gemini Advanced and three other tools. Pick one or two that genuinely fit your workflow and go deeper with them. Spreading spend across every tool in the market is the least efficient approach.
Track what your team uses. If you manage a team with AI tool subscriptions, pull usage data before renewal. Usage-based pricing will punish teams that subscribed to enterprise seats but only have a fraction of staff actively using the tools.
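A seat audit like the one described above can be as simple as a short script over the vendor's usage export. The data shape below is hypothetical; adapt the field names to whatever your provider's admin report actually exposes.

```python
# Sketch: flag under-used enterprise seats ahead of a renewal conversation.
# The Seat shape and the 90-day session count are hypothetical; map them
# onto the fields your vendor's usage export actually provides.

from dataclasses import dataclass

@dataclass
class Seat:
    user: str
    sessions_last_90_days: int

def underused(seats: list[Seat], min_sessions: int = 10) -> list[str]:
    """Return users whose activity falls below a chosen threshold."""
    return [s.user for s in seats if s.sessions_last_90_days < min_sessions]

seats = [
    Seat("ana", 120), Seat("ben", 3), Seat("caro", 45), Seat("dev", 0),
]
idle = underused(seats)
print(f"{len(idle)}/{len(seats)} seats under-used: {idle}")
```

Even a rough cut like this turns the renewal discussion from a guess into a number: seats below the threshold are candidates for downgrading or pooling before usage-based billing makes them expensive.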
Document your AI workflows. If your process for a specific task (market analysis, proposal drafting, meeting prep) depends on a specific AI tool, document it clearly enough that you could switch tools in two hours if pricing changed dramatically. This is basic risk management.
Learn what the AI IPO means for you. We covered this in depth in the OpenAI IPO analysis, but the short version is: a public OpenAI will optimize more explicitly for revenue, which means its product decisions will become easier to predict once you read them through the lens of financial incentives.
Here is the thing I keep coming back to when I talk to professionals who have embedded AI deeply into their work: almost none of them have thought seriously about what happens if their preferred tool raises prices significantly, restricts key features, or gets acquired.
At the scale these companies are operating now, acquisitions are not far-fetched. And corporate owners have different priorities than founder-led startups. The features you depend on today may not exist in the same form in 18 months.
The practical mitigation is not to avoid AI tools. It is to build workflows that are portable. Use AI for drafts, summaries, and analysis that you then take into your own systems. Avoid workflows that only work in one specific chat interface or depend on a provider’s specific memory features.
That is not pessimism. It is the same vendor risk assessment you would apply to any critical software tool. AI is just particularly new, so the habit of applying that assessment has not caught up with the rate of adoption.
Honest bottom line: The AI tools you use for work are about to become more like traditional SaaS products: tiered, priced for value capture, and shaped by investor return expectations. The window to explore and experiment freely is not closed, but it is narrowing. The professionals who come out ahead are the ones who learn deeply now and build durable workflows, rather than hopping between tools without establishing real proficiency in any of them.
Frequently Asked Questions
Will ChatGPT Plus still be $20 per month?
As of April 2026, ChatGPT Plus remains $20 per month. There has been no public announcement of a price increase. However, OpenAI has been pushing its higher-tier ChatGPT Pro plan at $200 per month and experimenting with advertising, suggesting the $20 tier may be maintained as a volume play while premium features are pushed upmarket.
Is Anthropic more expensive than OpenAI?
For direct consumer access, both are similar: Claude Pro and ChatGPT Plus both cost $20 per month. For API and enterprise users, pricing varies by model and usage tier. Anthropic recently shifted enterprise customers to per-token usage billing, which can be more or less expensive depending on how intensively your team uses the tools.
Will AI tools always have free tiers?
There is no guarantee. Both OpenAI and Anthropic offer free tiers as a customer acquisition strategy, but the costs of running these models are enormous. Free tiers are a subsidy that makes sense during the growth phase. As the market matures and investor pressure increases, free tier limitations are likely to tighten gradually rather than disappear overnight.
What does usage-based AI pricing mean for my team?
It means you pay based on how much your team actually uses the tools, measured in tokens processed. For light users, this is often cheaper than a flat subscription. For heavy users running large document analyses or using AI constantly throughout the day, costs can be significantly higher. Always model your expected usage before committing to consumption-based pricing.
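One quick way to model expected usage is to estimate token volume from the documents you already process. The tokens-per-word ratio below is a common rule of thumb for English text, not an exact figure; actual tokenization varies by model.

```python
# Rough sizing sketch: estimate monthly input-token volume for a
# document-analysis workflow. The ~1.3 tokens-per-word ratio is a
# rule of thumb for English text; actual tokenization varies by model.

TOKENS_PER_WORD = 1.3

def monthly_input_tokens(docs_per_month: int, words_per_doc: int) -> int:
    return round(docs_per_month * words_per_doc * TOKENS_PER_WORD)

# e.g. 200 ten-thousand-word reports per month:
print(monthly_input_tokens(200, 10_000))  # roughly 2.6M input tokens
```

Multiply an estimate like this by the provider's quoted per-million-token price and you have a first-pass monthly budget to compare against a flat subscription.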
Should I be worried about lock-in if I build workflows around one AI tool?
This is a real and underappreciated risk. If your team builds processes tightly around a single provider’s specific features, a price increase or feature change can be disruptive. Build workflows that use standard outputs (text, documents, data) that work with any AI tool rather than depending on provider-specific integrations wherever possible.
Sources
About this article: Written for business professionals navigating AI tool decisions in 2026. Sources include public financial reporting, industry analysis, and direct company announcements from OpenAI and Anthropic.