RiverCore
The Anthropic vs OpenAI Revenue Story We Cannot Verify Yet
Tags: LLM revenue share · Anthropic OpenAI · AI enterprise · Anthropic overtakes OpenAI enterprise revenue · unverified AI revenue claims analysis


1 May 2026 · 7 min read · Sarah Chen

Zero. That is the number of verifiable data points currently extractable from the article headlined "Anthropic tops OpenAI in LLM revenue share" on letsdatascience.com as of this writing. The URL resolves to a browser verification interstitial, not the underlying analysis, which means the claim sitting in the slug, that Anthropic has overtaken OpenAI on enterprise LLM revenue share, cannot be independently checked from this source today.

That sounds like a non-story. It is not. When a headline this consequential to AI infrastructure planning is locked behind a CAPTCHA wall, the responsible move is to write about what we actually know and what we explicitly do not, rather than synthesize a narrative around a slug.

The Numbers

Here is the full inventory of facts available from the source as Let's Data Science currently serves it: a page title ("Vercel Security Checkpoint"), a verification message ("We're verifying your browser"), and a site-owner remediation link. There are no revenue figures, no time periods, no segment breakdowns (API vs ChatGPT subscriptions vs enterprise contracts), no methodology notes, and no named analyst firm behind the comparison. Zero of the standard inputs an engineering or finance team would need to act on this.

For context on what would normally be in such a piece: a credible LLM revenue-share comparison needs at minimum four things. First, a definition of "LLM revenue", which can mean API tokens, Copilot-style seat licenses, model-provider passthrough from hyperscalers, or all of the above. Second, a time window, monthly run-rate versus trailing twelve months produces wildly different rankings in a market growing this fast. Third, a treatment of the Microsoft and Amazon channels, since a meaningful share of both vendors' revenue flows through Azure OpenAI Service and AWS Bedrock respectively, and double-counting is the default failure mode. Fourth, a clear handling of the consumer line, ChatGPT subscriptions, which is large for OpenAI and effectively non-existent for Anthropic at comparable scale.

The source does not disclose any of these, which matters because the same underlying market data can support either "Anthropic now leads" or "OpenAI still leads by 2x" depending on which slicing you choose. The bound on my confidence in the headline claim is therefore wide: directionally plausible (Anthropic's enterprise momentum through Claude on Bedrock has been visible for quarters) but specifically unverifiable (we have no number to attach to "tops": by one point, by ten, by fifty?).

If this report is real and gets re-surfaced cleanly, we should see secondary citations from Bloomberg, The Information, or Menlo Ventures' enterprise LLM survey series within fourteen days of original publication. If those citations have not appeared by mid-May, that absence is itself a signal that the claim is either misreported or geographically narrow.

What's Actually New

Strip the unverifiable headline away and ask the harder question: would an Anthropic-over-OpenAI flip on enterprise revenue share actually be new information, or is it already the consensus trajectory among people building on these APIs?

My read, and this is editorial rather than sourced from the blocked article, is that it would be partially new. The qualitative shift, that Claude has become the default for code generation and long-context document workflows in regulated industries, is not new. Engineering leads at fintechs and iGaming platforms have been quietly defaulting to Claude for backend agent work for most of the past year, while keeping OpenAI for consumer-facing chat and voice. What would be new is the crossover point: the specific quarter in which that workflow preference translated into a revenue-share lead.

The reason this matters to the audience reading this: vendor concentration risk in AI infrastructure is now an active board-level question, not a theoretical one. If you are a CTO at a payments company who picked OpenAI as your sole LLM provider in 2023, a credible report that the market leadership has flipped is a trigger to revisit that decision. Not necessarily to switch, but to ensure your abstraction layer can route to either, and to renegotiate.

The genuinely new technical context is that the cost of dual-vendor support has collapsed. Two years ago, supporting both Claude and GPT meant maintaining two prompt libraries, two evaluation harnesses, and two sets of tool-calling conventions. Today, with the Model Context Protocol stabilizing as a shared agent integration surface and most serious shops running model-agnostic eval frameworks, the switching cost is closer to a sprint than a quarter. Revenue share leadership matters less when lock-in has weakened.
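The collapsed switching cost is easiest to see in code. Below is a minimal sketch of what a thin vendor-routing layer looks like; the adapter functions and the `LLMRouter` class are illustrative names, not any real SDK's API, and the adapters are stubs standing in for actual provider clients:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Hypothetical provider adapters. In a real system these would wrap the
# vendor SDKs; here they are stubs that only illustrate the routing shape.
def call_claude(prompt: str) -> str:
    return f"[claude] {prompt}"

def call_gpt(prompt: str) -> str:
    return f"[gpt] {prompt}"

@dataclass
class LLMRouter:
    """Thin routing layer: vendor choice is configuration, not code."""
    providers: Dict[str, Callable[[str], str]]
    default: str

    def complete(self, prompt: str, provider: Optional[str] = None) -> str:
        # Falling back to the configured default means switching vendors
        # is a one-line config change, not a refactor.
        return self.providers[provider or self.default](prompt)

router = LLMRouter(
    providers={"claude": call_claude, "gpt": call_gpt},
    default="claude",
)
print(router.complete("summarize this contract"))
# prints "[claude] summarize this contract"
```

The point of the design is that prompts, evals, and tool-calling logic live above the router, so a revenue-share headline becomes a negotiation input rather than a migration project.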

What's Priced In for AI Development

The market, by which I mean the rough consensus among senior infra engineers and platform leads I talk to, has already priced in the following: Anthropic will continue to take enterprise share through 2026, OpenAI will continue to dominate consumer and developer-tooling mindshare, and Google will remain a strong third with a pricing advantage on long-context workloads through the Gemini API.

What is not priced in, and what a credible revenue-share flip would actually change, is the implicit assumption baked into a lot of 2025 procurement decisions that OpenAI is the "safe default" enterprise choice. The safe-default position carries a premium. Vendors who hold it can charge more, ship breaking changes more aggressively, and demand longer commit terms. If the revenue numbers actually show Anthropic in the lead, the safe-default premium transfers, and OpenAI's pricing power on enterprise contracts compresses within two to three quarters.

For teams in fintech, iGaming, and ad-tech specifically, the pricing implication is concrete: a competitive enterprise LLM market with no clear single leader is the best possible state for buyers. Multi-year commits become negotiable. Custom rate limits become negotiable. Data residency terms become negotiable. The unanswered question I cannot resolve from the source is whether the gap, if it exists, is wide enough to actually shift Anthropic's negotiating posture from hungry-challenger to confident-incumbent. The bound: if Anthropic's enterprise discount aggressiveness drops noticeably in Q3 2026 RFPs, that is the tell that they believe the lead is real and durable.

Contrarian View

The opposite reading is that revenue share among frontier labs is the wrong metric to track, and that obsessing over which lab "leads" in 2026 will look as dated by 2028 as ranking database vendors by license revenue looked by 2015.

The contrarian case: the value is migrating to the application and orchestration layer, not the model provider. If your retrieval pipeline, eval harness, agent framework, and prompt library are vendor-neutral, you genuinely do not care which lab booked more revenue last quarter. You care about latency, cost-per-task on your specific workload, and whether the model passes your evals. Open-weights models served through Hugging Face infrastructure or self-hosted on dedicated GPUs are quietly absorbing workloads where the frontier labs were charging too much for too little marginal quality.

Under that view, an Anthropic-tops-OpenAI headline is a vanity metric for the labs and their investors, not actionable intelligence for the people actually shipping AI products. The signal to watch instead is what percentage of new AI features in production at mid-market SaaS companies are using a frontier API at all, versus a fine-tuned open model. We do not have that number from this source either, but it is the one that would actually predict the next phase.

Key Takeaways

  • The originating article for the Anthropic-tops-OpenAI revenue claim is currently inaccessible behind a browser verification page, so the specific number, time window, and methodology are unverified as of this writing.
  • A credible LLM revenue-share comparison requires at least four disclosures (revenue definition, time window, hyperscaler-channel handling, consumer-line treatment); absent these, directional claims are plausible but not actionable.
  • Switching costs between Claude and GPT have dropped substantially with maturing agent protocols and model-agnostic eval tooling, which weakens the strategic weight of any single quarter's revenue leadership.
  • The real signal to watch is whether Anthropic's enterprise discounting tightens in mid-2026 RFPs; that would indicate they believe a revenue lead is durable.
  • Testable prediction: if the underlying report is solid, expect secondary citations from at least one tier-one outlet (Bloomberg, The Information, or a named analyst firm) within fourteen days. If that does not happen by 15 May 2026, treat the headline as unconfirmed.

Frequently Asked Questions

Q: Has Anthropic actually overtaken OpenAI in LLM revenue share?

As of this writing, the source article making that claim is gated behind a browser verification page, so the specific figures, methodology, and time window cannot be independently verified. The directional trend of Anthropic gaining enterprise share has been visible for several quarters, but the specific crossover claim needs corroboration from a tier-one outlet before it should drive procurement decisions.

Q: How should engineering teams react to LLM vendor revenue-share news?

Treat it as a prompt to audit your abstraction layer rather than a switching trigger. If your application code can route to either Claude or GPT (or Gemini, or an open-weights model) without major refactoring, vendor leadership shifts become a pricing negotiation lever instead of a migration crisis. The teams that get hurt by these shifts are the ones with hardcoded provider assumptions in their prompt libraries and tool-calling logic.

Q: What metrics matter more than frontier-lab revenue share?

Cost-per-task on your specific workload, eval pass rates against your own test suite, and the percentage of your AI features that could be served by a fine-tuned open model at acceptable quality. Revenue share between OpenAI and Anthropic tells you about lab valuations and fundraising narratives; it tells you very little about which model your specific use case should be running on next quarter.
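To make the first of those metrics concrete, here is a hedged sketch of a cost-per-task comparison. The per-million-token prices and the 3,000-in / 500-out workload profile are placeholders for illustration, not any vendor's actual rates; substitute your own numbers:

```python
def cost_per_task(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Dollar cost of one task, given prices in $ per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical workload profile: 3k prompt tokens, 500 completion tokens.
# Prices below are placeholders, not current vendor list prices.
vendors = {
    "vendor_a": {"in": 3.0, "out": 15.0},
    "vendor_b": {"in": 2.5, "out": 10.0},
}
for name, p in vendors.items():
    c = cost_per_task(3_000, 500, p["in"], p["out"])
    print(f"{name}: ${c:.4f} per task")
# prints:
# vendor_a: $0.0165 per task
# vendor_b: $0.0125 per task
```

Multiply by daily task volume and the abstract revenue-share debate turns into a concrete line item you can actually negotiate over.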

Sarah Chen
RiverCore Analyst · Dublin, Ireland