The Three-Company Problem
Three companies control 88% of the enterprise AI market. That concentration creates vendor risk, pricing uncertainty, and regulatory exposure that most due diligence processes are not designed to capture.
Enterprise LLM Market Share
The shift happened fast. In 2023, OpenAI held 50% of enterprise large language model (LLM) API usage. By late 2025, that share had fallen to 27%. Anthropic took the lead with 40%, driven largely by its dominance in coding tools. Google grew from 7% to 21%. The remaining 12% is split among Meta's Llama, Cohere, Mistral, and smaller providers.
Three companies now control nearly nine-tenths of how enterprises access AI. For comparison, the three largest cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud) control about 63% of cloud infrastructure. The AI market is more concentrated than cloud.
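The comparison can be made concrete with a standard concentration measure, the Herfindahl-Hirschman Index (sum of squared market shares). This is a minimal sketch using the Menlo Ventures LLM figures cited above; the individual cloud shares are illustrative assumptions (only the 63% top-three total is cited in this section), and both indices cover the named leaders only, so they understate the true HHI.

```python
# Top-3 shares in percent. LLM figures from the Menlo Ventures data cited
# above; the cloud split among AWS/Azure/Google Cloud is an assumed
# breakdown consistent with the cited ~63% combined total.
llm_shares = {"Anthropic": 40, "OpenAI": 27, "Google": 21}      # other: 12%
cloud_shares = {"AWS": 31, "Azure": 20, "Google Cloud": 12}     # other: 37%

def combined_share(shares):
    """Combined market share of the listed providers, in percent."""
    return sum(shares.values())

def hhi(shares):
    """Herfindahl-Hirschman Index over the listed providers only.
    Excludes the fragmented long tail, so this understates the true HHI."""
    return sum(s ** 2 for s in shares.values())

print(combined_share(llm_shares), hhi(llm_shares))      # 88 2770
print(combined_share(cloud_shares), hhi(cloud_shares))  # 63 1505
```

Even on this partial calculation, the LLM market's index (2770) sits well above the cloud market's (1505), consistent with the claim that AI is the more concentrated of the two.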
How Concentration Emerged
The concentration reflects several factors. First, enterprise AI requires significant integration work, creating high switching costs once a vendor is embedded. Second, the leading models have pulled ahead on benchmarks that matter for business use, particularly coding and reasoning tasks. Third, the infrastructure requirements for training competitive models have grown faster than startup capital can follow.
NVIDIA controls 90% of AI chip revenue. Its next-generation Blackwell GPUs are backordered for over a year. Two customers account for nearly 40% of NVIDIA's data center sales. Companies outside the major cloud providers face 12-18 month waits for training hardware at scale. This effectively caps how many serious competitors can emerge.
Regulatory Exposure
The Federal Trade Commission (FTC) has taken active interest in how the AI market is structured. In January 2024, the FTC issued compulsory orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI, demanding information about their investment and partnership arrangements.
The Commission voted 5-0 to release findings on cloud provider partnerships with AI developers. The report flagged concerns about revenue-sharing arrangements, compute resource access, and engineering talent acquisition. The unanimous vote included commissioners appointed by both parties.
The FTC is examining whether Microsoft's $10 billion investment in OpenAI was structured to avoid merger disclosure requirements. The agency is also reviewing whether Microsoft canceled internal AI projects after the deal, which could indicate competitive harm. The investigation continued under the new administration, with the FTC Chair confirming "big tech is one of the main priorities."
The UK Competition and Markets Authority (CMA) opened investigations into Amazon-Anthropic, Microsoft-Mistral, and Microsoft-OpenAI partnerships in 2024-2025. This suggests coordinated regulatory interest across jurisdictions, increasing the likelihood of structural remedies if violations are found.
Copyright litigation adds another layer of risk. Over 50 lawsuits are pending in US federal courts between content owners and AI developers. Outcomes have been mixed: in June 2025, Meta won a fair use ruling for using copyrighted books in training, but in November 2025, OpenAI lost a German copyright case. The New York Times lawsuit against OpenAI remains active, with discovery requiring OpenAI to preserve all ChatGPT conversation logs.
Political Positioning
The leading AI companies have rapidly expanded their Washington presence. OpenAI increased its federal lobbying spend from $260,000 in 2023 to $1.76 million in 2024. More significantly, it grew from 3 registered lobbyists to 18, including a lobbyist who spent five years on Senator Lindsey Graham's staff. Anthropic more than doubled its spend and hired its first in-house lobbyist, a former Department of Justice official.
The combined lobbying spend of the three leading AI labs reached $2.71 million in 2024, up from $610,000 the year before. That remains small compared to established tech companies (Meta spent $24 million in 2024), but the growth rate signals a shift in strategy.
According to Congressional staffers and advocates cited in reporting, AI companies publicly support regulation while pushing for "light-touch and voluntary rules" in private meetings. They have also repositioned AI development as a national security imperative, arguing that government must support industry growth to maintain competitive advantage against China.
Red Label Assessment
The concentration is real. The risks are often misunderstood.
Most coverage of AI market concentration focuses on competitive dynamics between the leading companies. For investors and corporate users, the more relevant questions are structural: What happens when a small number of suppliers control access to a technology that companies increasingly depend on?
Three risks that matter
AI API prices have fallen rapidly as providers compete for market share and improve efficiency. But with 88% concentration, the leading companies have significant room to change pricing once customers are integrated. Enterprise software buyers have seen this pattern before. The switching costs for AI providers are high: fine-tuned models, integrated workflows, employee training, and compliance documentation all create lock-in. OpenAI's annualized revenue grew from $3.7 billion to $20 billion in twelve months. The question is what pricing looks like once growth slows and investors expect margins. This could be disrupted by new entrants, open-source alternatives, or technological shifts that reduce the capability gap between providers, but current trajectories favor the incumbents.
The FTC investigation into Microsoft-OpenAI, the pending copyright cases, and the European regulatory actions will produce outcomes that affect the entire market. Most likely: these end in settlements, consent decrees, or negotiated agreements rather than dramatic restructuring. But even moderate outcomes matter. If Microsoft faces restrictions on its OpenAI integration, that affects competitive dynamics for every company using OpenAI's API. If copyright rulings require licensing payments, the costs flow through to customers. The range of outcomes spans from minimal impact to forced restructuring, but companies with deep AI integrations carry exposure to decisions that management cannot influence regardless of where on that spectrum the resolution lands.
OpenAI completed its restructuring in October 2025, converting to a public benefit corporation controlled by a non-profit foundation. Microsoft holds 27% equity but has no governance power. The safety committee reports to the foundation, not the for-profit entity. These structures are new and have not been tested by serious disagreement between commercial and safety interests, by a major failure, or by the pressure of a potential $1 trillion IPO. Anthropic has prepared for a 2026 listing. The governance commitments made during the startup phase may not survive public market pressure.
What the concentration does not mean
Some observers treat AI concentration as inherently threatening. The evidence does not support that yet. Competition between Anthropic, OpenAI, and Google has driven rapid capability improvements and price reductions. The labor market impact remains contested: some sectors show disruption, but net employment effects are unclear. The copyright litigation may ultimately establish fair use protections that benefit the industry. And concentration could decline: open-source models from Meta and others are improving rapidly, Chinese competitors like DeepSeek are gaining attention, and custom silicon from Amazon and Google may reduce the infrastructure advantages of incumbent providers.
The risk is not that concentration is harmful today. The risk is that the structure creates dependencies before the implications are clear. Companies are building AI into core operations while regulatory, pricing, and governance questions remain open. The prudent response is to understand the exposure, not to avoid the technology.
Client Implications
For clients conducting due diligence on AI-dependent businesses or assessing portfolio exposure, this concentration creates specific questions that standard commercial analysis may not capture.
- Vendor concentration: What percentage of the target's AI functionality depends on a single provider? What are contractual switching costs?
- Pricing assumptions: Financial models should stress-test AI costs at 2-3x current levels, a scenario that remains plausible given 88% market concentration.
- Regulatory scenarios: Would forced restructuring of Microsoft-OpenAI affect the target's operations or competitive position?
- Multi-provider strategy: Companies with AI dependencies should evaluate whether secondary providers are contractually ready, even if not currently used.
- Data rights: If copyright rulings restrict training data, providers may need access to customer data to maintain model quality. Understand what rights your contracts grant.
- Governance diligence: For deep integrations, the provider's governance structure affects your operational risk. Understand who controls safety decisions and how those interact with commercial pressure.
- Direct exposure: Holdings in the leading AI companies or their major investors (Microsoft, Amazon, Google) carry regulatory risk that may not be reflected in current valuations.
- Portfolio companies: Operating businesses with AI dependencies may face cost or operational disruption that affects valuation.
- M&A due diligence: AI vendor concentration should be a standard item in technology diligence, including contract terms, switching costs, and provider governance.
- Copyright exposure: Targets using AI for content generation may carry liability if training data rights are challenged.
- Regulatory monitoring: FTC actions, CMA decisions, and German court rulings may affect deal conditions or representations.
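The pricing stress test recommended in the checklist above can be sketched as a simple margin calculation. All inputs here (baseline AI spend, revenue, operating margin) are invented for illustration; a real model would use the target's actual contract terms and cost structure.

```python
def stress_ai_costs(annual_ai_spend, operating_margin, revenue,
                    multipliers=(2.0, 3.0)):
    """Recompute operating margin if AI provider pricing rises by each
    multiplier, holding revenue and all other costs constant.

    Returns a dict mapping multiplier -> stressed operating margin.
    """
    results = {}
    for m in multipliers:
        extra_cost = annual_ai_spend * (m - 1.0)   # incremental AI spend
        results[m] = operating_margin - extra_cost / revenue
    return results

# Hypothetical target: $100M revenue, 15% operating margin, $5M/yr AI API spend.
print(stress_ai_costs(5e6, 0.15, 100e6))
# 2x pricing cuts the margin to 10%; 3x cuts it to 5%.
```

Even this crude version makes the exposure visible: a business spending 5% of revenue on AI APIs loses a third of its operating margin at 2x pricing, which is why the checklist treats pricing assumptions as a standard diligence item rather than an edge case.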
When assessing AI-dependent businesses, consider:
- Which AI provider(s) does the target use, and what percentage of core functionality depends on them?
- What are the contractual switching costs and minimum commitments?
- Has the target evaluated alternative providers, and at what cost differential?
- What happens to the target's operations if their primary AI provider's terms change significantly?
- Does the target's use of AI create copyright exposure from training data or generated content?
- What regulatory scenarios could affect the target's AI supply chain?
Sources
| Source | Data Used | Date |
|---|---|---|
| Menlo Ventures | Enterprise LLM market share data (88% concentration, individual shares) | 2025 |
| Federal Trade Commission | Staff report on AI partnerships, 5-0 vote, investigation scope | January 2025 |
| OpenSecrets | AI company lobbying expenditure data | 2024 |
| TechCrunch | OpenAI restructuring, valuation, Microsoft stake details | October 2025 |
| Copyright Alliance | AI copyright litigation status (50+ cases pending) | 2024 |
| Semiconductor Digest | NVIDIA market share, compute concentration, chip backlog data | 2025 |