In March 2026, a leaked secondary market transaction valued Anthropic’s shares at a price implying a 35x multiple on its annualised revenue run rate of approximately $1.7 billion. Around the same time, Perplexity AI — a search-focused AI company with approximately $100 million in annualised revenue — was reportedly in fundraising discussions at a $14 billion valuation, implying a 140x revenue multiple. These are not fringe cases from an overheated seed market. They are the current market-clearing prices for companies that sophisticated institutional investors — sovereign wealth funds, corporate strategics, and top-tier venture firms — are actively competing to own.
The question these valuations raise is not whether AI is a transformative technology — that debate is settled. The question is whether the specific companies receiving these valuations have the durable competitive characteristics that justify paying 20 to 30 times current revenues or more. The history of technology investing provides some guidance, but also some sobering precedents.
The Bull Case: Why Premium Multiples Might Be Rational
The strongest argument for premium AI valuations rests on three propositions. First, the total addressable market for AI applications is genuinely enormous. McKinsey’s 2024 Global AI Survey estimated that AI could add $13 trillion to global GDP by 2030. Even if the actual figure is a fraction of that projection, the revenue opportunity for companies that capture meaningful share of AI-driven enterprise software, automation, and services markets is orders of magnitude larger than their current revenue bases.
Second, AI infrastructure companies exhibit network effects and data flywheel dynamics that reinforce competitive position over time. Each interaction with an AI system generates data that can improve the model; improved models attract more users; more users generate more data. OpenAI’s ChatGPT has over 400 million weekly active users as of early 2026. That user base and the interaction data it generates represent a competitive moat that is difficult for a new entrant to replicate quickly, regardless of how much capital it deploys.
Third, the marginal economics of AI software, once models are trained and infrastructure is deployed, are genuinely attractive. The incremental cost of serving an additional API query is small relative to the value it delivers to the customer. If AI companies can build durable pricing power — maintaining revenue per query as competition increases — the unit economics at scale could support very high profitability. Investors pricing in those future profitability levels are not being irrational; they are applying a standard discounted cash flow logic to a business with potentially exceptional long-run margins.
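That discounted cash flow logic can be made concrete with a toy model. Every input below — the growth path, the margin ramp, the exit multiple, and the discount rate — is an illustrative assumption chosen for the sketch, not actual company data:

```python
# Toy DCF sketch with illustrative assumptions, not actual company figures.
# Shows how a high multiple on *current* revenue can be consistent with
# standard DCF logic if long-run growth and margins are exceptional.

def implied_value(rev, growth, margins, exit_multiple, r):
    """Present value of free cash flows over an explicit horizon plus a terminal value.

    rev           -- current annual revenue ($bn)
    growth        -- annual revenue growth rates over the forecast horizon
    margins       -- free-cash-flow margin in each year (ramping up)
    exit_multiple -- multiple applied to terminal-year free cash flow
    r             -- discount rate
    """
    pv = 0.0
    for t, (g, m) in enumerate(zip(growth, margins), start=1):
        rev *= 1 + g                      # grow revenue
        pv += rev * m / (1 + r) ** t      # discount that year's free cash flow
    terminal = rev * margins[-1] * exit_multiple / (1 + r) ** len(growth)
    return pv + terminal

# Illustrative: $1.7bn current revenue, growth tapering from 100% to 30%,
# free-cash-flow margins ramping from 0% to 30%, 15% discount rate.
growth = [1.0, 0.8, 0.6, 0.4, 0.3]
margins = [0.0, 0.05, 0.15, 0.25, 0.30]
value = implied_value(1.7, growth, margins, exit_multiple=20, r=0.15)
print(f"Implied value: ${value:.0f}bn ({value / 1.7:.0f}x current revenue)")
```

Under these particular assumptions the model returns a value in the high $50 billions — roughly a 35x current-revenue multiple — but small changes to the margin ramp or the terminal assumptions move the "justified" multiple dramatically, which is precisely why reasonable investors disagree.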
The Bear Case: What the Multiples Require to Be Justified
The bear case does not dispute the size of the AI opportunity. It disputes the assumption that the specific companies receiving premium valuations will capture durable, profitable share of that opportunity. Several structural risks challenge that assumption.
Model commoditisation is the most fundamental threat to AI valuations. The capability gap between leading frontier models and open-source alternatives has narrowed dramatically over the past 18 months. Meta’s Llama 3 series, Mistral’s models, and DeepSeek’s R1 — released by a Chinese lab at a fraction of the cost of comparable Western models — demonstrated that frontier AI capability is becoming accessible to organisations that are unwilling to pay premium pricing for proprietary models. If model capability converges and open-source alternatives become genuinely competitive, the pricing power of frontier AI companies faces structural pressure.
Compute cost is the second constraint. Training and serving frontier AI models requires enormous capital investment in GPU clusters and data centre infrastructure. OpenAI’s compute costs have been estimated at $3-5 billion annually; the company’s path to profitability requires either dramatically reducing those costs through model efficiency improvements or achieving revenue scale that covers them with margin to spare. Both are achievable scenarios, but neither is guaranteed, and the capital required to maintain competitive model capability is ongoing rather than one-time.
Customer concentration and churn risk are less discussed but equally important. Enterprise AI adoption is still in early stages, and the actual retention and expansion economics of AI products — how many enterprise customers that start a pilot convert to full deployment, how quickly they expand usage, how sticky the products prove to be — are not yet established at the scale that the current valuations assume. The SaaS analogy is instructive: early SaaS valuations in the 2010s were justified by retention and expansion metrics that took years to establish as reliable. AI companies are being valued as if those metrics are already proven.
Valuing Different Categories of AI Company
The AI valuation question is not uniform across the sector. Different categories of AI company have fundamentally different competitive characteristics and therefore different valuation frameworks.
Foundation model companies — OpenAI, Anthropic, Google DeepMind, Meta AI — are competing in a capital-intensive arms race where the prize is genuinely large but the competitive dynamics are brutal. The incumbents have massive compute advantages, distribution through existing products, and the capacity to absorb losses for longer than independent companies can. For pure-play foundation model companies without the backing of a major technology conglomerate, the path to self-sustaining profitability requires capturing a share of the AI application layer that may prove elusive as application-layer competition intensifies.
Application-layer AI companies — those building specific products on top of foundation models — have lower compute costs but face the risk of being displaced when foundation model providers add their application as a native capability. The ‘getting Zoom’d’ or ‘feature risk’ problem that affected many SaaS companies in the 2010s applies with particular force to AI application companies: if the underlying model provider decides to compete in your specific use case, your competitive moat may be shallow.
AI infrastructure companies — providing the tooling, observability, data management, and deployment infrastructure that enterprises need to operationalise AI — may represent the most defensible category. These businesses sell into the AI trend without being exposed to model commoditisation risk; their value lies in workflow integration and enterprise relationships rather than in model capability itself.
The Historical Precedent
Technology investors who lived through the dot-com era and the 2021 cycle have reason for caution about the current AI valuations, but also reason not to apply those historical analogies too mechanically. The dot-com bubble involved companies with no revenue and no credible path to revenue being valued at hundreds of millions or billions of dollars. The leading AI companies have real and growing revenue, genuine product-market fit across millions of customers, and defensible competitive positions — a categorically different situation from Pets.com.
The 2021 SaaS bubble is a closer analogy. Companies with real but decelerating growth, high revenue multiples, and assumptions about future margin expansion that proved too optimistic corrected by 60-80% when interest rates rose and growth slowed. Some of those companies have since recovered and surpassed their 2021 peaks; others have not. The lesson is that the technology and the valuation can both be real, while the specific price paid at the peak of excitement still proves too high.
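The arithmetic of multiple compression makes that lesson concrete. Since price is just the multiple times revenue, the return over a holding period decomposes into the change in the multiple times the change in revenue — and a falling multiple can swamp even strong growth. A short sketch with illustrative numbers:

```python
# Multiple compression vs. revenue growth (illustrative numbers only).
# Price = multiple * revenue, so the holding-period return decomposes
# into (change in multiple) * (change in revenue).
entry_multiple = 30   # revenue multiple paid at entry
exit_multiple = 8     # revenue multiple at exit, after a re-rating
revenue_growth = 2.0  # revenue doubles over the holding period

price_change = (exit_multiple / entry_multiple) * revenue_growth - 1
print(f"Price change over the hold: {price_change:.0%}")  # about -47%
```

Revenue doubling while the multiple compresses from 30x to 8x still leaves the investor down nearly half — the company performed, and the investment still lost money.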
What Investors and Founders Should Do With This
For investors, the AI valuation environment requires explicit scenario analysis rather than benchmark-anchored multiples. The relevant questions: what revenue run rate and margin profile does this company need to achieve for my return assumption to hold? What are the most plausible scenarios where it does not reach those targets? How does the competitive moat look in five years given model commoditisation and big-tech competition? These questions have defensible answers for some AI companies and very uncertain answers for others.
For founders, the premium valuation environment creates both opportunity and risk. Raising at a high valuation now provides capital to invest in competitive position, but it also sets a high bar for the next round and embeds structural terms that can be painful in down scenarios. Founders who understand the bubble risk and build accordingly — maintaining capital efficiency, prioritising genuine retention over growth metrics, and building moats that extend beyond model performance — are better positioned to navigate what comes after the peak.
The 20 to 30x revenue multiples are not uniformly unjustified. For the handful of companies with genuine network effects, defensible data moats, and clear paths to high-margin scale, they may prove to be bargains in retrospect. For the majority of AI companies that lack those characteristics, the multiples embed assumptions that the market will eventually revisit. Distinguishing between the two requires more rigorous analysis than the current excitement typically produces.