AI’s next investment cycle belongs to applications
The artificial intelligence (AI) industry has reached a crossroads. For years, companies poured money into infrastructure such as data centres, chips and underlying models. Now, the big question is not whether AI works, but whether it can be profitable. The answer lies in concrete AI applications, not in more training runs or larger GPU clusters.
AI infrastructure versus AI applications
In 2025, companies spent around $320 billion on AI infrastructure. Despite this huge investment, foundation model businesses have thin profit margins. High inference costs cut into revenue and competition keeps prices down. For instance, OpenAI reached $13 billion in annualised revenue by August 2025 but still lost $5 billion in 2024. This approach is not sustainable; it is a short-term fix supported by venture capital and corporate money hoping for better returns.
The story is different for AI applications. In 2025, businesses spent $19 billion on AI applications, making up more than half of all generative AI spending. This is over 6% of the total software market, reached just three years after ChatGPT was launched. More importantly, this spending proves real market demand: companies are no longer only testing AI. They are using it widely. At least 10 AI products now bring in over $1 billion in annual recurring revenue, and 50 products make more than $100 million.
Meta’s $2 billion purchase of Manus in December 2025 shows this shift. Manus, a Singapore startup, launched its AI agent just nine months earlier and quickly reached $125 million in annual revenue by supplying a simple but effective AI product that gets tasks done — not just talks about them — thus proving both technical ability and business success.
Investors want companies with real customers, not just technology. By the third quarter of 2025, there were 265 private equity deals involving AI applications, a 65% increase from the previous year, and 78% were add-on acquisitions for existing portfolio companies. Strategic mergers and acquisitions in AI hit record highs by Q3, with deal values up 242% from the year before.
Where the real value is
Real value is emerging in the departmental AI segment. In 2025, coding tools made up $4 billion of the $7.3 billion departmental AI market, making them the largest segment. Half of all developers now use AI coding tools every day, a number which rises to 65% in top-performing companies. When ServiceNow bought Moveworks or Nvidia purchased several AI startups, these were not infrastructure deals. They were investments in companies that help customers achieve real business results with AI.
The foundation model landscape itself tells the application story. Anthropic now commands 40% of enterprise LLM spending, up from 24% last year and 12% in 2023, while OpenAI’s enterprise share fell to 27% from 50% in 2023. Anthropic did this by dominating coding applications, where it holds a 54% market share compared to OpenAI’s 21%. Applications drive the adoption of infrastructure and foundation models, not the other way around. Morgan Stanley reports that generative AI reached a 34% contribution margin in 2025, its first profitable year, and this could rise to 67% by 2028 as infrastructure costs fall and efficiency improves. However, most of these profits go to companies selling complete solutions, not just raw computing power.
Private investors now have to decide which use cases will create the next wave of value; products that merely bolt a simple interface onto ChatGPT will not. Solutions built for specific verticals such as health care, law, finance and manufacturing (those deeply integrated into workflows, built on unique data, and essential to operations) are the businesses worth serious investment.
The free market distributes resources well when investors focus on fundamentals rather than narratives. Revenue, customer retention, growth rates and paths to profitability matter again. Right now, circular financing obscures true demand. For instance, a substantial share of Microsoft’s reported Azure AI revenue comes from OpenAI’s spending on compute at heavily discounted rates that essentially cover only Microsoft’s costs. Applications break this pattern because they generate revenue from outside the circular financing loop.
Core issues
For governments, the next phase will raise tough questions about competition, especially as foundation model providers begin to build their own applications. When OpenAI launches coding tools or Anthropic develops enterprise solutions, it puts pressure on independent application builders who do not have the same infrastructure advantages. Copyright issues are also becoming more important as the source of training data becomes a key legal concern. Privacy rules will need to adapt to address AI agents that access large amounts of personal and business information.
Policymakers should not rush into strict regulation. The application layer needs room to experiment, fail and improve until it finds product-market fit. Competition rules still matter, however, especially reviews of acquisitions designed to prevent big companies from buying and shutting down potential rivals. The trend of acqui-hires (where startups are bought mainly for their staff and then wound down) often leaves customers stranded and can sap the energy and innovation the sector needs.
The Internet was not monetised by selling bandwidth. It was monetised by building applications that made bandwidth valuable. AI will follow the same trajectory.
Arindam Goswami is a Research Analyst in the High Tech Geopolitics Programme at The Takshashila Institution, Bengaluru
Published – February 04, 2026 12:08 am IST