
In the data centers rising across Texas, Virginia, and the Nevada desert, the sight is the same: rows of Nvidia GPU racks stacked floor to ceiling, cooling systems running at maximum capacity, and an operation consuming electricity at a scale that would have seemed unthinkable ten years ago.
The chips in almost all of these facilities bear the name Nvidia, and the facilities themselves cost billions to construct and billions more to run. For its 2026 fiscal year, the company reported revenue of more than $200 billion, shattering its own record by so much that Wall Street analysts had to quietly revise their already aggressive models. Even so, the stock's valuation still requires a level of future earnings that deserves careful scrutiny.
| Category | Details |
|---|---|
| Company | Nvidia Corporation |
| Ticker | NVDA (NASDAQ) |
| Founded | 1993, Santa Clara, California |
| CEO | Jensen Huang |
| Market Capitalization (peak) | ~$3+ trillion USD |
| FY2026 Annual Revenue | Over $200 billion (all-time high) |
| Q3 FY2026 Revenue | $57 billion (record quarter) |
| Primary Revenue Driver | Data center GPU sales (AI infrastructure) |
| Key Product | H100 / H200 / Blackwell GPU chips |
| Global AI Capex (2025 est.) | ~$400–580 billion annually |
| Key Customers | Microsoft, Google, Amazon, Meta, OpenAI |
| Nvidia Investment in OpenAI | ~$100 billion commitment |
| Key Risk | Valuation dependent on sustained AI spending growth |
| Reference Website | Harvard Business Review — Is AI a Boom or a Bubble? |
Sit down to do the math and it is not comforting. Nvidia is among the most valuable companies in the history of public markets, with a market capitalization that has hovered at and above three trillion dollars over the past year.
To defend that price with traditional valuation logic, the kind taught in finance classes and rediscovered in every market cycle, the company must produce profits on a scale with no real precedent in the technology sector. Some analysts, using discounted cash flow models, have argued the stock is undervalued, citing the structural nature of GPU demand and forecasting years of compounding revenue growth. They might be correct. It is also possible that they are modeling a world in which every current assumption holds forever, a specific type of optimism that markets have rewarded before, temporarily.
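The mechanics of that valuation logic are simple enough to sketch. Every input below is an illustrative assumption, not an actual Nvidia figure or analyst estimate: a minimal discounted cash flow in Python showing how sensitive a multi-trillion-dollar price tag is to the assumed growth rate.

```python
# Minimal DCF sketch. All inputs are illustrative assumptions,
# not actual Nvidia figures or analyst estimates.

def dcf_value(fcf0, growth, years, discount, terminal_growth):
    """Present value of `years` of cash flows growing at `growth`,
    plus a Gordon-growth terminal value discounted back to today."""
    pv = 0.0
    fcf = fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        pv += fcf / (1 + discount) ** t
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    pv += terminal / (1 + discount) ** years
    return pv

# Hypothetical inputs: $80B starting free cash flow, 10% discount
# rate, 3% perpetual growth after the explicit forecast window.
for g in (0.10, 0.20, 0.30):
    value = dcf_value(fcf0=80e9, growth=g, years=10,
                      discount=0.10, terminal_growth=0.03)
    print(f"10-yr FCF growth {g:.0%}: implied value ${value / 1e12:.2f}T")
```

Under these made-up inputs, 10% annual growth supports well under $2 trillion of value, while sustaining 20% for a decade is needed to clear $3 trillion. The point is not the specific numbers but the asymmetry: at this size, the valuation lives or dies on the growth assumption.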
Set that in a larger context and the picture grows more unsettling for anyone seriously weighing the AI investment cycle. The S&P 500's technology companies now spend between $400 and $580 billion a year on AI infrastructure: data centers, chips, power systems, fiber, and cooling.
When loosely adjusted for inflation, that annual outlay surpasses the combined cost of the Apollo program and the Marshall Plan. Both were enormous, historically consequential investments that eventually produced real returns, and both were run by governments with clear goals. The biggest technology companies on the planet, by contrast, are engaged in a private-sector arms race, essentially vying to spend money faster than their competitors, while the financial returns from that spending remain, by most honest assessments, deeply uncertain.
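A rough check of that comparison, using commonly cited inflation-adjusted cost figures. The adjusted totals below are approximations (inflation adjustment over many decades is inherently loose), not precise accounting:

```python
# Back-of-envelope arithmetic behind the Apollo + Marshall Plan
# comparison. Figures are commonly cited approximations in 2020s
# dollars, not exact accounting.

apollo_2020s_usd = 260e9    # ~$25.8B nominal over 1960-73, often cited ~$260B adjusted
marshall_2020s_usd = 170e9  # ~$13.3B nominal over 1948-52, roughly $150-190B adjusted
combined = apollo_2020s_usd + marshall_2020s_usd

ai_capex_low, ai_capex_high = 400e9, 580e9  # estimated annual AI infrastructure spend

print(f"Apollo + Marshall Plan (adjusted): ~${combined / 1e9:.0f}B, spent over decades")
print(f"AI infrastructure capex: ~${ai_capex_low / 1e9:.0f}-{ai_capex_high / 1e9:.0f}B per year")
print(f"High-end annual AI spend vs. combined historical total: {ai_capex_high / combined:.2f}x")
```

On these rough figures, a single year at the high end of AI capex exceeds both historical programs combined, and those programs took years to spend their budgets. That is the sense in which the comparison holds.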
The reasoning behind the expenditure is not baseless. People like Jensen Huang, Sam Altman, Mark Zuckerberg, and Sundar Pichai genuinely believe that AI will grow in capability and economic utility fast enough to eventually justify the infrastructure being built today. The internet is the most frequently cited historical parallel: the firms that built fiber networks in the 1990s looked ridiculously overpriced during the bust, yet their infrastructure became the foundation for everything that came after. That comparison does a lot of work for Nvidia's bulls, so it deserves consideration rather than a brush-off.
The transparency of the uncertainty is what distinguishes the current situation and makes it worthwhile to proceed cautiously. This is not 2008, when risk mechanisms were concealed in financial instruments that were not even fully understood by those who sold them. The figures for AI spending are available to the public. The revenue models are available to the public.
The disparity between the amount spent and the revenue generated by AI applications is evident and frequently discussed, even by the executives who are responsible for the expenditures. Sam Altman has admitted that over-investment is likely. Similar remarks have been made by Zuckerberg. Because the perceived cost of being late is greater than the perceived cost of being incorrect, they are still spending money. At the corporate level, that makes sense strategically. At the level of the asset prices built upon that reasoning, the picture is more nuanced.
Nvidia sits at the center of this in a unique way. The company is not wagering on whether AI will eventually yield financial gains; for Nvidia, it already has. Every dollar Microsoft, Google, Amazon, and Meta spend on AI infrastructure flows through Nvidia's order books, tangibly and in the current quarter. The $57 billion quarter was not a forecast. It happened.
Those chips carry extraordinary margins. The competitive moat created by CUDA, the software layer that makes Nvidia's hardware the default choice for AI developers, is real and has proven more durable than skeptics anticipated. In that sense, Nvidia has earned some of the premium the market grants it, at least for now.
The question is how much of it. And that is where the math turns awkward again. A company trading at the multiples Nvidia has maintained through much of 2025 and into 2026 is priced for a future in which the current momentum continues without major disruption: no major customer cutting GPU spending, no competing chip architecture gaining real traction, no failure of the AI application layer to produce the returns that justify the infrastructure, and no geopolitical disruption to the Taiwan Semiconductor supply chain that fabricates these chips. Every one of those risks is real. None of them is unlikely. On most days, the stock price appears to give them little weight.
As this develops, there is a recurring conflict between what the price suggests for tomorrow and what the numbers say today. Nvidia’s earnings are genuine and increasing. For the time being at least, the demand from hyperscalers seems to be structural rather than cyclical.
However, the gap between roughly $200 billion in actual revenue and a roughly $3 trillion valuation is filled entirely by assumptions: about AI adoption curves, about the durability of Nvidia's market position, and about the willingness of the world's biggest tech companies to keep spending at a rate with no historical precedent. Whether those assumptions are bold or merely hopeful remains an open question.
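One way to make that gap concrete is to ask what steady-state profit a ~$3 trillion market capitalization implies at various "mature" earnings multiples. The multiples below are conventional reference points chosen for illustration, not targets anyone has published:

```python
# What a ~$3T market cap implies for steady-state annual profit,
# under assumed "mature" earnings multiples. Illustrative only.

market_cap = 3.0e12  # ~$3 trillion, per the valuation discussed above

for target_pe in (30, 25, 20, 15):
    required = market_cap / target_pe
    print(f"At {target_pe}x earnings: requires ~${required / 1e9:.0f}B net income per year")
```

At a 20x multiple, the price implies roughly $150 billion of net income every year, indefinitely, a figure at or above the highest annual profits the largest technology companies have historically posted. That is the arithmetic sense in which the valuation demands earnings without precedent in the sector.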
Real infrastructure was built during the dot-com era, too. Cisco routers carried the internet forward. Cisco's stock still fell 86% from its peak. The infrastructure endured; the valuation did not. That is not a forecast about Nvidia. It is a reminder that the market does not settle these questions all at once, and that being right about the technology is different from being right about the price.
