Nvidia Earnings Digest: It's a marathon, not a sprint.
And why it's a good thing to be the shovel seller at the top of the gold mine
For the most part, the market breathed a sigh of relief when NVDA reset the clock on the ticking AI time bomb.
—
Nvidia reported revenue soaring to $57 billion, a 62% annual increase that topped expectations by over $9 billion. But there is a far more interesting story behind these headline-grabbing figures: who is actually cashing in on the AI revolution.
Nvidia may have ended the week by selling off its post-earnings gains. But fret not, dear reader: there is still plenty of room on the upside.
A Shovel Seller’s Dream
Nvidia is running the shovel store sitting atop the AI gold mine while the titans of this new industry still struggle to truly hit paydirt. OpenAI, Anthropic, xAI, and countless other enterprises are burning through billions at Jensen’s mining emporium.
The enterprise data center business—the real cash cow—brought in $51.2 billion in Q3, a 66% increase compared to the same period last year.
Nvidia’s gross margin continues to sit at an astonishing 73.4%, meaning the company keeps about 73 cents of gross profit for each dollar of AI chips it sells. Meanwhile, its customers (the hyperscalers, model builders, and enterprises) are hemorrhaging money trying to figure out whether these AI investments will ever pay off.
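The margin arithmetic above can be written out as a quick back-of-the-envelope sketch. The 73.4% gross margin and the $51.2 billion Q3 data-center figure come from the article; pairing them here is my own illustration, since Nvidia's margin is reported company-wide rather than per segment.

```python
def gross_profit(revenue: float, gross_margin: float) -> float:
    """Gross profit = revenue minus cost of goods sold, expressed via margin."""
    return revenue * gross_margin

margin = 0.734  # Nvidia's reported gross margin (73.4%)

# Roughly 73 cents kept per dollar of chips sold:
print(f"per $1 of chips sold: ${gross_profit(1.00, margin):.2f} gross profit")

# Applied (illustratively) to the $51.2B Q3 data-center revenue:
print(f"on $51.2B of revenue: ${gross_profit(51.2e9, margin) / 1e9:.1f}B gross profit")
```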
The Economics of AI: A Strange Game
Interestingly enough, training the models is often the smaller expense for an AI company. Training OpenAI's GPT-4 reportedly cost around $80–$100 million. Anthropic’s Claude cost about $30 million. Google’s Gemini? Try closer to $200 million. These aren’t simple rounding errors; they’re the GDP of small nations tossed into a matrix multiplication.
What most people fail to realize is that training is only the appetizer. The main course is inference: running these models for users at scale. This is where the economics turn negative.
But first, an analogy —
Before we get into the numbers, let's compare an AI company's business model to that of, say, a restaurant:
Imagine a consumer-facing AI company as a brand-new restaurant opened by a world-renowned, Michelin-starred chef. They’ve spent a small fortune buying real estate, hiring architects to design a stunning, world-class location, and even more on menu development, ingredient sourcing, kitchens, and so on. But as soon as they’ve opened for business, they announce, “Everything is free! Just please tip your server!”
As you would imagine, the demand for this restaurant would be off the charts as customers come flooding in. But instead of closing the doors, the restaurant decides to expand and buys another location next door to facilitate overflow seating. The restaurant continues to pay for the food, ingredients, staff, and other overhead costs while the ‘customers’ continue to dine for free!
Now, even a middle schooler with a modest grasp of economics could tell you a business like this simply won’t last. The analogy illustrates the point that, by and large, most consumer-facing AI systems are operating with negative unit economics.
For many AI companies, inference constitutes 80–90% of total lifetime costs. Every time someone queries ChatGPT, OpenAI incurs a real cost. Various reports suggest that OpenAI’s 2024 revenue was only a fraction of its inference costs.
OpenAI CEO Sam Altman has even claimed the company is “profitable on inference,” but that’s creative accounting: it likely counts only gross margin, leaving out training, R&D, sales, marketing, and other indirect costs.
“We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”
Sam Altman
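The "training is only the appetizer" claim can be sketched numerically. The $100 million training figure and the 80–90% inference share are the article's estimates; taking 85% as a midpoint and treating training as the entire remainder is a deliberate simplification of my own (in reality R&D, sales, and overhead also eat into that remainder).

```python
training_cost = 100e6      # upper end of the reported GPT-4 training estimate
inference_share = 0.85     # midpoint of the article's 80-90% lifetime-cost range

# Simplification: assume training alone makes up the non-inference remainder.
# Then the implied total lifetime cost is training / (1 - inference share):
lifetime_cost = training_cost / (1 - inference_share)
inference_cost = lifetime_cost * inference_share

print(f"implied lifetime cost: ${lifetime_cost / 1e6:.0f}M")
print(f"of which inference:    ${inference_cost / 1e6:.0f}M")
```

Even under this rough assumption, serving the model costs several times more than building it, which is the core of the negative-unit-economics problem above.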
Defense in Depth: The Software Stack
Everyone is buzzing about Nvidia’s silicon, but they’re missing the Nvidia software goldmine. On the earnings call, CFO Colette Kress spotlighted how Nvidia’s CUDA ecosystem, developed over two decades, brings massive TCO (total cost of ownership) advantages.
Here lies the genius: Nvidia sells more than chips. They offer an entire stack:
CUDA X libraries that accelerate workloads at every phase of the AI lifecycle
NVLink networking, a rapidly growing revenue line
Omniverse, which enables digital twins and physical AI
Enterprise AI software that streamlines model deployment
Thanks to constant software improvement, even six-year-old hardware stays relevant. Customers become ever more entangled in the vines of Nvidia’s ecosystem because they can’t easily switch to competitors without abandoning the entire toolchain. It is a similar playbook to the one Apple runs in consumer electronics, pulling customers into the total Apple ecosystem.
Conference Call Sentiment
Once again, Jensen Huang deflected the AI-bubble talk with his revenue chart and confident delivery.
“There’s been a lot of talk about an AI bubble,” Huang said. “But from where we sit, what we see is something completely different.”
He detailed three platform shifts happening at once:
The transition of hundreds of billions in cloud computing spend to CUDA GPUs as Moore’s Law slows—CPU to GPU acceleration
The shift from classical ML to generative AI—transforming search, recommendations, and ad targeting across hyperscale infrastructure
The rise of agentic AI—from coding assistants to autonomous systems—a brand-new frontier
The tone was bullish, data-oriented, and crystal clear. When asked whether supply could eventually catch up with demand, Huang responded, “The clouds are sold out, and the base of our GPUs, both new and previous generations, is fully utilized.”
Kress added another bombshell: $500 billion in Blackwell and Rubin revenue visibility through 2026 and a projected $3–4 trillion annual AI infrastructure market by decade’s end.
Notably, there was no mention of competition, margin pressure, or weakening demand in the Q&A. If anything, executives sounded annoyed at the very mention of an AI bubble—suggesting that anyone who really understood the scale of what’s happening wouldn’t call it that.
Option Traders Are Betting Big (And Bullish)
The options market tells a fascinating story about where traders think Nvidia is headed. Post-earnings data reveals aggressively bullish positioning.
Immediate post-earnings flow saw a call-to-put ratio near 2 to 1, signaling bullish sentiment. Major institutional sweeps included tens of millions in long-dated options. Implied volatility, while high, rapidly cooled off as traders digested the positive surprise.
The longer-dated outlook? Wall Street analysts raised targets across the board, with some calling for the stock to nearly double. The average 12-month price target now sits well above current levels, as institutions position for continued upside in the $200-$250 strike range. The put buying appears protective rather than speculative: sophisticated players hedging massive, profitable long positions rather than betting against the stock.
The Profitability Paradox: Who’s Really Profitable?
Nvidia’s operating margins are off the charts and profits are at all-time highs, while its customers keep hauling their heavy shovels down into the mines.
The Hyperscaler Math:
Hundreds of billions are being spent by Microsoft, Amazon, Google, and Meta on AI infrastructure
Not just supporting new AI products—also moving existing workloads to accelerated computing to save money
Generative AI is helping, but ROI remains unclear
Model Builder Struggle:
OpenAI is ramping up to hundreds of millions of users, and aggregate query costs keep climbing (see the inference section of this article)
As usage increases, the “success tax” bites even harder
The Enterprise Reality:
Less than half of AI projects break even within 24 months
Early adopters with careful planning are seeing decent ROI
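The 24-month break-even test above is just a payback-period calculation. The project costs and monthly benefits below are entirely hypothetical, invented to show how a project can land on either side of that bar.

```python
def months_to_break_even(upfront_cost: float, monthly_benefit: float) -> float:
    """Simple payback period: months until cumulative benefit covers cost."""
    return upfront_cost / monthly_benefit

# Two hypothetical enterprise AI projects with the same $1.2M upfront cost:
print(months_to_break_even(1_200_000, 40_000))  # 30 months: misses the 24-month bar
print(months_to_break_even(1_200_000, 60_000))  # 20 months: clears it
```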
Why Nvidia Wins No Matter What
Scenario 1: AI Profits Surge
Demand for compute goes even higher
Nvidia sells more chips, software, and networking
Scenario 2: AI Profitability Lags
Hyperscalers need GPU acceleration to make existing workloads cheaper
Enterprises turn to AI for productivity and scaling even if there are no direct profits
Government and national AI projects move forward regardless of short-term economics
The Forward View: Beyond $500 Billion
Nvidia has visibility into $500 billion in Blackwell and Rubin revenue through 2026. The company is securing supply chains, expanding capacity, and ramping manufacturing to keep up with ever-growing demand.
For Q4 Nvidia is projecting approximately $65 billion in revenue, plus or minus 2%. The company also released guidance for a non-GAAP gross margin of 75%, plus or minus 50 basis points. This forecast came in well above average analyst expectations, reflecting continued robust demand for their AI chips and strong pricing power.
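The guidance tolerances translate into concrete ranges. The midpoints ($65 billion revenue, 75.0% non-GAAP gross margin) and the tolerances (±2%, ±50 basis points) are taken directly from the guidance quoted above.

```python
# Q4 revenue guidance: $65B, plus or minus 2%
rev_mid = 65e9
rev_low, rev_high = rev_mid * 0.98, rev_mid * 1.02

# Non-GAAP gross margin guidance: 75.0%, plus or minus 50 basis points
margin_mid_bps = 7500  # 75.0% expressed in basis points
margin_low = (margin_mid_bps - 50) / 100
margin_high = (margin_mid_bps + 50) / 100

print(f"revenue guidance range: ${rev_low / 1e9:.1f}B to ${rev_high / 1e9:.1f}B")
print(f"gross margin range:     {margin_low:.1f}% to {margin_high:.1f}%")
```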
The elephant in the room, however, is margin compression: Nvidia’s costs rising faster than the prices it can command for its products and services. This is also one of the leading bearish cases against Nvidia.
In general, I see little to be concerned about here, for a few reasons:
Nvidia has a backlog of approximately $500 billion worth of demand for Blackwell and Rubin products. With demand this overwhelming, Nvidia rarely needs to offer discounts or cut deals with customers, which lets it set a price premium that outpaces margin compression.
That massive $500 billion backlog is typically governed by commercial purchase agreements with hyperscalers and enterprise clients. In most cases, these are legally binding contracts that specify quantities, pricing, delivery schedules, and cancellation and force-majeure terms. Historically, hyperscalers have rarely canceled or walked away from such commitments with Nvidia, given the strategic nature of these chips.
The chips are further bundled with proprietary software stacks (CUDA, AI Enterprise, Omniverse) that are deeply integrated into customer operations. This creates customer lock-in and generates high-margin, recurring software revenue, insulating overall margins from competitive threats and hardware price pressure. Nvidia has a formidable defense-in-depth moat backstopped by its non-hardware revenue growth.
The Final Verdict
Jensen Huang can afford to be confident because he’s selling the best shovels money can buy in our generation’s gold rush. He doesn’t really care whether the miners strike gold, as long as he keeps selling shovels.
After the post-earnings retreat, Nvidia is trading at a considerable discount here, and I have a 12-month price target of $250 a share.
Conduct yourself accordingly.