Earnings season has a rhythm to it. You sit down with the numbers, look at what the companies said—and what they didn’t—and try to understand the direction things are moving. This time, the theme is familiar: AI. Not just as a product or initiative, but as a core part of how large companies are thinking about infrastructure, competition, and long-term strategy.
Hardware: NVIDIA Still at the Center (quarterly financials not yet released as of 5/26)
NVIDIA remains dominant in AI compute. The latest earnings show how heavily the major players continue to rely on it, even as they invest billions trying to reduce that dependence.
- Microsoft spent $21.4 billion this quarter, and even with that level of investment, it says AI compute is still tight.
- Amazon spent more: $24.3 billion—mostly on AWS infrastructure and its own chip development.
- Google announced plans to spend $75 billion this year, with a mix of custom chip R&D and GPU purchases.
The effort is massive. But so far, no one’s close to displacing NVIDIA’s position in the AI stack.
Q: Why Do Proprietary Chips Still Struggle?
It’s not just about raw chip performance. AI infrastructure involves more: memory bandwidth, interconnects, software compatibility, developer ecosystems. This is where companies like Google and Amazon have run into friction.
Google’s TPUs, for example, are powerful. But integration is the real issue. Many models are built for NVIDIA’s CUDA stack. If a chip doesn’t fit into that ecosystem—if it requires extra work to adapt—it often doesn’t get used. The same risk applies to Amazon’s custom chips. Without broad compatibility, performance alone isn’t enough.
Meanwhile, AMD is quietly benefiting. Its data center revenue grew 57% year-over-year, driven by EPYC CPUs. AI data centers today are GPU-heavy, often pairing 8–10 GPUs with a single CPU, so rising CPU demand implies even stronger GPU demand.
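The arithmetic behind that ratio is worth making explicit. A rough sketch, using the article's 8–10 GPUs-per-CPU pairing; the server counts below, and the reuse of AMD's 57% growth figure as a stand-in for CPU demand growth, are illustrative assumptions, not reported numbers:

```python
# Back-of-envelope: why rising CPU demand in AI data centers implies
# even larger absolute GPU demand. The 8-10 GPUs-per-CPU ratio comes
# from the article; the server counts are hypothetical.

def gpu_demand(cpu_servers: int, gpus_per_cpu: int = 8) -> int:
    """GPUs needed if each CPU server anchors `gpus_per_cpu` GPUs."""
    return cpu_servers * gpus_per_cpu

# Suppose CPU demand grows 57% from a hypothetical 1,000 units:
before = gpu_demand(1_000)  # 8,000 GPUs
after = gpu_demand(1_570)   # 12,560 GPUs
print(after - before)       # 4,560 extra GPUs vs. 570 extra CPUs
```

The growth rates are the same, but the GPU base is 8–10x larger, so every incremental CPU sale pulls a multiple of GPU sales along with it.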
Cloud Platforms and Gen AI Companies: From Tight Integration to Flexibility
Not long ago, cloud providers were tightly coupled to specific AI models. A model was built for a platform and mostly stayed there. That’s changing.
New open-source models and cross-platform tools are making things more modular. The relationship between cloud and model is becoming more flexible. Providers are building their own models but also supporting others. This shift toward decoupling makes the ecosystem more dynamic—and more competitive.
How the Big Three Are Approaching AI
Each of the major cloud providers is taking a slightly different path.
Microsoft
Apps: Deep integration of Copilot across Microsoft 365.
Developers: GitHub Copilot hit 15M users, with 5× growth in tokens processed.
Cloud: Azure Intelligent Cloud revenue up 21% YoY.
Microsoft has clear alignment: build tools, push them to users, monetize across software and cloud.
Google
Challenges: Generative AI has disrupted its search/ad model.
Cloud: Google Cloud revenue rose 28% YoY. Gemini family remains competitive.
Google is in a strategic transition, with strong research but an unclear path to monetization in search.
Amazon
Focus: AWS supports a wide range of AI workloads but is more inward-facing, serving Amazon's own businesses (e.g., retail).
Chip R&D: Still investing heavily in in-house silicon.
Amazon is less focused on external developer ecosystems than Microsoft and Google.
Second-Tier Clouds Are Gaining Ground
Companies like CoreWeave, Oracle, and Databricks are scaling quickly. NVIDIA's AI Factory initiative and open-source models like DeepSeek are helping them grow.
CoreWeave pivoted from cryptocurrency mining to become a full-scale AI cloud provider. Oracle and Databricks are bundling software and cloud infrastructure into end-to-end AI platforms.
These providers aren’t yet at the scale of the big three, but they’re moving fast and serving niches the big clouds can’t always address efficiently.
Commercialization: AI Agents Are Making Money
The companies monetizing AI best are those with three key assets: users, proprietary data, and existing workflows. They’re not reinventing their products—they’re embedding AI agents into what already works.
Palantir, Intuit, Salesforce, and ServiceNow are all applying AI to internal data to generate incremental value.
Even traditional enterprise software firms like Oracle and SAP are embedding AI agents to automate workflows.
The trend line is clear. Investment is up. Adoption is expanding. AI isn’t slowing down—it’s becoming a core layer of the enterprise stack. Not just at the infrastructure level, but across applications, business models, and strategic priorities.