An Nvidia executive admits that AI compute exceeds salary costs. Analysis and strategies for SMEs and mid-market companies looking to invest intelligently.


In May 2025, an Nvidia executive dropped a bombshell at an investor conference: "The cost of compute far exceeds the cost of employees." Coming from the world's leading supplier of artificial intelligence chips, this statement deserves our attention.
For SME and mid-market company leaders considering AI investments, this finding changes the game. The economic equation that AI solution vendors present to you—where automation gradually replaces payroll costs—runs up against a technical reality: running generative AI models costs a fortune in infrastructure.
This article analyzes the concrete implications of this revelation for your business strategy. We'll examine why costs are so high, how they're evolving, and most importantly, how to invest intelligently in AI without burning through your cash reserves.
To understand Nvidia's claim, let's look at the orders of magnitude. An H100 GPU, the reference chip for AI training and inference, costs approximately EUR 30,000 to 40,000 per unit. A company wanting to run a high-performance language model in-house needs several dozen of these chips.
Add to that the energy, cooling, hosting, and specialized staff needed to keep such a cluster running.
In France, the total cost of a skilled employee, including benefits, ranges between EUR 50,000 and 90,000 per year depending on the position. A writer, administrative assistant, junior analyst: around EUR 55,000 all-in.
To have the same work performed by AI hosted in-house with comparable performance, you'll easily spend three to five times that amount on infrastructure and energy. Nvidia's calculation isn't marketing provocation: it's an accounting reality.
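The "three to five times" comparison above can be sketched as a back-of-the-envelope calculation. All figures below are illustrative assumptions drawn from the ranges cited in this article (GPU price, "several dozen" chips, a EUR 55,000 all-in employee); the amortization horizon and operating costs are assumptions, not vendor data.

```python
# Illustrative comparison: in-house AI infrastructure vs. one skilled employee.
# Every figure here is an assumption for the sake of the exercise.

GPU_UNIT_COST_EUR = 35_000           # mid-range H100 price cited above
GPU_COUNT = 24                       # "several dozen" chips
AMORTIZATION_YEARS = 4               # assumed hardware depreciation horizon
ANNUAL_ENERGY_AND_OPS_EUR = 60_000   # assumed energy, cooling, hosting

EMPLOYEE_ALL_IN_EUR = 55_000         # writer / assistant / junior analyst

annual_infra_cost = (GPU_UNIT_COST_EUR * GPU_COUNT / AMORTIZATION_YEARS
                     + ANNUAL_ENERGY_AND_OPS_EUR)
ratio = annual_infra_cost / EMPLOYEE_ALL_IN_EUR

print(f"Annual infrastructure cost: EUR {annual_infra_cost:,.0f}")
print(f"Multiple of one employee:   {ratio:.1f}x")
```

Under these assumptions the infrastructure runs close to five times the cost of the employee it is meant to replace, which is exactly the order of magnitude behind Nvidia's claim.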
Generative AI models are growing exponentially. GPT-3 had 175 billion parameters in 2020; GPT-4, released in 2023, is estimated to exceed one trillion. Each generation multiplies computing requirements.
This inflation has direct consequences on usage costs. When you query ChatGPT or a competing AI assistant, each request mobilizes server resources billed per token, the text unit that models process. The bigger the model, the more each token costs to process.
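Per-token billing is easy to estimate once you know the rates. The sketch below uses hypothetical placeholder prices, not any vendor's actual rate card; the point is the mechanics, not the figures.

```python
# Illustrative per-token billing estimate.
# Prices are hypothetical placeholders, not a real vendor's rates.

PRICE_PER_1K_INPUT_TOKENS_EUR = 0.003   # assumed rate for prompt tokens
PRICE_PER_1K_OUTPUT_TOKENS_EUR = 0.010  # assumed rate for generated tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API request under the assumed rates."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS_EUR
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS_EUR)

# A typical assistant exchange: a long prompt plus a medium-length answer.
cost = request_cost(input_tokens=2_000, output_tokens=500)
print(f"EUR {cost:.4f} per request")
```

A fraction of a cent per request looks negligible until you multiply it by thousands of requests a day, which is where the budget surprises in the next sections come from.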
Demand for AI chips far exceeds supply. Nvidia holds over 80% of the data center GPU market. This dominant position, combined with limited production capacity at the TSMC foundry in Taiwan, keeps prices high.
Cloud giants—Microsoft, Google, Amazon—pre-purchase colossal capacity for their services. SMEs and mid-market companies find themselves at the end of the chain, with pricing reflecting this structural shortage.
Using AI via cloud APIs (OpenAI, Anthropic, Google) seems cheaper than investing in your own infrastructure. Only in appearance. At AISOS, we observe that companies systematically underestimate their actual consumption:
A mid-market company deploying an AI assistant for customer service can easily reach EUR 5,000 to 15,000 monthly in API costs, not counting integration and maintenance.
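Scaling the per-request arithmetic to a customer-service workload shows how quickly the monthly bill reaches the range cited above. The volumes and blended per-request cost below are assumptions for illustration, not measured data.

```python
# Rough monthly projection for an AI customer-service assistant.
# Both figures are assumptions chosen for illustration.

COST_PER_REQUEST_EUR = 0.01   # assumed blended API cost per exchange
REQUESTS_PER_DAY = 30_000     # assumed support volume for a mid-market firm
DAYS_PER_MONTH = 30

monthly = COST_PER_REQUEST_EUR * REQUESTS_PER_DAY * DAYS_PER_MONTH
print(f"EUR {monthly:,.0f} / month before integration and maintenance")
```

At these assumed volumes the bill lands squarely in the EUR 5,000 to 15,000 monthly range, before a single euro of integration or maintenance is counted.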
The promise of AI replacing entire teams at lower cost doesn't hold up. At least not in 2025, and probably not before 2028-2030 according to current analyst projections. Leaders who lay off staff in order to automate often discover that the promised savings don't materialize.
The profitable approach consists of using AI as a productivity amplifier rather than a substitute. A writer assisted by AI can produce three times more content. A salesperson equipped with AI analysis tools qualifies prospects better. ROI is found in increasing human capabilities, not eliminating them.
Not all AI uses are economically equal. The applications that pay off share common traits: they amplify skilled humans rather than replace them, and their costs have been validated against real volumes before any scale-up.
There's a category of AI investment often overlooked: your company's visibility in generative search engine responses. ChatGPT, Perplexity, Google AI Overview, Gemini: these tools are becoming major entry points for your B2B prospects.
Unlike internal automation, investing in your AI presence doesn't require expensive infrastructure. It involves optimizing your content, structured data, and digital authority so language models cite you as a reference in your field.
The cost of such a program represents a fraction of what an ambitious automation project would cost, often for superior commercial impact. When an executive asks ChatGPT "who are the best X suppliers in France," appearing in the response is worth more than any advertising budget.
Several developments could lower AI compute costs in coming years: more efficient model architectures, inference optimizations, and growing competition among chip makers. Conversely, other dynamics maintain cost pressure: ever-larger models and demand that continues to outstrip supply.
Gartner and McKinsey analysts anticipate a 20-30% drop in AI inference costs by 2027, mainly through model efficiency gains. Even that won't be enough to make mass automation economically viable for most companies.
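A quick projection makes the point concrete. Applying the analysts' 20-30% range to an illustrative EUR 10,000 monthly bill (an assumed figure, in line with the API costs discussed earlier) still leaves a substantial budget line:

```python
# Apply the projected 20-30% inference-cost drop to an
# illustrative EUR 10,000 monthly bill.
current_monthly = 10_000
projected = {drop: current_monthly * (1 - drop) for drop in (0.20, 0.30)}
for drop, cost in projected.items():
    print(f"-{drop:.0%}: EUR {cost:,.0f}/month")
```

Even at the optimistic end, the bill shrinks, but it does not vanish; the economics of mass automation barely move.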
The strategic recommendation: plan your AI investments on assumptions of stable or even slightly rising costs. Any decrease will be a bonus, not a condition for project profitability.
Before any investment, precisely map the processes that are candidates for AI automation or augmentation. For each, evaluate the current cost of the human work, the realistic AI cost at your actual volumes, and the integration and maintenance overhead.
Use trial offers and pay-as-you-go pricing from cloud platforms to validate your assumptions. A three-month proof of concept costs a few thousand euros and gives you real data on operational costs.
Be wary of commercial demonstrations: they always show optimal cases. Test with your real documents, real queries, real volumes.
For an SME or mid-market company, the commercial impact of being visible in AI search engine responses generally exceeds that of automating internal processes. A prospect discovering you via ChatGPT represents immediate value. An automated process represents future savings, often overestimated.
AISOS audits reveal that fewer than 15% of French SMEs appear in AI assistant responses for their key business queries. The differentiation potential is enormous for early entrants.
Multiply vendor cost estimates by 1.5. Plan for maintenance and evolution costs representing 20-30% of initial investment annually. AI isn't a project you install and forget: it's a living system requiring continuous attention.
The Nvidia executive's statement sets the record straight. AI isn't a miracle solution for reducing labor costs. It's a powerful but expensive tool requiring deep strategic thinking before significant investment.
Companies that will succeed are those that treat AI as a productivity amplifier rather than a substitute, budget its real costs honestly, validate each use case with a small pilot before committing, and invest early in their visibility within AI-generated answers.
Your next step: audit your company's current visibility in ChatGPT, Perplexity, and Google AI Overview responses. You'll likely discover that your competitors already appear there, or that no one has yet taken this position. Either way, the information is worth its weight in gold for your 2025-2026 strategy.