A UCLA/MIT study reveals that removing an AI assistant after just 10 minutes causes performance to drop below initial levels. How can you avoid this trap?


Researchers from UCLA, MIT, Oxford and Carnegie Mellon conducted an experiment with 1,222 participants. The protocol was simple: provide an AI assistant for ten minutes, then take it away. The results surprised even the scientists.
After removing the assistant, participants' performance didn't simply return to its initial level. It dropped below that of the control group, which had never used AI. Even worse: participants stopped trying to solve problems on their own.
Researchers dubbed this phenomenon the 'boiling frog' effect, referencing the metaphor of a frog that doesn't react to gradually heating water. Your teams get accustomed to AI without perceiving the erosion of their own skills. And when the tool becomes unavailable, paralysis sets in.
This article analyzes the mechanisms behind this dependency and offers concrete strategies to leverage AI benefits without falling into this trap. Because the question is no longer whether you'll deploy AI assistants, but how to do so without weakening your organization.
The 2024 study tested participants on various cognitive tasks: problem-solving, writing, data analysis. The test group received a powerful AI assistant for exactly ten minutes. The control group worked without assistance.
The numbers are telling:
The most concerning aspect isn't the performance drop. It's the behavioral change: participants developed a form of cognitive passivity in just ten minutes of use.
The human brain constantly optimizes its energy expenditure. When an external tool takes charge of a cognitive function, the brain immediately reduces resource allocation to that function. This is a perfectly normal and even desirable adaptation mechanism in most contexts.
The problem arises when modern AI assistants take over too much, too quickly. Unlike a calculator or spell checker, a generative AI assistant can handle high-level cognitive functions: thought structuring, complex problem-solving, decision-making.
The brain delegates these functions without the user being aware. Hence the frog metaphor: the temperature rises, but no one jumps out of the pot.
This is the most visible level of dependency and the easiest to manage. An employee can no longer write an email without ChatGPT. A salesperson can't build a proposal without assistance. A developer no longer codes without Copilot.
Identifiable symptoms:
This level of dependency is manageable with backup procedures and regular training. But it often masks deeper problems.
More insidious, this dependency affects thought processes themselves. Employees no longer know how to structure their thinking without AI. They lose critical analysis capability because they've gotten used to accepting generated responses without verification.
At AISOS, we observe this phenomenon in AI maturity audits: entire teams that no longer question the assistant's suggestions. The verification reflex disappears within weeks of intensive use.
The consequences are serious:
This is the most dangerous level for SMEs and mid-market companies. The company becomes dependent on a specific AI provider for critical functions. Business processes are redesigned around the tool's capabilities. The day the vendor changes pricing, modifies their API, or disappears, business operations are threatened.
Concrete examples of strategic dependency:
Watch for these changes in your team's behavior:
Regularly measure these metrics:
A dependency ratio above 60% for critical functions should trigger an alert. Beyond 80%, you're in a major risk zone.
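As an illustration of the thresholds above, the dependency ratio and its alert level can be computed from simple task counts. This is a minimal sketch, assuming you track how many tasks in a critical function were completed with AI assistance; the function names are illustrative, not part of any standard tooling:

```python
def dependency_ratio(ai_assisted_tasks: int, total_tasks: int) -> float:
    """Share of tasks completed with AI assistance, as a percentage."""
    if total_tasks == 0:
        return 0.0
    return 100 * ai_assisted_tasks / total_tasks

def risk_level(ratio: float) -> str:
    """Map a dependency ratio to the alert thresholds described above:
    above 60% triggers an alert, above 80% is a major risk zone."""
    if ratio > 80:
        return "major risk"
    if ratio > 60:
        return "alert"
    return "acceptable"

# Example: 35 of 50 client proposals drafted with AI assistance
ratio = dependency_ratio(35, 50)   # 70.0%
level = risk_level(ratio)          # "alert"
```

Tracked monthly per critical function, this gives an early-warning signal before a vendor change or outage exposes the dependency.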
At the management level, ask these questions:
A practice adopted by several technology companies: prohibit AI assistant use one day per week or month. The goal isn't to punish but to keep basic skills active.
Implementation guidelines:
AISOS audits reveal this practice reduces recovery time by 40% during unexpected outages.
All AI output must be verified before use. This simple rule is rarely applied in practice. Employees end up trusting it blindly after a few weeks of satisfactory results.
To make verification a reflex:
When an employee uses AI to solve a problem, require them to document why the proposed solution is relevant. This practice forces maintenance of critical thinking and creates a knowledge base for the company.
Recommended format for each AI-assisted deliverable:
Don't put all your eggs in the same algorithmic basket. Using multiple AI assistants for similar functions offers three advantages:
In practice, identify your three most AI-dependent functions and ensure you have at least two options for each.
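The redundancy principle can be sketched as a simple fallback chain: if the primary provider fails, the request is routed to the next one. The provider functions below are hypothetical placeholders (in practice they would wrap real vendor SDKs); only the fallback logic is the point:

```python
from typing import Callable, List

# Hypothetical providers standing in for real vendor SDK calls.
def provider_a(prompt: str) -> str:
    raise ConnectionError("provider A unavailable")  # simulate an outage

def provider_b(prompt: str) -> str:
    return f"[provider B] answer to: {prompt}"

def generate_with_fallback(prompt: str,
                           providers: List[Callable[[str], str]]) -> str:
    """Try each configured provider in order; fail only if all are down."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

# The outage of provider A is absorbed transparently by provider B.
result = generate_with_fallback("summarize Q3 sales",
                                [provider_a, provider_b])
```

Even this rudimentary routing layer keeps business processes decoupled from any single vendor's API, pricing, or availability.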
The classic mistake: training newcomers to use AI from day one. Result: they never learn business fundamentals. They become AI operators, not domain experts.
Recommended approach:
AI's immediate productivity gains are undeniable. A 2023 BCG study measured gains of 25% to 40% on writing and analysis tasks. But these figures mask a phenomenon economists call the short-term productivity paradox.
When all your competitors use the same AI tools, productivity gains neutralize each other. What differentiates your company is your teams' ability to go beyond what AI proposes. This ability rests precisely on the skills that the boiling frog effect erodes.
Companies that will create value in five years won't be those that automated the most. They'll be those that knew how to preserve and develop their teams' collective intelligence while exploiting AI as an amplifier, not a substitute.
A corporate AI policy must explicitly address dependency risk. Here are elements to include:
This policy should be reviewed at least annually given the rapid evolution of technologies and usage patterns.
The UCLA/MIT study scientifically demonstrates what many leaders suspect: AI assistants can weaken as much as they strengthen. The boiling frog effect is real, measurable, and affects all organizations deploying AI without precaution.
The solution isn't to reject AI. It's to adopt a conscious approach that maximizes benefits while preserving your teams' autonomy and skills. The five strategies presented in this article provide an actionable starting point.
The challenge for your SME or mid-market company is to transform AI into a sustainable competitive advantage rather than a source of fragility. This begins with an honest assessment of your current dependency level and continues with implementing safeguards adapted to your context.
AISOS supports SME and mid-market leaders in this approach: AI dependency audits, usage policy definition, and visibility strategies in generative search engines. Contact us to assess where your organization stands.