The UCLA/MIT/Oxford study reveals a performance collapse when AI is removed. Here's how to protect your team's skills.


Researchers from UCLA, MIT, and Oxford conducted a revealing experiment on 1,222 participants. The protocol: provide an AI assistant for ten minutes to solve problems, then abruptly remove it. The result surprised even the scientists: users' performance dropped below that of the group that had never used AI.
This phenomenon has an evocative name: the "boiling frog" effect. Like the amphibian that doesn't react to gradually heating water, companies become accustomed to delegating critical skills to artificial intelligence without perceiving the erosion of their internal capabilities.
For SME and mid-market company leaders, this study raises a major strategic question: how can we harness AI's power to gain visibility and productivity without creating a dependency that weakens the organization? This article details the mechanisms of this trap and proposes concrete strategies to avoid it.
The 2024 study divided participants into three groups: a control group without AI, a group with permanent AI assistance, and a group with AI removed after ten minutes. Tasks included logical problem-solving, writing, and data analysis.
Key results:
The human brain constantly optimizes its energy expenditure. When faced with a tool that solves problems efficiently, it quickly deactivates the neural circuits it would normally mobilize. This mechanism, called "cognitive offloading," explains why, since smartphones arrived, we no longer memorize phone numbers.
With generative AI, this phenomenon accelerates. The tool doesn't just store information: it produces reasoning, analysis, and creativity. The delegated skills are therefore deeper and their erosion more problematic.
Marketing and sales teams are adopting ChatGPT and its alternatives en masse to produce emails, LinkedIn posts, and sales pitches. The time savings are real. So is the risk: gradual loss of the "company voice," inability to craft impactful messages without assistance, and increasingly uniform communication.
At AISOS, we observe that the companies most visible in generative search engines are those that have maintained a distinctive editorial voice. Generic content produced entirely by AI gets lost in the crowd and struggles to surface in Perplexity or Google AI Overviews responses.
AI-powered dashboards deliver ready-made recommendations. Convenient, but dangerous if leaders lose the ability to challenge these analyses. An industrial SME that lets AI manage its purchasing without understanding the underlying mechanisms becomes vulnerable to algorithmic errors and context changes that the machine doesn't capture.
Developers who can no longer code without Copilot, accountants dependent on automation software, lawyers unable to draft clauses without AI templates: every profession experiences its version of this phenomenon. Expertise becomes fragile, concentrated in the tool rather than the team.
When AI manages documentation, processes, and internal FAQs, the company risks losing track of its own operations. A tool change or extended outage can then paralyze the organization.
Some companies now impose periods when generative AI tools are disabled. The goal isn't to reject technology but to keep basic skills exercised. One day per month can be enough to preserve fundamental reflexes.
Practical implementation:
For creative and strategic tasks, require a first version written without AI before using the tool to improve, enrich, or reformulate. This approach preserves autonomous thinking capacity while benefiting from technological assistance.
This method generally produces higher-quality content: AI works on a foundation that already contains the company's vision and domain expertise, rather than generating generic content.
Before deploying an AI tool on a process, precisely document how that process works manually. This documentation becomes insurance: if the tool disappears or malfunctions, the team can take back control.
AISOS audits reveal that companies that have preserved this documentation recover four times faster from service interruptions than those that have "forgotten" their previous methods.
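One way to make this documentation durable is to capture the manual version of each process as structured data rather than scattered notes, so a fallback runbook can always be produced on demand. The sketch below is purely illustrative: the process name, steps, owners, and the `fallback_runbook` helper are assumptions for the example, not an AISOS format or a real tool.

```python
# Hypothetical sketch: record the manual version of an AI-assisted process
# as structured data, so the team keeps a usable runbook if the tool fails.
# All names (process, owner, steps) are illustrative examples.

MANUAL_PROCESSES = {
    "supplier_invoice_validation": {
        "owner": "accounting",
        "ai_tool": "invoice-automation SaaS (example)",
        "manual_steps": [
            "Match the invoice against the purchase order number",
            "Check quantities and unit prices line by line",
            "Verify VAT rate and totals by hand",
            "Route invoices above the approval threshold to the CFO",
        ],
        # Tracking the last fully manual run helps spot eroding know-how.
        "last_manual_run": "2024-06",
    },
}

def fallback_runbook(process_name: str) -> str:
    """Render the documented manual steps as a numbered checklist."""
    process = MANUAL_PROCESSES[process_name]
    lines = [f"Fallback runbook: {process_name} (owner: {process['owner']})"]
    lines += [f"{i}. {step}" for i, step in enumerate(process["manual_steps"], 1)]
    return "\n".join(lines)

print(fallback_runbook("supplier_invoice_validation"))
```

The design point is that the documentation lives in a reviewable, versionable form: during a service interruption, the team prints the checklist and takes back control instead of reconstructing the process from memory.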
Tomorrow's critical skill isn't knowing how to use AI but knowing how to supervise it. This means understanding its limitations, detecting its errors, and challenging its recommendations. Training programs must integrate this dimension.
Elements to include in training:
Explicitly define, for each process, what falls under AI and what remains in the human domain. This distribution should prioritize maintaining strategic skills in-house.
Example distribution for content creation:
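As a purely illustrative sketch, such a distribution can be encoded per task in a small, reviewable structure so the AI/human split is explicit rather than implicit. The task names and assignments below are assumptions for the example, not a recommended standard.

```python
# Illustrative only: one possible AI/human split for a content pipeline.
# Task names and assignments are hypothetical, not a prescription.

CONTENT_PIPELINE = [
    ("Choose topic and angle",        "human"),
    ("First draft (company voice)",   "human"),
    ("Rephrasing and enrichment",     "ai"),
    ("Fact-checking and expertise",   "human"),
    ("Formatting and SEO variants",   "ai"),
    ("Final validation and sign-off", "human"),
]

def human_share(pipeline) -> float:
    """Fraction of pipeline steps kept in the human domain."""
    human_steps = sum(1 for _, who in pipeline if who == "human")
    return human_steps / len(pipeline)
```

Keeping the strategic steps (vision, first draft, validation) on the human side is what preserves in-house skill; a quick metric like `human_share` lets leaders check that the balance doesn't silently drift toward full delegation.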
ChatGPT, Perplexity, Google AI Overviews, and Gemini respond directly to user questions by citing sources. For an SME or mid-market company, appearing in these responses becomes a major visibility challenge. But this presence requires producing content that AI can identify as relevant and reliable.
The paradox: to be visible in AI responses, you often need to use AI to produce content. But content generated entirely by AI typically lacks the specificity and expertise that would make it worth citing.
Companies succeeding in generative search engine visibility use AI to amplify their expertise, not replace it. Specifically:
This approach produces content that LLMs identify as an authoritative source: it contains specific information found nowhere else, clearly formulated and well structured.
Regularly assessing your organization's degree of dependency allows you to act before the problem becomes critical. Here are the signals to monitor:
Transitioning to controlled AI use requires a progressive approach. Here's a realistic timeline for an SME or mid-market company:
Months 1-2: audit and mapping
Months 3-4: documentation and training
Months 5-6: testing and adjustments
The UCLA/MIT/Oxford study delivers a clear message: AI dependency isn't a theoretical risk for tomorrow, it's a measurable reality that sets in within minutes of use. For SME and mid-market leaders, ignoring this phenomenon exposes the organization to growing fragility.
The good news: simple strategies allow you to benefit from AI's power while preserving autonomy and internal skills. AI-free days, human draft rules, process documentation, supervision training: these practices don't slow technology adoption, they make it sustainable.
For your visibility in generative search engines as well as your operational resilience, the key remains the same: AI must amplify your expertise, never replace it. Companies that integrate this principle into their digital strategy build a solid competitive advantage, independent of technological uncertainties.
Want to assess your AI dependency level and optimize your visibility in generative search engines? Contact the AISOS team for a personalized audit of your digital presence.