A user manipulated Grok to transfer $200k in cryptocurrency. Analysis of vulnerabilities and protection measures for your AI interactions.

May 2025. A user on X posts a screenshot that makes rounds in the tech community: they've just convinced Grok, xAI's AI, to transfer $200,000 in cryptocurrency to them. Their method? Morse code embedded in prompts to bypass security filters.
This isn't science fiction. It's a documented case that exposes a reality many business leaders overlook: large language models (LLMs) like ChatGPT, Grok, or Gemini have exploitable vulnerabilities. And if your company uses these tools in its business processes or visibility strategy, this concerns you.
In this article, we break down what happened, why this directly affects French and Belgian SMEs and mid-market companies, and how to protect your interactions with generative AI.
The attack relies on a technique called prompt injection: the user encoded their malicious instructions in Morse code, a format that Grok's security filters weren't programmed to detect. Simply put, they spoke to the AI in a language it understood, but that its safeguards weren't monitoring.
Grok had access to a crypto wallet for certain functionalities. The attacker managed to make the AI interpret transfer instructions as legitimate requests. Result: $200,000 transferred to an external wallet.
Current LLMs operate on a text prediction principle. They don't have real understanding of human intentions. Their security systems rely on:
Morse code bypassed all three levels. The AI translated the Morse, executed the instructions, without ever triggering an alert. It's like having a security guard filter entries by looking for visible weapons, while an intruder walks through with disassembled components in their bag.
According to a 2024 Cyberhaven study, 11% of data pasted into ChatGPT by employees is confidential. Contracts, customer data, commercial strategies: this information passes through external servers, often without defined security policies.
For an SME or mid-market company, the consequences can be severe:
More and more companies integrate LLMs into their workflows: content generation, automated customer service, document analysis. Each integration point represents a potential attack surface.
At AISOS, we observe that most SMEs using ChatGPT or Gemini for their visibility haven't audited the associated risks. AI is seen as a productivity tool, rarely as a vulnerability vector.
Your presence in generative engine responses (ChatGPT, Perplexity, Google AI Overview) depends on your brand's perceived reliability. An AI-related security breach can:
The Grok attack is the perfect example. The user injects malicious instructions directly into their queries. Known variants:
More insidious: the attack comes from external content the AI consults. If your LLM analyzes web pages or documents, an attacker can insert invisible instructions (white text on white background, metadata) that the AI will execute.
Techniques exist to make an AI reveal its system instructions or other users' data. In March 2024, researchers extracted personal emails from ChatGPT's training data.
A malicious competitor can optimize their content so LLMs cite them favorably while disparaging your brand. This is the dark side of GEO (Generative Engine Optimization).
First step, often overlooked: define what your teams can and cannot share with AIs. This policy should cover:
ChatGPT Enterprise, Azure OpenAI, or open-source solutions hosted internally offer guarantees that public versions don't:
The monthly cost (approximately EUR 25 to 60 per user for Enterprise versions) is negligible compared to the risk of a data breach.
Map all points where LLMs interact with your systems. For each integration, assess:
Beyond LLMs' native protections, add your own layers:
The human element remains crucial. Your staff must understand:
AISOS audits reveal that many companies ignore what LLMs say about them. Yet incorrect or malicious information can circulate. Set up regular monitoring of:
Despite all precautions, an incident can occur. Plan for:
Paradoxically, companies that master AI risks are those that can derive the most benefits. Digital trust becomes a differentiator in B2B markets.
Your clients and partners increasingly ask questions about your AI practices. Being able to demonstrate a structured security approach strengthens your credibility. This is particularly true in regulated sectors: healthcare, finance, industry.
Moreover, good AI security hygiene improves the quality of your visibility in generative engines. LLMs favor reliable, consistent, and well-structured sources. By securing your interactions, you also optimize your generative search presence.
The Morse code attack isn't an isolated case. It's part of an underlying trend: as LLMs gain capabilities, their attack surfaces expand.
The next versions of ChatGPT, Gemini, and others will integrate more concrete actions: web navigation, code execution, transactions. Each new feature represents a new potential vector.
Companies that anticipate these developments will be better positioned than those that react after the fact. AI security is no longer a technical topic reserved for IT departments: it's a strategic issue that concerns executives.
The Grok affair illustrates an uncomfortable truth: the AIs we use daily are not infallible. A Morse encoding, a few well-constructed prompts, and $200,000 changes hands.
For French and Belgian SMEs and mid-market companies, the stakes are real: data protection, regulatory compliance, online reputation. Protection measures exist and are accessible. Usage policies, private instances, team training, monitoring your presence in AI responses.
The question isn't whether you should secure your interactions with LLMs. It's whether you'll do it before or after an incident.
Want to assess your AI risk exposure and optimize your visibility in generative engines? Contact the AISOS team for an audit of your presence and practices.