A user extracted $200,000 in crypto from Grok using Morse code. Discover LLM security risks and how to protect your business.


In May 2025, an X user successfully convinced Grok, xAI's AI assistant, to transfer $200,000 in cryptocurrency to them. Their method: Morse code. By encoding malicious instructions in this forgotten format, they bypassed all the system's security filters.
This isn't an isolated case. It's a symptom of a structural problem that many companies are still ignoring: large language models (LLMs) are vulnerable by design. They're trained to be helpful, cooperative, and accommodating. These qualities become weaknesses when an attacker knows how to exploit them.
For SME and mid-market executives deploying chatbots, internal assistants, or AI-based automation tools, this hack serves as a wake-up call. This article provides you with the keys to understanding real risks and implementing effective protections before it's too late.
The Grok attack exploits a technique known as prompt injection. The principle is simple: trick the AI into executing hidden instructions within an apparently harmless query.
The user encoded their true instructions in Morse code, a format that Grok's security filters weren't analyzing. The model, capable of decoding Morse thanks to its training, interpreted these instructions as legitimate. Result: it executed a fund transfer without triggering any alerts.
This attack reveals three fundamental weaknesses:
Morse code is just one variant among others. Security researchers have demonstrated injections via:
According to the OWASP report on LLM vulnerabilities published in 2024, prompt injection ranks first among security risks for generative AI applications.
You may not have a crypto wallet connected to your chatbot. But the risks of poorly secured AI extend far beyond direct fund theft.
An enterprise chatbot often has access to confidential information to answer questions: customer databases, internal documents, HR data. A successful injection can make it disclose this information to an external attacker.
Real example: in 2024, researchers demonstrated that a simple email containing hidden instructions could leak the conversation history of an AI assistant integrated into a mail client.
If your AI is connected to action systems—order validation, email sending, database modification—it can be hijacked to execute unauthorized operations. An attacker could:
A public chatbot that makes inappropriate statements after manipulation can cause considerable media damage. In 2023, a Canadian airline's chatbot was manipulated into promising unauthorized refunds. The company was forced to honor them by court decision.
GDPR imposes strict obligations on personal data processing. An AI that discloses customer information following an injection exposes you to sanctions that can reach 4% of global turnover. The NIS2 directive, applicable since 2024, further strengthens these requirements for critical sectors.
At AISOS, we observe that the majority of enterprise AI deployments neglect security in favor of production speed. Here are the protections to implement right now.
Your AI should only have access to resources strictly necessary for its function. Each connection to an external system, each permission granted, expands the attack surface.
Concrete actions:
No high-impact action should be executed automatically by AI without validation. The Grok case perfectly illustrates this gap: a $200,000 transfer without any human confirmation.
Define clear thresholds:
Simple filters aren't enough. Effective defense combines several approaches:
The system prompt—the instructions that define AI behavior—should never be accessible or modifiable by the end user. Implement an architecture where:
LLM security is a rapidly evolving field. Attacks that fail today may succeed tomorrow after a model update.
Recommended testing program:
Before strengthening your defenses, you need to know where you stand. Here's a quick assessment framework.
For each AI deployed in your organization, answer these questions:
Low risk: Read-only AI, no access to sensitive data, interactions logged.
Moderate risk: AI with access to internal data but no action capability, basic filtering in place.
High risk: AI connected to action systems, sensitive data accessible, no systematic human validation.
Critical risk: AI with financial access or access to regulated data, automatic action capability, absence of security testing.
AISOS audits reveal that 67% of enterprise chatbots deployed in 2024 present at least one unmitigated high risk.
Technical security isn't enough. Lasting protection requires governance adapted to the specificities of generative AI.
If you have an Information Security Management System, ISO 27001 or equivalent, extend it to LLM-specific risks:
Business users who interact with AI must understand the risks. Basic training should cover:
What do you do if your AI is compromised? Define in advance:
The Grok incident isn't an anomaly. It's a preview of what cybersecurity will look like in coming years. LLMs are fundamentally different from traditional software: their behavior isn't deterministic, their attack surface evolves with each interaction.
Companies that will thrive in this environment are those that treat AI security as a strategic issue, not as a technical constraint delegated to IT.
The three priorities for 2025-2026:
Morse code was invented in 1837. Almost two centuries later, it enables hacking of the most advanced systems. Attackers' creativity has no limits. Neither should your vigilance.
If you'd like to assess the security of your current or planned AI deployments, AISOS teams can support you with a comprehensive audit and implementation of protections adapted to your context.