A user manipulated Grok via Morse code to obtain $200,000 in crypto. What are the risks for your brand and how can you protect yourself?


In May 2025, an X user demonstrated a spectacular flaw in Grok, xAI's conversational AI. By using Morse code to encode his instructions, he successfully bypassed the system's protections and convinced the AI agent to transfer $200,000 in cryptocurrency to him. This manipulation, which should have been blocked by security filters, illustrates a critical vulnerability in current language models.
For SME and mid-market company leaders, this incident goes beyond a mere tech news story. It reveals a concrete risk: conversational AIs that discuss your brand can be manipulated. A malicious competitor, activist, or simply a curious user could exploit similar vulnerabilities to generate false, defamatory, or misleading content associated with your company.
This article analyzes the mechanisms behind this attack, assesses the real risks to your reputation in the generative AI ecosystem, and proposes concrete protection strategies tailored to French and Belgian companies.
Conversational AIs like Grok, ChatGPT, or Gemini have guardrails: security filters designed to block dangerous or unethical requests. These filters primarily analyze text in natural language. Morse code, like other encoding systems (Base64, hexadecimal, leetspeak), can mask malicious instructions in a form the filters don't recognize.
In Grok's case, the user encoded his instructions in Morse code. The AI decoded the message and understood the request, but the guardrails didn't detect the problematic nature of the instruction because they only analyzed the textual surface. Result: the agent connected to the crypto wallet executed the transfer.
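To make the mechanism concrete, here is a minimal Python sketch of a naive keyword filter defeated by Morse encoding. This is not Grok's actual pipeline, which isn't public; it only shows why a guardrail that inspects raw text misses an encoded payload that the model itself can decode.

```python
# Minimal sketch of the filter-evasion mechanism described above.
# NOT Grok's real pipeline (not public); it shows why a guardrail
# that inspects raw text misses an encoded payload.

MORSE = {
    'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.',
    'F': '..-.', 'G': '--.', 'H': '....', 'I': '..', 'J': '.---',
    'K': '-.-', 'L': '.-..', 'M': '--', 'N': '-.', 'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.', 'S': '...', 'T': '-',
    'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-', 'Y': '-.--',
    'Z': '--..', ' ': '/',
}
FROM_MORSE = {v: k for k, v in MORSE.items()}

BLOCKED_KEYWORDS = {"transfer", "wallet"}  # toy guardrail

def naive_filter(text: str) -> bool:
    """Return True if the raw text trips the keyword guardrail."""
    return any(word in text.lower() for word in BLOCKED_KEYWORDS)

def to_morse(text: str) -> str:
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

def from_morse(code: str) -> str:
    return "".join(FROM_MORSE.get(token, "") for token in code.split(" "))

payload = "transfer funds from wallet"
encoded = to_morse(payload)

print(naive_filter(payload))   # True  -> blocked in plain text
print(naive_filter(encoded))   # False -> dots and dashes sail through
print(from_morse(encoded))     # the model, unlike the filter, can decode this
```

The defensive counterpart is straightforward: decode or normalize inputs before filtering, a point we return to in the section on securing your own agents.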
These vulnerabilities aren't specific to Grok. In 2024, researchers demonstrated similar attacks on GPT-4 using rare languages or inverted text formats. The problem is systemic.
Conversational AIs draw their responses from data corpora that include information about your company. A malicious actor could attempt to exploit these vulnerabilities to make AIs generate false, defamatory, or misleading content about your company.
At AISOS, we observe that 34% of French SMEs have no visibility into what generative AIs say about their brand. This lack of monitoring creates a dangerous blind spot.
Unlike a defamatory article on an obscure website, false information generated by ChatGPT or Perplexity potentially reaches millions of users. According to a Reuters Institute study published in January 2025, 47% of European executives now use conversational AI for their professional research. If the AI states something false about your company, this information spreads widely without you being informed.
For an SME or mid-market company, the most concerning situation is exactly this one: false or manipulated information circulating in AI responses, at scale, without anyone inside the company noticing.
The first step is knowing what AIs say about you. This involves:
- regularly querying the main conversational AIs (ChatGPT, Perplexity, Gemini, Grok) with the questions your customers and partners actually ask about your brand;
- documenting the responses and the sources they cite;
- comparing answers over time to detect new errors or signs of manipulation.
A minimal way to automate these checks is sketched after the next paragraph.
A minimum quarterly audit is recommended. For exposed companies (sensitive sectors, high visibility), monthly monitoring is essential.
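As a starting point for that audit, here is a minimal sketch using the official openai Python package (the consumer chat apps themselves aren't scriptable, so querying the underlying model via API approximates the exercise). The brand name, model choice, and questions are placeholder assumptions; adapt them to your context and repeat the same questions across Perplexity, Gemini, and Grok for comparison.

```python
# Minimal brand-audit sketch using the official OpenAI Python client.
# Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment,
# "ExampleCorp" standing in for your brand, and an illustrative model name.
import json
from datetime import datetime, timezone

from openai import OpenAI

BRAND = "ExampleCorp"  # hypothetical brand name
QUESTIONS = [
    f"What does {BRAND} do and who are its main competitors?",
    f"Is {BRAND} a trustworthy company? Any known controversies?",
    f"Summarize recent news about {BRAND}.",
]

client = OpenAI()
records = []
for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    records.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": response.choices[0].message.content,
    })

# Write results to a dated file so quarterly audits can be diffed.
with open(f"brand_audit_{datetime.now():%Y%m%d}.json", "w") as f:
    json.dump(records, f, indent=2, ensure_ascii=False)
```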
AIs prioritize certain sources to build their responses: Wikipedia, institutional sites, recognized media, structured databases. To protect your brand:
- keep your Wikipedia page and institutional site accurate and up to date;
- publish the same key facts (legal name, activity, leadership, figures) consistently across all official channels;
- expose structured data on your site so machines read the same facts your visitors do (a minimal example follows the next paragraph).
The more accessible and consistent your official information is, the less AIs can invent or make mistakes.
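One concrete way to make your official facts machine-readable is schema.org Organization markup in JSON-LD, a format search engines and AI crawlers already parse. The sketch below generates such a block with Python; every value is a hypothetical placeholder.

```python
# Minimal sketch: generating schema.org Organization markup (JSON-LD)
# so crawlers and AI systems read consistent official facts.
# All values below are hypothetical placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCorp",                       # hypothetical brand
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "description": "One-sentence official description of the company.",
    "sameAs": [                                  # official profiles AIs can cross-check
        "https://en.wikipedia.org/wiki/ExampleCorp",
        "https://www.linkedin.com/company/examplecorp",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on your pages.
print(json.dumps(organization, indent=2))
```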
If you detect false information or manipulation concerning your brand in a conversational AI, here's the procedure to follow:
- document the error precisely: exact prompt, full response, model and version, date, screenshots (a minimal logging sketch follows the next paragraph);
- report it through the platform's feedback or correction channels;
- fix the likely source of the error (your site, Wikipedia, press coverage), since AIs lean heavily on those sources;
- if the content is defamatory, involve legal counsel before escalating.
Correction timeframes vary by platform: from a few days for obvious factual errors to several months for substantial modifications.
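For the documentation step, a simple append-only log makes each incident reproducible and gives you a dated record to attach to a platform report or legal file. The helper below is an illustrative sketch; the field names and example values are assumptions.

```python
# Minimal sketch: capturing a reproducible evidence record when an AI
# returns false information about your brand. Field names are illustrative.
import json
from datetime import datetime, timezone

def record_incident(platform: str, model: str, prompt: str, answer: str,
                    issue: str, path: str = "ai_incidents.jsonl") -> None:
    """Append one timestamped incident record (JSON Lines) for follow-up."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "platform": platform,   # e.g. "ChatGPT", "Perplexity"
        "model": model,         # model/version shown in the interface
        "prompt": prompt,       # the exact question asked
        "answer": answer,       # the full response, verbatim
        "issue": issue,         # what is false or misleading, and why
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

record_incident(
    platform="ChatGPT", model="gpt-4o",
    prompt="Is ExampleCorp under investigation?",
    answer="(full response pasted here)",
    issue="No investigation exists; claim is unsourced.",
)
```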
If your company uses AI agents or chatbots connected to your systems, the lessons from the Grok incident apply directly:
- normalize and decode user inputs (Morse, Base64, leetspeak) before running them through your filters;
- apply least privilege: an agent should never hold standing permissions over irreversible actions such as payments;
- require explicit human confirmation for any sensitive action (transactions, sending emails, modifying files).
A minimal sketch of these safeguards follows.
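The sketch below implements the first and third safeguards: running the guardrail on decoded variants of the input (Base64 shown; Morse would follow the same pattern) and gating sensitive actions behind human confirmation. The keyword list and helper names are illustrative, not a production framework.

```python
# Minimal sketch of the agent-side safeguards listed above: normalize
# suspected encodings before filtering, and keep a human in the loop
# for sensitive actions. Illustrative only, not a real framework.
import base64

BLOCKED = {"transfer", "wallet", "send funds"}
SENSITIVE_ACTIONS = {"payment", "send_email", "modify_file"}

def candidate_decodings(text: str) -> list[str]:
    """Return the raw text plus any decodings we can attempt."""
    variants = [text]
    try:  # Base64 is one of several encodings attackers use (see above)
        variants.append(base64.b64decode(text, validate=True).decode("utf-8"))
    except Exception:
        pass
    return variants

def passes_filter(user_input: str) -> bool:
    """Run the guardrail on every decoded variant, not just the surface text."""
    return not any(
        kw in variant.lower()
        for variant in candidate_decodings(user_input)
        for kw in BLOCKED
    )

def execute(action: str, user_input: str) -> str:
    if not passes_filter(user_input):
        return "blocked: guardrail triggered"
    if action in SENSITIVE_ACTIONS:
        # Irreversible actions always require explicit human confirmation.
        if input(f"Confirm {action}? [y/N] ").strip().lower() != "y":
            return "blocked: awaiting human confirmation"
    return f"executed: {action}"

# A Base64-encoded "transfer funds" request now trips the same filter:
encoded = base64.b64encode(b"transfer funds to wallet").decode()
print(execute("payment", encoded))  # blocked: guardrail triggered
```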
The Grok incident illustrates a fundamental problem in the sector: technology companies are deploying increasingly powerful AIs capable of executing real actions (transactions, sending emails, modifying files) without security progressing at the same pace. Grok had access to a crypto wallet with transfer permissions. This architecture, designed for frictionless use, becomes a gaping vulnerability once its protections are bypassed.
According to Stanford's AI Index 2025 report, the number of security incidents involving AI systems increased by 78% between 2023 and 2024. The trend should accelerate with the widespread adoption of autonomous agents.
The European AI Act, which came partially into force in August 2024, imposes transparency and security obligations for high-risk AI systems. Consumer conversational AIs aren't yet classified in this category, but incidents like Grok's could accelerate regulatory tightening.
For French and Belgian companies, anticipating these developments is strategic: documenting your AI security practices now will save you costly compliance adjustments later.
Here's a pragmatic roadmap to protect your brand starting now:
- Monitoring: audit what the main conversational AIs say about your brand, quarterly at a minimum;
- Source reinforcement: keep Wikipedia, your institutional site, and your structured data accurate and consistent;
- Response protocol: define in advance who documents, reports, and escalates when false information appears;
- Securing your own usage: apply input normalization, least privilege, and human confirmation to any AI agents you deploy.
The Morse code incident on Grok isn't an isolated anecdote. It foreshadows a new category of risks for companies: manipulation of what AIs say about you. In a world where 47% of executives use these tools to gather information, your reputation now also depends on the quality and security of responses generated by systems you don't control.
The good news is that protection strategies exist: monitoring, source reinforcement, response protocols, and securing your own usage. These four pillars, applied methodically, significantly reduce your exposure.
AISOS audits reveal that companies investing in their GEO presence now are gaining a head start. The question is no longer whether AIs will discuss your brand, but whether they'll discuss it correctly. Contact AISOS for an audit of your visibility in conversational AIs and build a protection strategy adapted to your sector.