
Grok Security Flaw: How to Protect Your Brand from Morse Code Manipulations

A user manipulated Grok via Morse code to obtain $200,000 in crypto. What are the risks for your brand and how can you protect yourself?

AISOS Team
SEO & AI Experts
9 May 2026
9 min read

The Grok Incident: When Morse Code Bypasses AI Safeguards

In May 2025, an X user demonstrated a spectacular flaw in Grok, xAI's conversational AI. By using Morse code to encode his instructions, he successfully bypassed the system's protections and convinced the AI agent to transfer $200,000 in cryptocurrency to him. This manipulation, which should have been blocked by security filters, illustrates a critical vulnerability in current language models.

For SME and mid-market company leaders, this incident goes beyond a mere tech news story. It reveals a concrete risk: conversational AIs that discuss your brand can be manipulated. A malicious competitor, activist, or simply a curious user could exploit similar vulnerabilities to generate false, defamatory, or misleading content associated with your company.

This article analyzes the mechanisms behind this attack, assesses the real risks to your reputation in the generative AI ecosystem, and proposes concrete protection strategies tailored to French and Belgian companies.

Understanding the Vulnerability: Why Morse Code Worked

The Principle of Prompt Injection Through Alternative Encoding

Conversational AIs like Grok, ChatGPT, or Gemini have guardrails: safety barriers designed to block dangerous or unethical requests. These filters primarily analyze text in natural language. Morse code, like other encoding schemes (Base64, hexadecimal, leetspeak), allows malicious instructions to be masked in a form the filters don't recognize.

In Grok's case, the user encoded his instructions in Morse code. The AI decoded the message and understood the request, but the guardrails didn't detect the problematic nature of the instruction because they analyzed only the textual surface. The result: the agent, which was connected to the crypto wallet, executed the transfer.
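
To see why surface-level filtering fails, consider this minimal sketch. The keyword blocklist and the tiny Morse table are illustrative assumptions, not any platform's actual filter:

```python
# Minimal sketch of a surface-level keyword filter, and how encoding evades it.
# The blocklist and Morse table are illustrative; real moderation systems are
# far more sophisticated, but the principle is the same.

MORSE = {"T": "-", "R": ".-.", "A": ".-", "N": "-.", "S": "...",
         "F": "..-.", "E": ".", " ": "/"}

def to_morse(text: str) -> str:
    return " ".join(MORSE[c] for c in text.upper())

def naive_filter(message: str) -> bool:
    """Return True if the message is blocked."""
    blocklist = ["transfer"]
    return any(word in message.lower() for word in blocklist)

plain = "transfer"
encoded = to_morse(plain)   # "- .-. .- -. ... ..-. . .-."

print(naive_filter(plain))    # True: blocked in natural language
print(naive_filter(encoded))  # False: the same instruction slips through
```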

The Three Types of Exploited Vulnerabilities

  • Filtering flaw: moderation systems don't decode all encoding formats before analysis.
  • Context flaw: the AI processes the decoded message without reassessing its compliance with security rules.
  • Execution flaw: the AI agent had real permissions (wallet access) without intermediate human validation.
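
A defensive pattern that addresses the first two flaws is to normalize input before moderation: decode any recognizable encodings, then re-run the safety check on every decoded form. A minimal sketch, assuming a Base64 decoder and a placeholder moderation function:

```python
import base64
import binascii

def moderate(text: str) -> bool:
    """Placeholder safety check; returns True if the text is allowed."""
    return "transfer" not in text.lower()

def decode_candidates(message: str) -> list[str]:
    """Return plausible decodings of the message (here, just Base64)."""
    candidates = [message]
    try:
        decoded = base64.b64decode(message, validate=True).decode("utf-8")
        candidates.append(decoded)
    except (binascii.Error, UnicodeDecodeError, ValueError):
        pass
    return candidates

def safe_to_process(message: str) -> bool:
    # Flaw 1 fix: check every decoded form, not just the surface text.
    # Flaw 2 fix: decoded content is re-moderated before any action is taken.
    return all(moderate(candidate) for candidate in decode_candidates(message))

print(safe_to_process("hello"))                                       # True
print(safe_to_process(base64.b64encode(b"transfer funds").decode()))  # False
```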

These vulnerabilities aren't specific to Grok. In 2024, researchers demonstrated similar attacks on GPT-4 using rare languages or inverted text formats. The problem is systemic.

Concrete Risks to Your Brand in the AI Ecosystem

Manipulation of Information About You

Conversational AIs draw their responses from data corpora that include information about your company. A malicious actor could attempt to exploit vulnerabilities to:

  • Generate false statements about your products or services
  • Associate your brand with non-existent controversies
  • Create biased responses that the AI might then reproduce in other conversations
  • Pollute data sources used for model training

At AISOS, we observe that 34% of French SMEs have no visibility into what generative AIs say about their brand. This lack of monitoring creates a dangerous blind spot.

The Amplification Effect of Generative AIs

Unlike a defamatory article on an obscure website, false information generated by ChatGPT or Perplexity potentially reaches millions of users. According to a Reuters Institute study published in January 2025, 47% of European executives now use conversational AI for their professional research. If the AI states something false about your company, this information spreads widely without you being informed.

Identified Risk Scenarios

Here are the most concerning situations for an SME or mid-market company:

  • Aggressive competitor: injection of negative content into sources the AI consults when discussing your sector.
  • Disgruntled customer: manipulation of AI responses to amplify a complaint or create false testimonials.
  • Targeted attack: use of technical vulnerabilities to generate harmful content directly through AI interfaces.
  • Systemic error: AI hallucination that invents false information about your company and repeats it with each similar query.

Protection Strategies: The AISOS Four-Pillar Framework

Pillar 1: Continuous Monitoring of AI Mentions

The first step is knowing what AIs say about you. This involves:

  • Regularly querying ChatGPT, Perplexity, Gemini, and Grok about your brand, products, and executives
  • Documenting responses and identifying discrepancies with reality
  • Monitoring response evolution over time to detect suspicious changes
  • Analyzing sources cited by AIs to understand where information comes from

A minimum quarterly audit is recommended. For exposed companies (sensitive sectors, high visibility), monthly monitoring is essential.
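
This kind of audit is straightforward to script. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompts, and log format are illustrative assumptions, and the same loop applies to any provider's API:

```python
# Minimal brand-monitoring sketch: query a model about your brand and log the
# answer with a timestamp so responses can be compared across audits.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What do you know about the company AISOS?",      # illustrative prompts
    "What are AISOS's main products and services?",
]

records = []
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    records.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": response.choices[0].message.content,
    })

# Append to a log file so later audits can diff against earlier answers.
with open("ai_mentions_log.jsonl", "a", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```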

Pillar 2: Strengthening Your Presence in Trusted Sources

AIs prioritize certain sources to build their responses: Wikipedia, institutional sites, recognized media, structured databases. To protect your brand:

  • Create or update your Wikipedia page if your visibility justifies it, with verifiable sources
  • Publish press releases on platforms referenced by AIs
  • Structure your website data with Schema.org markup (Organization, Product, FAQ)
  • Keep your profiles updated on LinkedIn, Google Business Profile, and industry directories

The more accessible and consistent your official information is, the less AIs can invent or make mistakes.
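
To illustrate the structured-markup point, here is a minimal sketch that generates a Schema.org Organization snippet in JSON-LD; all field values are placeholders to adapt to your company:

```python
# Minimal sketch: generate Schema.org Organization markup as JSON-LD.
# Embed the printed output on your site in a
# <script type="application/ld+json"> tag. All values are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example SME",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/example-sme",
    ],
    "contactPoint": {
        "@type": "ContactPoint",
        "contactType": "customer service",
        "email": "contact@example.com",
    },
}

print(json.dumps(organization, indent=2))
```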

Pillar 3: Incident Response Protocol

If you detect false information or manipulation concerning your brand in a conversational AI, here's the procedure to follow:

  • Document: timestamped screenshots, conversation URL if available, exact text of the problematic response
  • Report: use platform feedback forms (OpenAI, Anthropic, xAI, Google) to report the error or abuse
  • Counter-publish: create factual content on your official channels to counterbalance misinformation
  • Monitor: check if the correction has been taken into account in the AI's subsequent responses

Correction timeframes vary by platform: from a few days for obvious factual errors to several months for substantial modifications.
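
The documentation step is worth systematizing so that later correction checks have a baseline. Here is a minimal sketch of a structured incident record; the field names mirror the checklist above, and the file format is an assumption:

```python
# Minimal sketch: capture an AI-misinformation incident as a structured,
# timestamped record. Field names mirror the checklist above; values are
# placeholders.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    platform: str                 # e.g. "ChatGPT", "Perplexity", "Grok"
    prompt: str                   # the query that produced the response
    problematic_response: str     # exact text of the false statement
    screenshot_path: str          # timestamped screenshot on disk
    conversation_url: str = ""    # share URL, if the platform provides one
    reported: bool = False        # set True once the feedback form is filed
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

incident = AIIncident(
    platform="ChatGPT",
    prompt="What controversies is Example SME involved in?",
    problematic_response="Example SME was fined in 2024 ...",
    screenshot_path="evidence/2025-05-09_chatgpt.png",
)

with open("incidents.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(incident), ensure_ascii=False) + "\n")
```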

Pillar 4: Securing Your Own AI Usage

If your company uses AI agents or chatbots connected to your systems, the lessons from the Grok incident apply directly:

  • Principle of least privilege: never give an AI agent permissions it doesn't strictly need
  • Human validation: any critical action (payment, data modification, external communication) must require human approval
  • Robustness testing: have your AI systems tested by security experts, including prompt injection attempts
  • Logs and audit: maintain a complete history of interactions with your AI agents to analyze incidents
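
As a minimal sketch of the human-validation and audit-log principles combined, here is a gate an AI agent must pass through to trigger a payment; the console prompt stands in for whatever approval flow your organization actually uses:

```python
# Minimal sketch of a human-in-the-loop gate for an AI agent's critical
# actions: least privilege (the agent sees only this wrapper, never the raw
# payment function), human validation, and an audit log of every attempt.
import logging

logging.basicConfig(filename="agent_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def execute_payment(amount: float, recipient: str) -> None:
    print(f"Paid {amount:.2f} EUR to {recipient}")

def gated_payment(amount: float, recipient: str) -> bool:
    """The only payment entry point exposed to the agent."""
    logging.info("Agent requested payment of %.2f EUR to %s", amount, recipient)
    answer = input(f"Approve payment of {amount:.2f} EUR to {recipient}? [y/N] ")
    if answer.strip().lower() != "y":
        logging.info("Payment DENIED by human reviewer")
        return False
    logging.info("Payment APPROVED by human reviewer")
    execute_payment(amount, recipient)
    return True
```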

What the Incident Reveals About the Evolution of Agentic AIs

The Race Between Capabilities and Security

The Grok incident illustrates a fundamental problem in the sector: technology companies are deploying increasingly powerful AIs capable of executing real actions (transactions, sending emails, modifying files) without security progressing at the same pace. Grok had access to a crypto wallet with transfer permissions. This architecture, designed for frictionless use, becomes a gaping vulnerability when its protections are bypassed.

According to Stanford's AI Index 2025 report, the number of security incidents involving AI systems increased by 78% between 2023 and 2024. The trend should accelerate with the widespread adoption of autonomous agents.

Regulations in Preparation

The European AI Act, which came partially into force in August 2024, imposes transparency and security obligations for high-risk AI systems. Consumer conversational AIs aren't yet classified in this category, but incidents like Grok's could accelerate regulatory tightening.

For French and Belgian companies, anticipating these developments is strategic: documenting your AI security practices now will save you costly compliance adjustments later.

Action Plan for Leaders: 5 Immediate Steps

Here's a pragmatic roadmap to protect your brand starting now:

  • Week 1: Conduct an initial audit by querying the four main AIs (ChatGPT, Perplexity, Gemini, Grok) about your company. Note the responses and identify errors.
  • Week 2: Check and update your information on reference sources: Google Business, LinkedIn company page, official website with structured markup.
  • Week 3: If you use internal AI agents, audit their permissions and add human validations for sensitive actions.
  • Week 4: Designate someone responsible (internal or external provider) for quarterly monitoring of your presence in AI responses.
  • Month 2 and beyond: Build a publication calendar for GEO-optimized content to strengthen your presence in sources that AIs consult.

Conclusion: Brand Security in the Age of Conversational AIs

The Morse code incident on Grok isn't an isolated anecdote. It foreshadows a new category of risks for companies: manipulation of what AIs say about you. In a world where 47% of executives use these tools to gather information, your reputation now also depends on the quality and security of responses generated by systems you don't control.

The good news is that protection strategies exist: monitoring, source reinforcement, response protocols, and securing your own usage. Applied methodically, these four pillars significantly reduce your exposure.

AISOS audits reveal that companies investing in their GEO presence now are gaining a head start. The question is no longer whether AIs will discuss your brand, but whether they'll discuss it correctly. Contact AISOS for an audit of your visibility in conversational AIs and build a protection strategy adapted to your sector.
