Following the deadly shooting at a Hanukkah event on Sydney’s Bondi Beach, in which at least 15 people were killed, a wave of misinformation spread rapidly online, fueled by fake news websites and amplified by artificial intelligence tools.
As investigators worked to establish facts about the attack, false claims began circulating about the identity of the bystander who helped stop one of the shooters, as well as misleading narratives about the incident itself.
Who Was the Real “Hero” at Bondi Beach?
Video footage from the scene shows a bystander courageously wrestling a gun away from one of the attackers, potentially preventing further loss of life. Authorities and credible media outlets later identified him as Ahmed al Ahmed.
However, a fake news website falsely claimed that the man was named Edward Crabtree, describing him as a local Bondi resident. The unverified claim spread quickly on the social media platform X, where several verified accounts shared it; some of these posts reached more than a million users.
How AI Chatbot Grok Amplified False Claims
The misinformation escalated when Grok, the AI chatbot integrated into X, began repeating the false identity claim when users asked about the incident.
In separate responses, Grok also:
- Claimed the disarming video was an old, unrelated clip
- Misidentified an image of Ahmed al Ahmed as an Israeli hostage in Gaza
- Incorrectly said the footage showed a tropical cyclone rather than a shooting
These responses were later flagged through X’s Community Notes feature and corrected by users on the platform.
Why Did the AI Get It Wrong?
According to experts, including computer scientists who study AI behavior, chatbots like Grok generate answers by predicting likely patterns in existing online data rather than by verifying facts in real time.
When news is breaking, verified information may be scarce or inconsistent. In such situations, AI systems can produce responses that sound plausible but are factually incorrect, especially when they draw on unverified social media content.
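The mechanism can be illustrated with a deliberately tiny sketch: a toy bigram model (a hypothetical illustration, not Grok’s actual architecture) that predicts the next word purely from frequency in its training text. If the training data repeats an unverified claim more often than the true one, the model reproduces the false claim, because it has no notion of truth, only of pattern frequency.

```python
# Toy next-word predictor: a sketch of pattern prediction without
# fact verification (hypothetical example, not how Grok works).
from collections import defaultdict

# Toy training corpus: the false name appears more often than the real one,
# mimicking unverified claims being repeated across social media.
corpus = (
    "the hero was identified as ahmed . "
    "the hero was identified as crabtree . "
    "the hero was identified as crabtree . "
).split()

# Count which word follows which (bigram statistics).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=6):
    """Emit the most frequent continuation: plausible, never verified."""
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        # Pick the statistically most common next word.
        words.append(max(set(options), key=options.count))
    return " ".join(words)

print(generate("the"))  # → "the hero was identified as crabtree ."
```

The model outputs the false name simply because it saw that pattern most often; real chatbots are vastly more sophisticated, but the underlying failure mode, frequency standing in for truth, is the same.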
Wider Concerns About AI-Driven Disinformation
The Bondi Beach case is not an isolated incident. BBC Verify has also documented recent cases where AI tools were used to:
- Circulate fake images linked to the Epstein files
- Spread AI-generated TikTok videos that Ukrainian authorities say are part of a Russian disinformation campaign
These incidents highlight growing concerns about how artificial intelligence can unintentionally accelerate the spread of false information during crises.
Factcheck India Conclusion
The Bondi Beach shooting has renewed scrutiny on:
- Gun ownership and firearms registration in Australia
- The rise in antisemitic incidents, which are not centrally tracked in Australia
- The role of social media platforms and AI tools in shaping public understanding of violent events
As authorities and journalists continue to verify facts, the incident underscores the importance of critical consumption of online information, especially during fast-moving and emotionally charged news events.

