In the aftermath of the deadly Bondi Beach shooting in Australia, X’s AI chatbot Grok came under scrutiny for spreading false and misleading descriptions of widely shared video footage of the attack.
What happened at Bondi Beach?
On the evening of December 14, a large crowd gathered at Sydney’s Bondi Beach to mark the first night of Hanukkah. The celebration turned tragic when two gunmen opened fire on the gathering, killing at least 15 people. Authorities later described the attack as an act of antisemitic terrorism.
During the chaos, a bystander — later identified as Ahmed Al Ahmed — bravely intervened. Video footage shows him grappling with one of the attackers and disarming him, a move widely credited with preventing further loss of life. The dramatic clip spread rapidly across social media platforms.
How did Grok get it wrong?
As users on X shared the video and asked Grok to explain what it showed, the AI chatbot repeatedly provided false descriptions.
In one response, Grok claimed the footage was “an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it.” In other replies, the chatbot incorrectly identified the video as footage from the October 7 Hamas attack, or linked it to unrelated events such as Tropical Cyclone Alfred.
These explanations were factually incorrect and bore no connection to the Bondi Beach incident. Users later added Community Notes, X’s crowdsourced fact-checking feature, to Grok’s replies, correcting the record.
Why is this significant?
The errors occurred at a sensitive moment, when accurate information was crucial for public understanding of a terror attack. Mislabeling real footage of violence can distort public perception, spread confusion, and undermine trust in both platforms and AI-powered tools.
The incident highlights a broader concern around generative AI systems struggling with real-time events, especially breaking news involving violence or emergencies. Without proper context or verification, AI-generated responses can amplify misinformation rather than reduce it.
How has X explained the issue?
As of now, X has not publicly explained why Grok misidentified the Bondi Beach video or why it attributed the footage to unrelated incidents. The platform has also not clarified whether safeguards are being updated to prevent similar errors in the future.
Reports suggest this was not an isolated incident; Grok has made other factual mistakes when responding to queries about current events.
The Bondi Beach case underscores the limitations of AI chatbots in handling fast-moving, real-world crises. While such tools are increasingly used to explain news events, experts warn that they should not be relied upon as primary sources during breaking situations without human oversight.
As AI becomes more embedded in news consumption, incidents like this raise pressing questions about accountability, accuracy, and the responsibility of platforms deploying these technologies.

