Introduction
Artificial intelligence is rapidly transforming the internet, from writing emails to generating images. But a new study warns that the same technology could also threaten online anonymity. Researchers have found that large language models (LLMs), the technology behind tools like ChatGPT, may allow hackers, governments, or malicious actors to identify anonymous social media users by piecing together small bits of publicly available information. The findings raise serious questions about whether anonymity on the internet can survive in the age of advanced AI.
What the Study Found
The research, conducted by AI researchers Simon Lermen and Daniel Paleka, suggests that AI can successfully connect anonymous social media profiles with real-world identities. To test this, the researchers fed anonymous social media accounts into an AI system and allowed it to scan the internet for related clues. In many cases, the AI matched the anonymous account to a real identity by analysing small pieces of information shared across platforms.
How AI Can Identify Anonymous Users
The technique relies on something known as data synthesis — combining multiple pieces of seemingly harmless information.
For example, a hypothetical anonymous user might post online about:
- Struggling at school
- Walking a dog named Biscuit
- Visiting Dolores Park
Individually, these details seem harmless. But AI can scan other platforms for posts containing the same combination of clues. If a public account elsewhere mentions a dog named Biscuit and the same park, the AI can link the two profiles with high probability. What would take a human investigator hours or days can now be done by AI in seconds.
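The linking step above can be illustrated with a toy sketch. The study's actual method uses LLMs over live web data; here, simple keyword overlap between two accounts' posts stands in for that matching step, and all names, posts, and keywords are hypothetical.

```python
def extract_clues(posts):
    """Collect distinctive keywords from a user's posts (toy keyword list)."""
    keywords = {"biscuit", "dolores", "park", "school", "dog"}
    clues = set()
    for post in posts:
        for word in post.lower().replace(".", "").split():
            if word in keywords:
                clues.add(word)
    return clues

def link_score(anon_posts, public_posts):
    """Fraction of the anonymous account's clues also found in the public account."""
    anon = extract_clues(anon_posts)
    public = extract_clues(public_posts)
    if not anon:
        return 0.0
    return len(anon & public) / len(anon)

# Hypothetical anonymous and public accounts sharing the same rare clues.
anon = ["Struggling at school again", "Walked my dog Biscuit in Dolores Park"]
public = ["Biscuit loved Dolores Park today", "My dog is the best"]
print(link_score(anon, public))  # → 0.8
```

Even this crude overlap score flags the two accounts as likely the same person; an LLM scanning the open web automates the same idea at scale, with far richer matching than exact keywords.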
Risks Beyond Identity Exposure
The study highlights several potential risks if such AI tools become widely accessible.
1. Targeted scams
Hackers could use AI to conduct spear-phishing attacks, where victims receive highly personalised messages from someone posing as a trusted contact.
2. Government surveillance
Experts warn that authoritarian governments could use similar technologies to track anonymous activists or dissidents online.
3. False accusations
AI systems are not perfect. According to researchers, language models can sometimes make incorrect connections between accounts, which could lead to innocent people being wrongly identified.
What Factcheck India Recommends
To make large-scale profiling harder, some suggested steps for platforms include:
- Limiting bulk downloads of user data
- Detecting automated scraping by bots
- Restricting large-scale data exports
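One of the mitigations above, detecting automated scraping, can be sketched with a simple rate-based check: flag any client that makes more requests than a threshold within a sliding time window. The class name, threshold, and window values are illustrative, not taken from the study, and real platforms combine many more signals.

```python
from collections import defaultdict, deque

class ScrapeDetector:
    """Toy scraping detector: flags clients exceeding a request-rate threshold."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> recent request timestamps

    def record(self, client_id, timestamp):
        """Record a request; return True if the client looks like a scraper."""
        q = self.hits[client_id]
        q.append(timestamp)
        # Drop timestamps that fell out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_requests

detector = ScrapeDetector(max_requests=5, window_seconds=10)
flags = [detector.record("bot-1", t) for t in range(8)]  # one request per second
print(flags)  # → [False, False, False, False, False, True, True, True]
```

A normal user rarely trips such a limit, while a bot bulk-downloading profiles does so within seconds, which is why rate limiting pairs naturally with the export restrictions listed above.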
Experts also advise users to be more cautious about the personal details they share online, even on anonymous accounts.
Conclusion
For decades, anonymity has been an important feature of the internet. It has allowed whistleblowers, activists, journalists, and ordinary users to express themselves without fear. But the rise of powerful AI tools is forcing researchers to rethink what anonymity really means in the digital age. As AI becomes more advanced, the simple act of sharing everyday details online may be enough to reveal far more than users ever intended.

