Users on X claimed that an image showed Congress leader Sonia Gandhi in her youth smoking a cigarette.

Even as the media try to wrap their heads around AI-generated images, the AI editing tools that preceded them are being misused to spread disinformation.

Artificial intelligence (AI) has transformed photo editing, but it also poses challenges in combating disinformation. Recent incidents highlight how AI tools are used to create “cheapfakes,” spreading false narratives. Here are two examples:

Example 1: Donald Trump Assassination Attempt Photo

  • Claim: An image showed three Secret Service agents smiling while escorting former U.S. President Donald Trump after an assassination attempt.
  • Fact Check:
    • The photo was altered using FaceApp, an AI photo editing app.
    • Fake smiles were added to the agents’ faces.
    • The original photo, taken by Associated Press photographer Evan Vucci, showed somber officers.
  • Conclusion: The altered photo misrepresented the situation.

Example 2: Sonia Gandhi Photo

  • Claim: An “archival” photo depicted Sonia Gandhi, a prominent Indian opposition politician, holding a lit cigarette.
  • Fact Check:
    • The photo was created using Remaker.ai, an AI face swap tool.
    • It morphed Gandhi’s face with a model’s face from a 2012 photograph titled “Ghazale.”
    • The image went viral on Facebook.
  • Conclusion: The photo was manipulated but wasn’t an authentic archival image.

Detection Challenges

  • Detecting manipulated images remains a challenge.
  • TrueMedia.org, a non-profit organization founded by AI researcher Oren Etzioni, developed a deepfake detection tool.
  • The tool detected manipulations in both images.
  • AI-edited real images are harder to detect than fully AI-generated ones.
  • An ensemble of models distinguishes AI-generated media from non-AI-generated media and identifies small manipulations within images.
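The ensemble approach described above can be sketched in outline. The detector functions below are illustrative placeholders, not TrueMedia.org's actual models; the idea is simply that several specialized detectors each produce a manipulation score, and the ensemble combines them:

```python
# Toy sketch of an ensemble detector: each model returns a score in [0, 1]
# estimating how likely an image is to be AI-manipulated. The detector names
# and hard-coded scores are illustrative placeholders, not real outputs.

def generative_artifact_detector(image) -> float:
    # Stand-in: would score telltale artifacts of fully AI-generated images.
    return 0.2

def local_manipulation_detector(image) -> float:
    # Stand-in: would score small, localized edits (e.g. a face-swap region).
    return 0.9

def metadata_consistency_detector(image) -> float:
    # Stand-in: would score inconsistencies in compression or metadata.
    return 0.7

DETECTORS = [
    generative_artifact_detector,
    local_manipulation_detector,
    metadata_consistency_detector,
]

def ensemble_score(image) -> float:
    """Average the individual detector scores into one verdict."""
    scores = [detector(image) for detector in DETECTORS]
    return sum(scores) / len(scores)

def is_manipulated(image, threshold: float = 0.5) -> bool:
    return ensemble_score(image) >= threshold

image = object()  # placeholder for real pixel data
print(round(ensemble_score(image), 2))  # -> 0.6
print(is_manipulated(image))            # -> True
```

Averaging is the simplest way to combine the models; real systems may instead learn weights for each detector, which is one reason an ensemble can catch both fully generated images and small local edits.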

For All Social Media Buffs

  • Misinformation refers to false or inaccurate information that is unintentionally spread. It can occur due to misunderstandings, misinterpretations, or errors. Misinformation is often shared innocently, without malicious intent.
  • Examples of misinformation include:
    • Rumors: Spreading unverified stories or claims.
    • Mistaken Identity: Incorrectly attributing an event or statement to the wrong person.
    • Outdated Information: Sharing facts that were once true but have since changed.

In an era where AI-generated images blur the line between reality and fabrication, safeguarding against image manipulation is crucial. Here are some strategies to combat AI morphed images:

  1. PhotoGuard Technique: Researchers at MIT have developed the “PhotoGuard” technique. It introduces imperceptible perturbations to images, disrupting AI models’ ability to manipulate them. By immunizing images preemptively, PhotoGuard prevents unauthorized alterations.
  2. Robust Algorithms: Train AI models on datasets containing genuine and morphed images. This helps develop robust algorithms for morphing detection, allowing adaptation to evolving techniques over time.
  3. Visual Inspection: Examine images for out-of-place or warped details. Pay special attention to lighting, shadows, and inconsistencies that may reveal AI manipulation.
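The core idea behind strategy 1 can be illustrated with a minimal sketch: add a small, bounded change to every pixel so the image looks identical to a human viewer. Note this is only a toy stand-in; the real PhotoGuard perturbation is computed adversarially against a specific generative model's encoder, not drawn at random as below:

```python
import random

def immunize(pixels, epsilon=2, seed=0):
    """Add a bounded, imperceptible perturbation to a grayscale image.

    `pixels` is a list of rows of 0-255 ints; `epsilon` bounds the per-pixel
    change. Toy illustration only: real PhotoGuard optimizes the perturbation
    against an editing model's encoder rather than using random noise.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    protected = []
    for row in pixels:
        new_row = []
        for p in row:
            delta = rng.randint(-epsilon, epsilon)
            # Clip so the result is still a valid 0-255 pixel value.
            new_row.append(max(0, min(255, p + delta)))
        protected.append(new_row)
    return protected

image = [[100, 150, 200], [50, 75, 125]]
protected = immunize(image)

# Every pixel stays within epsilon of the original, so the protected
# image is visually indistinguishable from the unprotected one.
max_change = max(
    abs(a - b)
    for row_a, row_b in zip(image, protected)
    for a, b in zip(row_a, row_b)
)
print(max_change <= 2)  # -> True
```

The point of the bound is the trade-off PhotoGuard exploits: a change small enough to be invisible to people can still be large enough to derail a model's attempt to edit the image.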

Remember to critically evaluate information online, especially when images are involved. Fact-checking helps combat disinformation and promotes accuracy. 🕵️‍♂️
