Deepfakes, AI Scams & the Law: Is India Ready?




Introduction

Artificial Intelligence (AI) is redefining nearly every aspect of life, from medical diagnostics and personalized marketing to education and entertainment. Alongside its benefits, however, AI is spawning darker uses, including deepfakes and AI-driven scams. These forgeries can manipulate images, audio, and video with frightening accuracy, eroding trust and potentially shaking the foundations of truth itself. For India, where more than 800 million people access the internet, this poses an urgent legal and ethical dilemma. The pertinent question follows: Is India legally equipped to handle the dangers of deepfakes and AI scams?

What are Deepfakes?

A deepfake is a hyper-realistic piece of synthetic content, typically video or audio, in which AI techniques are used to manipulate a person's appearance, speech, or behavior. Deepfakes can portray people saying or doing things they never did. Created using machine-learning techniques such as Generative Adversarial Networks (GANs), they are becoming increasingly difficult to detect. Examples include:

  • A fake video of a politician announcing false policies.
  • An AI-generated voice call mimicking a family member to request urgent money.
  • Celebrities appearing in explicit content without their consent.
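To ground the GAN idea mentioned above: a GAN pits a "generator" (which fabricates samples) against a "discriminator" (which tries to tell real from fake), and the two improve each other through competition. The sketch below is a deliberately minimal, hedged illustration on one-dimensional toy numbers, not a real deepfake pipeline (which uses deep convolutional networks on images and audio); all names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1.5) -- stands in for authentic media.
def real_batch(n):
    return rng.normal(4.0, 1.5, n)

# Generator g(z) = a*z + b, initially producing samples near N(0, 1).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), a simple real-vs-fake classifier.
w, c = 0.1, 0.0
lr, n = 0.05, 64

for step in range(5000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # --- Generator update: adjust (a, b) so fakes fool D (non-saturating loss) ---
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w          # gradient of generator loss w.r.t. x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(f"generator now centers its fakes around {b:.2f} (real data is centered at 4.0)")
```

After training, the generator's output drifts toward the real distribution, which is exactly why GAN-made media becomes hard to distinguish from the genuine article.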

What was once novelty content or satire is now a serious tool for financial fraud, political manipulation, blackmail, and social unrest.

AI Scams: The New Age of Fraud

Scams have evolved from email phishing to AI-generated voice cloning, chatbot fraud, and fake investor calls. Recently, a Bengaluru startup founder reportedly lost ₹12 lakh to a scammer who cloned a colleague’s voice to request funds. AI scams are increasingly real-time, interactive, hyper-personalized, and hard to trace. 

Current Legal Landscape in India

India’s legal framework was not originally designed to deal with synthetic media or machine intelligence. However, several laws can be extended or adapted:

Information Technology Act, 2000 (IT Act)

  • Section 66D, ‘Punishment for cheating by personation by using computer resource.’ It states, “Whoever, by means of any communication device or computer resource cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees.”
  • Section 67, ‘Punishment for publishing or transmitting obscene material in electronic form.’ It states, “Whoever publishes or transmits or causes to be published or transmitted in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to three years and with fine which may extend to five lakh rupees and in the event of second or subsequent conviction with imprisonment of either description for a term which may extend to five years and also with fine which may extend to ten lakh rupees.”
  • The Intermediary Guidelines and Digital Media Ethics Code Rules (2021) place liability on platforms to detect and remove harmful content.

Despite these provisions, there are significant limitations:

  • No explicit provision for deepfakes or AI-manipulated media.
  • The burden of proof is heavy on the victim.
  • Prosecution is difficult when perpetrators use VPNs or operate from abroad.

Digital Personal Data Protection Act, 2023 (DPDP Act)

This law provides individuals with rights over their personal data and consent. It can be invoked when:

  • A person’s image, likeness, or voice is used without consent in deepfake content.
  • Data is scraped to train AI tools without permission.

However, enforcement is weak, and the law doesn’t specifically mention deepfakes.

Global Legal Trends

India is not alone in facing this challenge. Countries worldwide are introducing AI-specific laws:

  • European Union: The AI Act, passed in 2024, bans certain deepfake uses and mandates transparency for synthetic content.
  • USA: States like California and Texas have enacted deepfake laws related to elections and non-consensual explicit content.
  • China: Has mandated watermarking of all AI-generated content and holds platforms accountable.

India can draw from these examples while respecting constitutional freedoms such as free speech and privacy.
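China's watermarking mandate mentioned above can be illustrated with a toy example. The sketch below hides a provenance tag in the least-significant bits of an image's pixels; the tag, function names, and scheme are purely illustrative (real provenance systems use far more robust, standardized techniques than this fragile LSB trick).

```python
import numpy as np

TAG = "AI-GEN"  # hypothetical provenance label, for illustration only

def embed_watermark(image, tag=TAG):
    """Hide an ASCII tag in the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    flat = image.flatten()  # copy, so the original image is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def read_watermark(image, length=len(TAG)):
    """Recover the tag by reading back those least-significant bits."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

# A small grayscale "image" standing in for AI-generated output.
img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
marked = embed_watermark(img)
print(read_watermark(marked))  # recovers "AI-GEN"
```

The point of such schemes, however implemented, is that platforms and regulators can verify provenance mechanically instead of relying on the human eye.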

What India Needs 

  • Dedicated deepfake-specific legislation.
  • Tools to detect and flag deepfakes in real time.
  • Victim support mechanisms that protect targets from blackmail and social shaming.
  • Public awareness campaigns to help people learn how to spot scams and verify sources.
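As a hedged illustration of what automated detection rests on: GAN upsampling often leaves periodic, high-frequency artifacts in generated images, and one family of detectors looks for excess energy in the high-frequency part of the spectrum. The toy heuristic below is only a sketch of that idea on synthetic data; production detectors are trained neural classifiers, and all names and thresholds here are assumptions.

```python
import numpy as np

def high_freq_ratio(img):
    """Fraction of spectral energy outside the low-frequency core (toy heuristic)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    core = spec[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return 1.0 - core / spec.sum()

rng = np.random.default_rng(2)
size = 64
y, x = np.mgrid[0:size, 0:size]
# "Natural" image: a smooth gradient plus mild sensor-like noise.
natural = (x + y).astype(float) + rng.normal(0, 1, (size, size))
# "Synthetic" image: same content plus a checkerboard-like upsampling artifact.
synthetic = natural + 8.0 * ((-1.0) ** (x + y))

flag = high_freq_ratio(synthetic) > high_freq_ratio(natural)
print("flagged as possibly synthetic:", flag)
```

Real deployments would pair statistical detectors like this with provenance checks and human review, since any single heuristic can be evaded.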

Conclusion

India stands at the crossroads of a digital revolution where technology can be both a shield and a sword. Deepfakes and AI scams aren’t just technical issues; they are legal, social, and ethical emergencies. While some legal tools exist, they are outdated, reactive, and often powerless against the speed and scale of modern AI.

To protect citizens and uphold truth in the digital age, India must adopt a holistic, forward-looking legal framework, one that blends innovation, accountability, and the rule of law. Only then can the nation answer the question "Is India ready?" with a confident yes.


 
