India's Grief Tech Boom: When Deepfakes Bring Back the Dead
A growing industry of small-town AI creators in India now produces deepfake videos of dead relatives for weddings and family rituals. As the emotional demand surges, India's new IT Rules force platforms to delete deepfakes within three hours. The collision of grief, commerce, and regulation is raising questions nobody expected.
When the lights dimmed at Jaideep Sharma's wedding reception in Ajmer, Rajasthan, guests expected the usual montage of the couple posing at scenic locations. Instead, they watched Sharma's father appear on screen, smiling and blessing the newlyweds. His father had been dead for over a year.
The video was a deepfake, produced by a local AI creator whom Sharma found on Instagram. It cost about 50,000 rupees ($600) and took a week to make. "It was like a bombardment of emotions for everyone," the 33-year-old garment trader told Rest of World in March 2026. "He was like a central force in the entire family."
Sharma is not an outlier. Across India, a cottage industry of self-taught AI creators has sprung up in small towns and mid-tier cities, offering to resurrect dead family members as deepfake videos for weddings, baby ceremonies, and religious rituals.
The Creators Behind the Business
Akhil Vinayak, a 29-year-old film enthusiast in Thiruvananthapuram, Kerala, started by posting deepfake parody videos of dead Bollywood actors on Instagram. Then a client asked him to create something different: a video of her dead mother-in-law blessing her newborn baby. The mother-in-law had died before the child was born. The client wanted to surprise her husband.
Vinayak built a video showing the deceased woman descending from heaven, visiting her son, and holding the baby she never met. The family's reaction video racked up over one million likes on Instagram. Vinayak now runs Kanavu Kadha ("stories from dreams"), a five-person team that charges about 18,000 rupees ($200) per minute-long video. He uses open-source models like Stable Diffusion alongside Adobe Premiere Pro.
Then there's Divyendra Singh Jadoun, based in the north Indian town of Pushkar. Jadoun taught himself Photoshop, video editing, and generative AI during the COVID-19 lockdowns, largely through YouTube tutorials. He now runs The Indian Deepfaker, which creates what he calls "hyper realistic deepfakes" of dead people that can speak, text, and even video-chat in real time. He openly brands this as "grief tech."
The price range: Grief tech deepfakes in India cost between $200 and $600, making them accessible to middle-class families. Most clients find creators through Instagram, YouTube, or WhatsApp referrals. Turnaround time runs from a few days to two weeks depending on source material quality.
The Darker Side
The same technology powering tearful family reunions also enables extortion, harassment, and political manipulation. In October 2025, a teenager in Faridabad reportedly took his own life after cybercriminals used AI-generated obscene videos of his sisters to blackmail him. That case forced national attention onto deepfake abuse.
India's deepfake problem extends well beyond individual crimes. During the 2024 general elections, political deepfakes flooded WhatsApp groups across the country. The incident that first triggered regulatory action was the October 2023 deepfake of actress Rashmika Mandanna, a synthetic video that went viral and provoked public outrage.
Even grief tech creators acknowledge the risks. Jadoun told Livemint in November 2025 that he actively limits how much time clients spend interacting with AI recreations of dead relatives. "There's a line between remembrance and dependency," he said.
India's Regulatory Response: The 3-Hour Rule
On February 10, 2026, India's Ministry of Electronics and Information Technology (MeitY) amended the IT Rules with sweeping changes targeting deepfakes specifically. The key provisions:
- 3-hour takedown window: Platforms designated as Significant Social Media Intermediaries (SSMIs) must remove reported deepfake content within three hours, down from the previous 36-hour window.
- Mandatory AI labelling: All AI-generated or AI-modified content must carry visible labels identifying it as synthetic.
- Provenance requirements: Platforms must maintain origin metadata for AI content, enabling traceability back to the creator.
- Local compliance officers: Foreign platforms meeting SSMI thresholds must maintain India-based compliance staff and dedicated moderation pipelines.
The three-hour window is among the most aggressive takedown mandates anywhere in the world. By comparison, the EU's Digital Services Act requires "expeditious" removal without a fixed deadline, and most US platforms operate on voluntary timelines.
What This Means for Everyone
India's grief tech industry sits at a strange intersection. The emotional deepfakes are, by most accounts, consensual and privately commissioned. Nobody is being deceived. But the technology, tools, and techniques are identical to those used for scams, blackmail, and election interference. The same Stable Diffusion model that builds a grandmother's blessing also builds a politician's fake confession.
The new IT Rules don't distinguish between malicious deepfakes and memorial ones. If a grief tech video is posted publicly and someone reports it, the platform has three hours to act, regardless of whether the family commissioned and consented to the video. That ambiguity will likely get tested soon.
For everyone else, the takeaway is practical: as deepfake creation becomes cheaper and more accessible, the ability to verify what you're seeing matters more than ever. Tools like FakeOut, free on Android (iOS beta in development), help you check whether an image or video has been AI-generated or manipulated. Whether you're encountering a viral political clip or a forwarded WhatsApp video, running it through a detection tool takes seconds and can save you from sharing something false.