Content creator Satshya Anna Tharien received an unexpected email one day. “Someone emailed me screenshots, saying ‘I think someone is misusing your video for an ad’,” she said. When she checked the handle in the screenshots, she found the post.
“It was a video of me, where they’d used an old video of mine to create a video where it seemed like I was endorsing their products,” Satshya said. “My voice was morphed and dubbed in Hindi.”
The product she was supposedly endorsing was a skin-whitening cream, something she has publicly spoken against in the past. Yet the video was convincing enough that anyone who didn’t know or follow her would find it hard to tell that it wasn’t actually her in the ad.
When she saw the post, it had 286k views and 60+ comments asking for the link.
Screengrab from the deepfake video
Satshya’s video was a deepfake: a digitally created video, made using advanced forms of machine learning, that manipulates and morphs footage of a person to make them resemble someone else.
Crudely manipulated images are now a thing of the past. With AI playing an increasingly prominent role in every aspect of our lives, the risks associated with it are growing too, and deepfakes are one big example. Scarily accurate videos of people, made without their consent but looking exactly like them, are now a reality.
Read: The Ghibli-Style Photo Trend is Fun—But Is It Safe? Experts Weigh In on Photo Privacy Risks
Potentially Dangerous Uses of AI and Deepfakes
“My first reaction was anger,” Satshya recollects of coming across her own deepfake on the internet.
“On a personal level, it really freaked me out. Today it's a product, tomorrow it can be something vulgar, or it could be used to spew hatred. The possibilities are limitless,” she added.
As someone who has been creating content for over five years, she has a lot of videos in the public space. “I am just scared, as they have so many videos to train these models, and create videos where the person looks like me, and sounds like me, but would be saying or doing something I’d never do,” she said.
Her fears are well founded. What makes deepfakes stand out is how eerily accurate they are; it often takes close attention, or even knowing a person well, to discern that a video is a deepfake. Experts, in previous conversations with HerZindagi, have explained that the more pictures and videos of a person are available, the more realistic and accurate the deepfakes become.
Satshya immediately alerted her Meta partner manager and filed a copyright complaint, and four ads with her face were soon taken down. She also left a comment on the post, and the brand sent her an apology over DMs, saying there had been a miscommunication with the marketing team.
HerZindagi had earlier spoken to tech and cybersecurity experts to decode how to safeguard oneself from deepfakes, and what to do if one finds their deepfake online. You can read the story here: Decoding Deepfakes: Protecting Oneself & Legal Remedies, Experts Weigh In
Celebrity Deepfakes
Celebrity deepfakes have often done the rounds on social media as well. A few months ago, a video apparently showing Rashmika Mandanna in a low-neck, strappy black bodysuit entering a lift went viral; it was later revealed to be a deepfake.
Satshya rightly points out that with celebrities, people are familiar with their mannerisms and characteristics, so it’s much easier to tell a fake video from a real one.
“For content creators, who are regular people, that gets much harder,” she said. “It's like when we saw Katrina Kaif's photo outside a local salon. We know for a fact that Katrina is not the brand ambassador, and her hair was not cut there. But content creators and UGC creators are regular people. So, you tend to believe that this is a real person's review, and that's a problem.”
Additionally, celebrities enjoy far larger followings and more clout, making it easier for them to find, debunk and take down deepfakes.
Fears and Concerns Around Deepfakes
The page that posted content involving Satshya is still active and features multiple videos of different women. It's impossible to determine which of these were shared with consent or contain genuine reviews, and which might have been manipulated.
In the past, Satshya dealt with a stalker, against whom she lodged a formal complaint. “It feels like different types of violations. That was a virtual one, and turned into an in-person threat. But it was one person, and once I filed the police complaint, I felt safe,” she explained.
Dealing with this feels far more difficult and complex. “This feels like a behemoth. I don’t know what it can do next. If not this company, then someone else could do it, and there could be 1,000 versions of this. How would I be able to fight against all of them?” she said.
Satshya’s experience isn’t the first of its kind, and it won’t be an isolated one. Without strict laws, deepfakes and other forms of AI misuse will run rampant. For ordinary people, pursuing an official complaint through the legal process is also a hassle most won’t want to take on. As AI advances rapidly, consent, safety and regulation remain central to avoiding a frightening reality in which truth becomes hard to tell from fiction.