Rani (name changed), a Delhi-based beautician and service partner with Urban Company, was taken aback when she noticed something strange on her app profile. Her photo, the one she had never changed, looked different. The face staring back was hers, but somehow older, duller, and strangely unfamiliar.
She hadn’t uploaded a new picture. Yet, the change had happened.
Since then, she says her booking numbers have declined. “Maybe it’s because of the way I look in the new photo,” she said. “Clients usually prefer younger-looking professionals—they think they’ll be more energetic.” She doesn’t know for sure what triggered the drop, but the timing, she says, has made her anxious.
Urban Company, one of India’s largest home services platforms, connects gig workers like Rani to customers for everything from beauty services to appliance repairs. In this ecosystem, a profile picture isn’t just decorative—it can directly influence customer trust and booking rates.
What Rani didn’t know was that her image had been quietly altered by the platform using artificial intelligence.
Alert received by the service partners (Courtesy: Rajdhani App Workers Union)
The 35-year-old beautician is unfamiliar with concepts like AI, data privacy, or informed consent. All she knows is that her face was changed—and possibly, at a cost to her earnings.
Blurry consent lines
Earlier this month, Urban Company began rolling out AI-generated profile photos for some of its service professionals. The company said the feature was designed to make partners look more “professional” and enhance customer trust. But many workers say they were never clearly informed—let alone consulted.
Through a notification on the app, Urban Company told its service partners that the updated images could improve booking rates. They were given an option to opt out. But if they didn’t actively respond, the platform treated it as implied consent.
Rani, like many others, missed the alert. It had appeared in the final week of May, under the bell icon on the Urban Company Partner app—an area she rarely checks. Push notifications appear only if the worker has manually enabled them, and once new alerts arrive, older ones get buried.
Meanwhile, some workers say they did respond—but their input was disregarded.
Warda (name changed), another beauty service partner, recalled seeing the notification and immediately replied. “I wasn’t happy with the new image,” she said. “But they changed it anyway.”
The app offered two options: one to approve the image, another marked “another issue.” Warda selected the latter and typed out her concern: “I am not happy with the new image, and the client may not even believe it's the same person in the app and in person.”
A week later, her profile image was replaced.
“My facial features looked altered. This wasn’t the version of me I wanted customers to see,” she said. “We’re often running from one booking to another. Not everyone checks every alert from the app—especially if one hasn’t enabled push notifications. This was no way of taking our consent, if at all.”
When AI distorts identity
A review by Decode of several updated partner profiles showed that the AI-generated images often looked noticeably different from the actual appearance of the workers.
This visual discrepancy was also confirmed by the Deepfakes Analysis Unit and Professor Siwei Lyu of the Department of Computer Science and Engineering at the University at Buffalo. After analysing the photos, he concluded they were “highly likely to be created by AI models,” with an average likelihood of 91%. Using ChatGPT-4o’s image generation capabilities and reference photos from Urban Company’s website, Lyu demonstrated how similar altered portraits could be replicated.
Sunand, president of the Rajdhani App Workers Union (RAWU), warned that these mismatched images can lead to real-world consequences. “If a customer doesn’t recognise the worker who shows up because their profile photo looks different, the order might be cancelled—and the worker loses income,” he said.
RAWU also criticised the practice, calling it ethically flawed. In a statement, the union said the use of AI to ‘enhance’ appearances reinforces a harmful notion—that trust and professionalism are defined by a digitally created, idealised standard.
The legal red flags
Speaking to Decode, legal experts said that the use of AI-generated images—especially without clear, explicit consent—raises serious red flags.
Alvin Antony, a lawyer who specialises in technology and digital privacy, pointed out that under India’s new Digital Personal Data Protection (DPDP) Act, consent must meet two core conditions. “First, it must be an affirmative act—the individual must explicitly say ‘yes’ to a specific use of their data,” he explained. “Second, it must be free, informed, specific, unconditional, and unambiguous.”
By that standard, Urban Company’s opt-out model—where non-response is treated as consent—doesn’t meet the legal threshold, especially in gig work environments where digital literacy varies widely.
Ada Shaharbanu, Senior Associate at Spice Route Legal, added that once a profile photo is digitally processed to enhance or extract facial features, it may qualify as biometric data. Under Indian law, processing biometric data requires written, informed consent.
Citing Section 2(47)(b) of the Consumer Protection Act, Antony noted that digitally altering a worker’s image in a way that misrepresents them to customers can be considered an unfair trade practice. “If the customer cancels, it’s the worker who suffers reputational and economic harm. But legally, the platform may bear liability for the misrepresentation,” he said.
Both experts agreed that India’s labour laws, which predate the platform economy, are ill-equipped to handle this kind of digital identity manipulation. But even in the absence of precedent, processing workers’ images without meaningful consent, especially in the case of vulnerable, informal workers, could invite constitutional and legal challenges.
A shifting burden
Shaharbanu noted that gig workers may be able to seek damages under Section 43A of the IT Act, which allows for compensation when personal data is mishandled or processed without due care. They can also file content takedown requests under India's intermediary guidelines, though these come with procedural hurdles.
Antony stressed the growing need for stronger legal safeguards for gig workers, especially as digital tools increasingly shape how they are represented and perceived.
An AI-generated image created by a platform could be repurposed by the worker—say, for ID verification or job applications. If that image is later found to be “incomplete or misleading,” the worker—not the platform—could face penalties. Under Section 15 of the DPDP Act, such offences could draw fines of up to Rs 10,000.
“This unfairly shifts liability onto the worker, even though the manipulation began with the platform,” he said.
He added that the law should protect those who are least informed and most exposed—especially as AI systems increasingly reshape how people are represented in public and professional spaces.
“Gig workers make up a massive part of India’s workforce,” he said. “They deserve dignity, safety, and autonomy.”
For Rani and Warda, the damage may already be done. They didn’t ask for their faces to be enhanced, softened, or modified. What they did ask for—transparency, consent, and a say in how they’re represented—wasn’t granted.
Urban Company had not responded to detailed queries at the time of publishing. This story will be updated if and when they do.