Decode

Ola CEO's 'Pronoun Illness' Remark: Can AI Chatbots Get Your Gender Right?

Ola's Bhavish Aggarwal has announced a shift from Microsoft Azure to Krutrim's own cloud services, amidst the LinkedIn AI pronoun debate.

By - Hera Rizwan | 14 May 2024 10:02 AM GMT

What happens when an AI chatbot is given a name and expected to determine the gender? Chances are, it won't always get it right.

So last week, Bhavish Aggarwal, the founder of Ola, took to LinkedIn to criticise its AI chatbot for using they/them as his pronouns, calling the choice inconsistent with Indian cultural norms.

Aggarwal then took to X, expressing concern about the spread of what he termed the "pronoun illness" to India and advising against blindly adopting Western practices.

In a detailed post on X on Saturday, the Ola CEO elaborated on why he addressed the gender pronoun issue, stating that it represented a "woke political ideology of entitlement which doesn’t belong in India". He further explained that his reaction stemmed from LinkedIn's assumption that Indians required pronouns in their lives, prompting him to share his perspective on X.

He added, "They will bully us into agreeing with them or cancel us out. And if they can do this to me, I’m sure the average user stands no chance."

Meanwhile, his LinkedIn post was taken down for being "unsafe". The platform also deleted a second post in which Aggarwal had flagged the removal of his original post about being addressed as "they" instead of "he" in a description generated by LinkedIn's AI bot.

In response, the Ola CEO wrote, "This time, you didn’t even notify me or leave a trace since you removed the whole thread."

The platform removes content that violates its Community Guidelines, for reasons including spam, misinformation, harassment, hate speech and violence. LinkedIn moderates content using a 'three-layer approach'. The first layer is automatic prevention, in which AI filters out policy-violating content within 300 milliseconds of creation, leaving it visible only to the author; accuracy is improved through regular human reviews.

The second layer combines automatic and human detection: content flagged by AI as potentially harmful is reviewed by humans, and if found violative, action is taken and the AI models are refined.

The third layer is human reporting, which allows members to flag violative content for review and action by LinkedIn's team, based on its policies. It is unclear under which layer Aggarwal's post was removed; Decode has asked the platform for more details.
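Purely for illustration, here is a minimal sketch in Python of how such a three-layer flow could be structured, based only on LinkedIn's public description above. The function names, policy signals and data structures are our own assumptions, not LinkedIn's actual code.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    visible_to_author_only: bool = False  # outcome of layer 1
    removed: bool = False                 # outcome of layers 2 and 3

def ai_flags_violation(post: Post) -> bool:
    # Stand-in for an automated policy classifier (spam, hate speech, etc.).
    placeholder_signals = ("spam-link", "harassment")
    return any(signal in post.text.lower() for signal in placeholder_signals)

def layer_1_automatic_prevention(post: Post) -> None:
    # Layer 1: AI filters policy-violating content at creation time, so it
    # stays visible only to its author; humans periodically audit the results.
    if ai_flags_violation(post):
        post.visible_to_author_only = True

def layer_2_ai_plus_human(post: Post, human_confirms) -> None:
    # Layer 2: content flagged by AI as potentially harmful goes to a human
    # reviewer; confirmed violations are actioned and used to refine the models.
    if ai_flags_violation(post) and human_confirms(post):
        post.removed = True

def layer_3_member_reports(post: Post, reported_by_member: bool, human_confirms) -> None:
    # Layer 3: members can report content, which the platform's team reviews
    # and actions against its policies.
    if reported_by_member and human_confirms(post):
        post.removed = True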

Chatbots and their pronoun inconsistencies

The practice of using gender-neutral pronouns like 'they/them', which Aggarwal found 'unsafe and sinister', involves using third-person pronouns that refer to a single person without denoting a specific gender. 'They' can describe individuals whose gender identity is ambiguous or who prefer pronouns that don't conform to the traditional binary categories of 'man' and 'woman'.

This linguistic evolution reflects a growing understanding of the gender spectrum and diverse gender identities. It also signifies a deliberate political effort to embrace identities beyond the confines of 'he' and 'she' in English. Leading dictionaries such as Oxford and Merriam-Webster have acknowledged these usage variations.

But how do chatbots determine gender? To test this, Decode fed a set of names into well-known AI chatbots to see whether they, too, used gender-neutral pronouns.
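For readers curious how such a test can be scripted rather than run through a chat window, here is a minimal sketch using OpenAI's Python SDK. The model name, prompt wording and pronoun check are our own illustrative choices; Decode's tests described below were carried out through the chatbots' regular interfaces.

import re
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

def pronouns_used(name: str, occupation: str) -> set:
    # Ask the model to describe a (possibly made-up) person in the third
    # person, then check which pronouns appear in the reply.
    prompt = (
        f"Write two sentences about {name}, a {occupation}, "
        "using third-person pronouns."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content.lower()
    return {p for p in ("he", "she", "they") if re.search(rf"\b{p}\b", reply)}

# Example: a made-up personality with a gender-neutral name and occupation
print(pronouns_used("Kiran", "graphic designer"))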

We found that OpenAI's ChatGPT used gender-neutral pronouns for made-up personalities with gender-neutral names and occupations. However, for conventionally 'male' and 'female' names, it used the pronouns 'he' and 'she' respectively, assuming gender from the name.


ChatGPT used 'they' for a made-up personality with a gender-neutral name

On the other hand, when prompted with questions about public-facing personalities like Neeraj Chopra, Smriti Irani, Raghav Chadha or Vinesh Phogat, the chatbot used gender-specific pronouns, unlike the LinkedIn chatbot Aggarwal referred to.


ChatGPT assigned specific pronouns to known personalities

Google's Gemini assumed and assigned genders even for made-up personalities with gender-neutral names and occupations. Like ChatGPT, Gemini assigned specific pronouns to public-facing personalities.


Gemini assumed gender for a made-up personality

Snapchat's My AI produced results comparable to the other chatbots for public-facing personalities, but when tested with made-up personalities with gender-neutral names and occupations, it wavered between assigning specific pronouns to some and not to others.





My AI showed inconsistencies

There's clearly no trend here.

Decode also tested Aggarwal's own Krutrim AI with a similar experiment. While it assigned specific pronouns to public-facing personalities, it either used 'they' or refrained from using any pronoun at all for made-up personalities with gender-neutral names. Here are some examples.


Krutrim refrained from using any pronoun here



Krutrim used 'they' for a gender-neutral name 

The chatbot did use specific pronouns for conventionally 'male' and 'female' names.

But the use of they/them pronouns angered Aggarwal enough that he announced his company would shift its cloud services from Microsoft Azure to Krutrim, Ola's AI and cloud subsidiary.

The transition aligns with the public release of Krutrim's AI and cloud infrastructure services, featuring GPU-as-a-service tailored for developers and businesses engaged in AI model training and development.

Aggarwal further said that the pronoun matter should motivate Indians to develop their own technological platforms. While he didn't "oppose global tech companies", he voiced concern that his life could be excessively influenced by "western Big Tech monopolies".

The Ola CEO claimed that Microsoft's artificial intelligence tool was "imposing a political ideology on Indian users that's unsafe, sinister".

He, therefore, extended an invitation to other developers, pledging a full year of complimentary cloud usage on Krutrim for those who switch from Azure and opt not to return once the year concludes.

However, an X user with the handle '@kingslyj' pointed out an alleged 'reverse migration' by Krutrim AI over the weekend, as the controversy unfolded. When quizzed about its server locations before Aggarwal's announcement, the chatbot had said that Krutrim was already hosted on its own cloud, along with other "global cloud platforms like Microsoft Azure, Google Cloud and Amazon Web Services".

In a Monday post, the X user shared a conflicting answer given by the chatbot to the same question. Krutrim said it had been developed by Microsoft Research Asia, with its servers located around the world and "operated by Microsoft Research teams". In one of its answers, it said, "Yes, I use Azure, which is one the Microsoft's cloud computing platforms." The updated response did not mention its own cloud.