India’s debate on children’s online safety is often framed as a simple question: should social media for minors be banned? A recent roundtable organised by BOOM and Decode, titled 'Should We Log Out Our Children?', which brought together teenagers, digital rights activists, mental health professionals, lawyers, educators, researchers and technologists, revealed a far more nuanced reality.
Across discussions, one idea emerged repeatedly: the problem is not merely children using social media, but the systems designed to maximise engagement, extract data, and profit from children’s attention without sufficient accountability.
The Psychological Impact
Teen participants themselves described the psychological and emotional impact of algorithm-driven platforms. Keisha, a 15-year-old social media user and Teen Ambassador of BOOM’s Teen Fact-Checking Network India, reflected on how “constant hyperconnectivity” to trends and updates had “stifled” her creativity and led to what she called a “pre-life crisis” — a sense of confusion about identity at a very young age. Others described recommendation systems as “loops” that continuously push users toward increasingly addictive or extreme content.
Mental health professionals and researchers grounded these concerns in scientific context. Nishtha Agarwal, a psychologist, explained how social media’s “algorithmic based mechanisms” are particularly harmful to developing brains. The discussion highlighted how features designed for instant gratification — infinite scroll, autoplay reels, rapid story transitions and engagement metrics such as likes and comments — can affect impulse control, emotional regulation and self-worth in children and teenagers.
Algorithmic Risks
Participants also warned that algorithmic systems are not neutral. Searches around anxiety, depression or loneliness can lead vulnerable young users toward self-harm or harmful communities. Violent, sexualised or hateful content can quickly dominate recommendation feeds after minimal interaction. Several breakout groups identified this recommendation architecture — not simply “screen time” — as one of the central safety concerns.
The Challenge of Restrictions
At the same time, many participants argued that blanket bans may not solve the problem.
Teenagers themselves pointed out that restrictions often push young users toward alternative, less moderated spaces. As one participant observed, young users are “troubleshooters” who will inevitably find workarounds. Others warned that bans risk shifting responsibility away from platforms and onto parents or children themselves, without addressing the business incentives that drive addictive design.
Equity and Access
The discussion also foregrounded inequalities in digital access. Uma Subramanian, co-founder of the RATI Foundation, cautioned that asking children to simply “log out” is a privileged position in a country where millions still share devices and internet access is unevenly distributed across gender and class lines. In many households, girls receive less access to devices and data than boys. This raised concerns that poorly designed regulations could deepen existing inequalities rather than improve safety.
Proposed Framework
Rather than focusing solely on prohibition, participants proposed a range of practical interventions to make online spaces safer for children:
- Greater transparency and accountability around recommendation algorithms
- Stronger moderation of abusive, violent and sexualised content
- Age-appropriate content filters and warning labels
- Restrictions on manipulative design features such as autoplay and endless scrolling
- Better safeguards against anonymous harassment and impersonation
- Increased transparency around data collection and AI training systems
- Media literacy education in schools to help children understand algorithms, misinformation and AI systems
- More open conversations between parents, educators and children about online harms and digital wellbeing
Another recurring concern was the rapid rise of AI chatbots and generative AI tools. Participants noted that AI systems can amplify misinformation, encourage emotional dependency, collect sensitive data and operate without adequate safeguards for minors. Several speakers stressed that public understanding of AI has not kept pace with its adoption.
The consultation ultimately concluded that child safety online cannot rely on a single solution. Effective regulation must move beyond simplistic debates around bans and instead focus on platform accountability, safer product design, digital literacy and meaningful support systems for children.
Join the Stakeholder Consultation
The next phase of this consultation seeks public input through a survey exploring how platforms, policymakers, schools, parents and civil society can collectively build a safer internet for young people in India.
Share your actionable suggestions for the government and Big Tech by filling out this Google Form.