      Decode

      Interview: Can AI Comprehend The Complexities Of Indian Courtrooms?

      As the Kerala High Court mandates Adalat AI to transcribe witness depositions, Decode spoke to lawyer Leah Verghese to understand the implications of AI in courtrooms.

      By Hera Rizwan | 13 Nov 2025 11:12 AM IST

      In a cramped Indian courtroom, the clatter of a typewriter still dictates the rhythm of justice. The stenographer hunches over the keys, lawyers pause mid-argument and witnesses wait restlessly. Nothing moves until the typing stops. For decades, this has been India's judicial reality—paper-heavy, procedure-bound, and notoriously slow.

      Now, that reality is being disrupted by Artificial Intelligence. Kerala has quietly ushered in what some call a judicial revolution: from November 1, every witness deposition in the state's district courts will be "primarily recorded" by Adalat AI, a voice-to-text transcription system. Kerala thus becomes the first Indian state to formally replace the stenographer's keyboard with algorithmic processing.

      This transition represents a significant push toward paperless, digitised courts reliant on artificial intelligence and natural language processing (NLP).

      However, Kerala's approach carries a notable caveat. The High Court has issued detailed guidelines on the "responsible and restricted" use of AI, explicitly acknowledging that "most AI tools produce erroneous, incomplete, or biased results". Every transcript, the policy mandates, must be "meticulously verified by judicial officers".

      The Technology Behind the Transition

      Adalat AI, developed by Supreme Court lawyer Utkarsh Saxena and AI engineer Arghya Bhattacharya, is designed to streamline court proceedings through real-time automated transcription, trained on Indian legal terminology and accent patterns. Now operational across nine states, the platform claims to have reduced case timelines by up to 50%. Its advisory board includes retired justices D.K. Jain and S. Muralidhar, alongside senior advocates Rebecca John and Arvind Datar.
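
      Adalat AI's internals are not public, so the following is only a rough illustration of the voice-to-text step that systems like it perform. A minimal sketch using the open-source Whisper speech model as a stand-in; the file name is a hypothetical deposition recording, and "ml" (Malayalam) is an assumed language setting:

      # Rough sketch of the voice-to-text step behind courtroom transcription.
      # Uses the open-source Whisper model as a stand-in; Adalat AI's actual
      # models, tuning, and legal-vocabulary training are not public.
      import whisper

      model = whisper.load_model("medium")  # larger checkpoints cope better with accents

      # "deposition.wav" is a hypothetical recording of a witness deposition
      result = model.transcribe("deposition.wav", language="ml")  # "ml" = Malayalam

      print(result["text"])  # full draft transcript
      for seg in result["segments"]:
          # each segment carries timestamps and an average log-probability,
          # which later verification steps can use to flag doubtful passages
          print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] {seg['text']}")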

      If stenographers step aside and transcripts go digital, the question remains: has the judiciary moved too hastily? Can a machine fully grasp the nuance, tone, or linguistic complexity of a multilingual courtroom, particularly in regional languages like Malayalam?

      To examine these issues, Decode spoke with Leah Verghese, lawyer and social scientist at the civil society organisation DAKSH. Verghese has long studied the intersection of law, technology, and judicial reform. She has previously warned that AI transcription tools, despite their promise, risk introducing subtle yet consequential errors, such as misinterpreting a claimant's name "Noel" as "no", and therefore need human oversight.

      In this conversation, she addresses what the adoption of AI transcription means for accuracy, accessibility, and accountability in India's justice system.

      Edited excerpts from the interview:

      How ready are Indian courts, technologically, institutionally, and culturally, to adopt AI-driven tools like this?

      Even after 15 years of the eCourts project, many courts still rely heavily on physical files and manual workflows. Digital case files exist in some places, but without fully integrated digital processes, much data, especially older records, remains handwritten or poorly digitised. AI tools need machine-readable data to function effectively, so this patchy digitisation is a major challenge.

      There are currently no standard frameworks for assessing AI before deployment. Moreover, committees deciding on technology, such as the Supreme Court e-Committee and High Court Committees, are mostly composed of judges who may not have technical expertise to evaluate AI tools or understand the risks. There is a need to have experts in technology, data science and ethics in these committees.

      Training and capacity building are also key. For court staff and judges used to manual processes, AI isn’t just a new tool; it’s a fundamental shift in workflow. Without a basic understanding of how AI works, its limits, and where biases can arise, they risk missing errors in its outputs. There is also a risk of automation bias, where users of the AI tool begin to place excessive trust in it, accepting its results without sufficient scrutiny.

      What are the most serious risks if AI makes errors in court?

      One of the most pressing risks lies in language itself. AI transcription tools may mishear or incorrectly transcribe names, terms, or phrases, especially when dealing with regional dialects, accents, or local pronunciations. If the AI tool’s training data does not sufficiently include these variations, certain communities could face systemic disadvantages, as their words might consistently be misrepresented in official court records.

      Over time, these repeated errors can accumulate, creating structural bias that subtly undermines fairness and accuracy in the judicial process. Such mistakes are not just minor inconveniences; they can affect how testimony is understood, how evidence is recorded, and even the outcomes of cases.

      These linguistic and dialect-related risks can be mitigated, but only if there is robust human oversight at critical points, ensuring that AI outputs are carefully verified and corrected before they become part of the official record.
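
      One concrete form that oversight can take is automatic triage: route low-confidence stretches of the AI draft to a human verifier before they enter the official record. A hedged sketch, assuming Whisper-style segments with an avg_logprob score as in the earlier example; the threshold is an illustrative guess, not a calibrated value:

      # Sketch: flag low-confidence transcript segments for human verification.
      # Assumes Whisper-style segment dicts; the -1.0 threshold is illustrative
      # and would need calibration against verified courtroom transcripts.
      CONFIDENCE_THRESHOLD = -1.0  # average log-probability per segment

      def triage_segments(segments):
          """Split AI output into auto-accepted and needs-human-review piles."""
          accepted, needs_review = [], []
          for seg in segments:
              if seg["avg_logprob"] < CONFIDENCE_THRESHOLD:
                  needs_review.append(seg)  # e.g. mumbled names, code-switching
              else:
                  accepted.append(seg)
          return accepted, needs_review

      # Usage, with `result` from the transcription sketch above:
      # accepted, needs_review = triage_segments(result["segments"])
      # for seg in needs_review:
      #     print(f"VERIFY [{seg['start']:.1f}s]: {seg['text']}")

      A misheard name like "Noel" collapsing into "no" will often, though not always, surface as a low-probability segment, which is why the Kerala policy's blanket requirement of judicial verification remains the stronger safeguard.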

      What gaps or failures do you see in how courts procure, integrate, and monitor AI tools?

      The way courts procure AI tools in India is often informal and ad-hoc. Decisions can be based on interactions with certain vendors, and sometimes courts rely on long pilot programmes rather than systematic evaluation. Since there are no formal technical assessment frameworks in place, it is difficult for courts to measure whether an AI tool is genuinely improving outcomes or just adding complexity.

      Many AI tools are even provided free of cost or on a non-commercial basis. This can create dependence on the vendor, especially if there are no requirements for audits, accountability, or transparency in how the tool operates. Without these safeguards, courts have limited means to check the accuracy or reliability of the AI outputs, which could affect case records or proceedings.

      Looking ahead, a structured approach is important. Courts should continuously monitor and measure the accuracy of AI tools, conduct regular technical audits, and use carefully designed pilot programmes to assess whether a tool truly delivers reliable results in practice. Only through such ongoing evaluation can AI adoption in courts be made responsible and effective.
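
      Transcription accuracy has a standard, easily tracked metric: word error rate (WER) between the AI draft and the judicially verified final transcript. A minimal sketch of such an audit using the jiwer library; the sentence pair is invented to echo the "Noel"/"no" error discussed above:

      # Sketch: audit transcription accuracy as word error rate (WER),
      # comparing the AI draft against the judge-verified transcript.
      # pip install jiwer
      import jiwer

      # Invented example pair echoing the "Noel" -> "no" misrecognition
      verified = "the claimant Noel stated that he was present at the site"
      ai_draft = "the claimant no stated that he was present at the site"

      wer = jiwer.wer(verified, ai_draft)  # reference first, hypothesis second
      print(f"word error rate: {wer:.2%}")

      Tracked per courtroom, language, and model version, a rising WER would be an early, objective signal that a tool needs re-evaluation before its errors harden into the record.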

      What essential safeguards should be in place before wider AI deployment?

      Before AI can be widely deployed in courts, there is a need for clear safeguards and governance frameworks. Courts must define the risks, outline approved and prohibited uses, and set protocols for handling sensitive proceedings. Without these, AI adoption could introduce errors, bias, or ethical lapses.

      Technology committees should also be multidisciplinary, including judges, tech experts, data governance specialists, and information security professionals. Judicial academies should train judges and staff on AI fundamentals, limitations, bias, hallucinations, and responsible use, so that human oversight is informed and effective.

      Additionally, AI vendors must provide transparent explanations of how their tools work, maintain audit trails of key decisions, and log model updates. For high-risk systems, meaningful human oversight at every critical decision point is essential to ensure AI supports, rather than replaces, human judgment.
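
      What such an audit trail might look like in practice: an append-only log recording, for each transcript, which model version produced it and what human corrections followed. A minimal sketch; the field names and example values are assumptions, not a known Adalat AI schema:

      # Sketch: tamper-evident, append-only audit log for transcript decisions.
      # Field names and example values are illustrative, not a real schema.
      import hashlib
      import json
      from datetime import datetime, timezone

      def append_audit_entry(log_path, case_id, model_version, event, detail):
          """Append one entry; each entry hashes the previous one, so any
          later alteration of the log breaks the chain and is detectable."""
          try:
              with open(log_path) as f:
                  prev_hash = json.loads(f.readlines()[-1])["hash"]
          except (FileNotFoundError, IndexError):
              prev_hash = "genesis"
          entry = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "case_id": case_id,
              "model_version": model_version,  # logs model updates over time
              "event": event,                  # e.g. "transcribed", "corrected"
              "detail": detail,
              "prev_hash": prev_hash,
          }
          entry["hash"] = hashlib.sha256(
              json.dumps(entry, sort_keys=True).encode()
          ).hexdigest()
          with open(log_path, "a") as f:
              f.write(json.dumps(entry) + "\n")

      # append_audit_entry("audit.jsonl", "KER-2025-0142", "vendor-model-v2.3",
      #                    "corrected", "segment 14: 'no' -> 'Noel'")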

      Tags: Artificial Intelligence, Law, Kerala High Court