      Explainers

      Did AI Write The Exam? Jindal Law Student’s Fight May Set Academic Rules

      The student argues that using AI tools does not amount to plagiarism unless copyright is infringed, and claims the university lacked clear AI guidelines. His case highlights the need for clearer academic policies as AI use among students grows.

      By Hera Rizwan
      Published: 8 Nov 2024 10:26 AM IST

      AI in Academia: Law Student Sues University Over AI Plagiarism Allegation

      • Kaustubh Shakkarwar, an LLM student at Jindal Global Law School, has filed a lawsuit against the university after being accused of submitting AI-generated responses in an exam, which resulted in him failing the subject.
      • The lawsuit highlights the growing need for clear, comprehensive AI policies in academic institutions, especially as AI tools become more widely used by students.
      • Legal experts emphasise the need for broader understanding and updated policies on AI, cautioning that AI detection tools can be unreliable.

      When Kaustubh Shakkarwar, an LLM student at Jindal Global Law School, took the end-term exam for the subject "Law and Justice in the Globalising World," he expected his work to be evaluated like that of any other student. Instead, on June 25, he received an unexpected response from the university’s Unfair Means Committee (UMC): they alleged that 88% of his exam responses were “AI-generated” and promptly marked him as having failed the subject.

      Shakkarwar, who is pursuing a Master's in Intellectual Property and Technology Laws, contested the accusation and filed a lawsuit against the university.

      In the lawsuit against OP Jindal Global University, he argued that his exam answers were entirely his own and not generated by any artificial intelligence (AI) tool. The exam in question was administered online, and students were required to submit their responses electronically.

      Shakkarwar argued that the university did not provide any clear guidelines regarding the use of AI in academic submissions, and therefore, he should not be penalised. He emphasised that AI itself does not inherently equate to plagiarism unless its use infringes upon copyright laws.

      The Punjab and Haryana High Court has since directed the university to respond to Shakkarwar’s petition, setting up a high-stakes legal debate about the role of AI in academic work.

      Interestingly, the stakes go beyond a single exam for Shakkarwar. The law student is developing an AI-based platform aimed at supporting litigation services. His platform 'fidy.ai' is touted to help users navigate legal processes by translating vernacular legal documents into English, sending timely notifications about their court cases (such as reminders shortly before hearings), filing trademarks, and monitoring for any similar trademarks that could affect their claims.

      Ironically, the same technology that powers his platform has now cast a shadow over his own academic integrity.


      What does the petition say?

      The petition states that the university accused Shakkarwar of plagiarism, claiming that 88% of his exam responses resembled AI-generated text.

      Shakkarwar asserts that his original, human-generated responses were unfairly labelled as AI-generated, with no opportunity to appeal the decision. He claims he requested a formal review, but the university’s Controller of Examinations denied it, stating that there was no reason to establish a review committee.

      Shakkarwar’s legal counsel has also argued that despite numerous requests, the university had failed to provide the "First Ordinance"—a key regulatory document that outlines student policies "under Section 26" of the university's rules. This ordinance, which details university procedures for exams, governance, and discipline, is typically published in the Haryana Government Gazette as a public document, as per the Haryana Private Universities Act.

      Shakkarwar’s counsel claims that the ordinance has not been shared with him or made available on the university’s website, raising questions about transparency.

      In a statement shared with BOOM, O.P. Jindal Global University has criticised the student's "factually incorrect, misleading and prejudiced statements" made on social and online media about a matter that is still sub judice. It further claimed these statements were made "with the malicious intention to influence public opinion and thereby decision-making."

      "The University, in addition to pursuing the legal matter, would also approach other relevant regulatory bodies/authorities to report the professional misconduct of the Petitioner (who also happens to be an Advocate and an officer of the court)," it read.


      Future of AI in Education: A Call for Clear Guidelines

      Shakkarwar’s case highlights the need for universities to develop clear policies on AI use in academic settings. His argument that Jindal Global Law School had no explicit policy on AI usage adds to an ongoing debate: should educational institutions establish guidelines on how students can and cannot use AI?

      Speaking to BOOM, lawyer Nandita Saikia emphasised that the solution lies in a comprehensive approach to understanding and regulating AI, rather than solely focusing on AI-specific policies. "What we need is a comprehensive approach to understanding what AI is and how it should be used," she said.

      She further suggested that universities should update their existing plagiarism and copyright policies to account for the growing influence of AI tools in student work.

      To illustrate this point, Saikia provided an example: "AI tools that generate citations in the correct format are widely used in academia. It’s difficult to argue that such tools are unethical. However, using Generative AI to write an essay and passing it off as your own is clearly unethical. Additionally, if the AI-generated essay copies or paraphrases existing scholarly work, it could result in plagiarism or copyright infringement."

      Dominic Karunesudas, an AI-ML and cybersecurity expert, echoed these concerns, arguing that universities need clear, practical AI policies that can guide both students and faculty. He pointed towards leading US institutions that make it clear to students that they are fully responsible for the output generated using AI tools.

      According to Karunesudas, “The universities must mandate that faculty familiarise themselves with guidelines on AI and academic integrity,” so they can provide clear instructions to students on acceptable academic behaviour in an AI-driven world.

      AI Detection Tools: Accuracy and Limitations

      The limitations of AI-detection tools are also a factor in this case.

      AI-detection tools can misidentify human-written text as AI-generated, leading to what experts describe as "false positives." This risk is not hypothetical: Turnitin, a widely used plagiarism detection service, recently reported an increase in false positives soon after launching its AI detection tool. The issue is particularly prevalent when the tool detects less than 20% AI-generated writing in a document, casting doubt on these systems' reliability.
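      To see why this matters, below is a minimal, purely hypothetical sketch in Python of how a hard flagging threshold on an imperfect detector produces false positives. The score distribution and the 0.88 cut-off are invented for illustration only and do not reflect how Turnitin or any real detector actually works.

      # Hypothetical illustration: an imperfect detector scores genuinely
      # human-written answers, and a hard threshold still flags some of them.
      import random

      random.seed(0)

      # Assume every human-written answer gets an "AI-likelihood" score;
      # even genuine human writing produces a spread of scores.
      human_scores = [random.gauss(0.30, 0.20) for _ in range(10_000)]

      FLAG_THRESHOLD = 0.88  # e.g. flag a script as "88% AI-generated"

      false_positives = sum(score >= FLAG_THRESHOLD for score in human_scores)
      print(f"Human answers wrongly flagged: {false_positives / len(human_scores):.1%}")

      On this invented distribution, a fraction of a percent of entirely human-written answers still clears the threshold; applied across thousands of exam scripts, even that small rate translates into real students being wrongly accused.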

      According to Karunesudas, international organisations such as UNESCO and the Institute of Electrical and Electronics Engineers (IEEE) have issued ethical frameworks for AI, yet these guidelines are voluntary and lack enforceable accountability.

      “AI detection tools should be used carefully as they may not always be reliable,” Saikia advised, suggesting that universities need to be cautious when implementing these systems in academic settings.

      Highlighting the limitations of AI detection systems, Karunesudas noted that these tools have "incorrectly accused students of cheating" in the past and are likely to do so in the future as well.

      Shakkarwar’s petition brings an important issue to the forefront: as AI technology evolves, academic institutions need transparent, accessible guidelines that address ethical considerations and AI’s potential impact on learning. This case may, in fact, shape the future standards for academic integrity in the age of AI in India and beyond.


      Update: The Punjab and Haryana High Court dismissed Kaustubh Shakkarwar's petition on November 18, after the university issued a revised transcript that passed the student in the subject in question, rendering the plea infructuous.

      Tags: artificial intelligence systems, law, copyright, Academia