      Explainers

      How AI Impacts Legal Systems With Its Fake Laws

      Legal regulators and courts around the world are emphasising the need for responsible adoption of AI tools by lawyers, and for clear guidelines to uphold the integrity of the legal profession.

      By The Conversation | Published 13 March 2024 1:05 PM IST

      Michael Legg, UNSW Sydney and Vicki McNamara, UNSW Sydney

      We’ve seen deepfake, explicit images of celebrities, created by artificial intelligence (AI). AI has also played a hand in creating music, driverless race cars and spreading misinformation, among other things.

      It’s hardly surprising, then, that AI also has a strong impact on our legal systems.

      It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore highly concerning that fake law, invented by AI, is being used in legal disputes.

      Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

      How do fake laws come about?

      There is little doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

      Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by artificial intelligence.

      AI models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

      Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination”.

      In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

      But if AI hallucinates or creates inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

      This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.
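
      To make this concrete, the query that trips people up can be as simple as the sketch below. It assumes the OpenAI Python client (openai 1.x) with an API key set in the environment; the model name and prompt are illustrative only. The reply comes back as fluent, confident prose whether or not the cited cases actually exist.

      from openai import OpenAI

      # Minimal illustrative sketch: asking a chatbot for supporting authority.
      # Assumes the OpenAI Python client (openai 1.x) and an API key in the
      # OPENAI_API_KEY environment variable; the model name is illustrative.
      client = OpenAI()
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice
          messages=[{
              "role": "user",
              "content": "List three reported cases, with citations, that support "
                         "extending a limitation period for a negligence claim.",
          }],
      )
      # Nothing in the response guarantees these cases exist; every citation
      # must be verified against a primary source before it is relied on.
      print(response.choices[0].message.content)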

      It’s happening already

      The best known generative AI “fake case” is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

      The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

      Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court.

      Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

      If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.

      What’s being done about it?

      Around the world, legal regulators and courts have responded in various ways.

      Several US state bars and courts have issued guidance, opinions or orders on generative AI use, ranging from responsible adoption to an outright ban.

      Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

      In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

      Many lawyers and judges, like the public, will have some understanding of generative AI and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

      But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.
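
      What that checking step could involve is sketched below. The lookup_citation function is a hypothetical placeholder for a query against an authoritative case-law database or court registry, not a real service, and the citation pattern is deliberately crude; anything the sketch flags still has to be confirmed by a person before filing.

      import re

      # Illustrative sketch only: flag AI-suggested citations that cannot be verified.
      # lookup_citation is a hypothetical placeholder for a query against an
      # authoritative case-law database or court registry; it is not a real API.
      CITATION_PATTERN = re.compile(r"\d+\s+[A-Z][\w.]*(?:\s[\w.]+)*\s+\d+")  # e.g. "550 U.S. 544"

      def lookup_citation(citation: str) -> bool:
          # Replace with a real registry or database query; until then,
          # treat every citation as unverified (the safe default).
          return False

      def unverified_citations(draft_text: str) -> list[str]:
          # Return every citation-like string in the draft that could not be verified.
          return [c for c in CITATION_PATTERN.findall(draft_text) if not lookup_citation(c)]

      # Usage: anything returned here must be checked by hand before filing.
      # flagged = unverified_citations(open("draft_brief.txt").read())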

      In Australia, courts should adopt practice notes or rules that set out expectations when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

      The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

      Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.

      Michael Legg, Professor of Law, UNSW Sydney and Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession, UNSW Sydney

      This article is republished from The Conversation under a Creative Commons license. Read the original article.
