      Explainers

      AI at Work Isn’t Always Helpful: How to Avoid ‘Workslop’

      “Workslop” is a term for AI-generated work that looks productive at first glance but is actually low-quality and lacks meaningful substance.

      By The Conversation
      Published 17 Oct 2025, 3:17 PM IST

      Steven Lockey, Melbourne Business School, and Nicole Gillespie, The University of Melbourne and Melbourne Business School

      Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn’t be the only one.

      Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.

      This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologised after an A$440,000 report it had prepared for the federal government was found to contain multiple AI-generated errors.

      Against this backdrop, the term “workslop” has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but “lacks the substance to meaningfully advance a given task”.

      Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn’t have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

      The rise of AI-generated ‘workslop’

      According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.

      The survey’s research team from BetterUp Labs and Stanford Social Media Lab found that, on average, each instance took recipients almost two hours to resolve. They estimated this would result in about US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm.
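
      To see how such an estimate scales, here is a rough back-of-envelope sketch in Python. The 10,000-person headcount, the roughly 40% share of workers affected and the two hours per instance follow the survey figures above; the hourly labour cost and the number of workslop items each affected employee receives per month are illustrative assumptions, not figures reported by the study.

      # Back-of-envelope estimate of workslop's annual cost to a firm.
      # HEADCOUNT, AFFECTED_SHARE and HOURS_PER_INSTANCE follow the survey figures
      # cited in the text; INSTANCES_PER_MONTH and HOURLY_COST_USD are illustrative
      # assumptions chosen only to show how the estimate scales.
      HEADCOUNT = 10_000            # employees in the hypothetical firm
      AFFECTED_SHARE = 0.40         # ~40% reported receiving workslop in the past month
      HOURS_PER_INSTANCE = 2        # ~2 hours to resolve each instance
      INSTANCES_PER_MONTH = 2       # assumed workslop items per affected employee per month
      HOURLY_COST_USD = 47.0        # assumed fully loaded labour cost per hour

      affected_employees = HEADCOUNT * AFFECTED_SHARE
      hours_lost_per_year = affected_employees * INSTANCES_PER_MONTH * 12 * HOURS_PER_INSTANCE
      annual_cost_usd = hours_lost_per_year * HOURLY_COST_USD

      print(f"Affected employees: {affected_employees:,.0f}")      # 4,000
      print(f"Hours lost per year: {hours_lost_per_year:,.0f}")    # 192,000
      print(f"Estimated annual cost: US${annual_cost_usd:,.0f}")   # ~US$9,024,000

      With these assumed values the estimate lands near the US$9 million figure; halving the hourly cost or the monthly frequency roughly halves the total, which is why such headline numbers are best read as orders of magnitude rather than precise costs.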

      Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative, and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

      Invisible AI, visible costs

      These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common.

      While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure, and time on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer.

      Making matters worse, many employees hide their AI use; 61% avoided revealing when they had used AI and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

      What you can do to reduce workslop

      Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realise AI’s benefits?

      If you’re an employee, three simple steps can help.

      1. Start by asking, “Is AI the best way to do this task?” Our research suggests this is a question many users skip. If you can’t explain or defend the output, don’t use it.

      2. If you proceed, verify and work with AI output like an editor: check facts, test code, and tailor the output to the context and audience.

      3. When the stakes are high, be transparent about how you used AI and what you checked, to signal rigour and avoid being perceived as incompetent or untrustworthy.


      What employers can do

      For employers, investing in governance, AI literacy, and human-AI collaboration skills is key.

      Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate.

      That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop.

      Because workslop comes from how people use AI – not as an inevitable consequence of the tools themselves – governance only works when it shapes everyday behaviours. That requires organisations to build AI literacy alongside policies and controls.

      Organisations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet less than half of employees report receiving any training or policy guidance.

      Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.

      Steven Lockey, Postdoctoral Research Fellow, Melbourne Business School, and Nicole Gillespie, Chair in Trust and Professor of Management, The University of Melbourne and Melbourne Business School

      This article is republished from The Conversation under a Creative Commons license. Read the original article.
