
AI at Work Isn’t Always Helpful: How to Avoid ‘Workslop’

“Workslop” is a term for AI-generated work that looks productive at first glance but is actually low-quality and lacks meaningful substance.

By The Conversation

17 Oct 2025 3:17 PM IST

Steven Lockey, Melbourne Business School, and Nicole Gillespie, The University of Melbourne and Melbourne Business School

Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn’t be the only one.

Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.

This can create a lot of extra work for others, who have to identify and correct the errors, not to mention the reputational damage. Just this week, consulting firm Deloitte Australia formally apologised after an A$440,000 report it prepared for the federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term “workslop” has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but “lacks the substance to meaningfully advance a given task”.

Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn’t have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

The rise of AI-generated ‘workslop’

According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.

The survey’s research team from BetterUp Labs and Stanford Social Media Lab found that, on average, each instance took recipients almost two hours to resolve, which they estimated would add up to US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm.
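As a back-of-envelope check on that figure, the sketch below shows how such an estimate can be composed. Only the two-hour resolution time, the 10,000-person firm size and the roughly US$9 million annual total come from the survey; the loaded hourly cost and the incidence rate are illustrative assumptions chosen so the arithmetic reconciles.

```python
# Back-of-envelope reconstruction of the lost-productivity estimate.
# The 2-hour resolution time, 10,000-person firm and ~US$9M/year total
# come from the survey; the hourly cost and incidence rate below are
# illustrative assumptions, not figures from the study.

HOURS_PER_INSTANCE = 2.0             # reported average time to resolve one instance
EMPLOYEES = 10_000                   # firm size used in the survey's estimate
HOURLY_COST_USD = 37.50              # assumed fully loaded cost of an hour of work
INSTANCES_PER_EMPLOYEE_MONTH = 1.0   # assumed average workslop incidence

monthly_cost = (EMPLOYEES * INSTANCES_PER_EMPLOYEE_MONTH
                * HOURS_PER_INSTANCE * HOURLY_COST_USD)
annual_cost = monthly_cost * 12

print(f"Estimated annual cost: US${annual_cost:,.0f}")  # US$9,000,000
```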

Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it as less reliable, less creative and less trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

Invisible AI, visible costs

These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common.

While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure, and time on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer.

Making matters worse, many employees hide their AI use; 61% avoided revealing when they had used AI and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop

Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realise AI’s benefits?

If you’re an employee, three simple steps can help:

  1. Start by asking, “Is AI the best way to do this task?” Our research suggests this is a question many users skip. If you can’t explain or defend the output, don’t use it.

  2. If you proceed, verify and work with AI output like an editor: check facts, test code (see the sketch after this list) and tailor the output to the context and audience.

  3. When the stakes are high, be transparent about how you used AI and what you checked, to signal rigour and avoid being perceived as incompetent or untrustworthy.
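To make the “test code” part of step 2 concrete, here is a minimal sketch in Python. `normalise_email` stands in for a hypothetical AI-drafted helper; the assertions are the kind of editor-style check a human should run before circulating the output.

```python
# Hypothetical AI-drafted helper: the function name and behaviour are
# illustrative, not from the article or any particular tool.
def normalise_email(address: str) -> str:
    """Trim whitespace and lower-case the domain of an email address."""
    local, _, domain = address.strip().partition("@")
    return f"{local}@{domain.lower()}"

# Editor-style verification: probe ordinary input, then the edge cases
# an unreviewed AI draft typically gets wrong.
assert normalise_email("  Jane@Example.COM ") == "Jane@example.com"
assert normalise_email("a@b.org") == "a@b.org"

# Gap found by testing: an input with no "@" silently becomes "name@".
assert normalise_email("not-an-email") == "not-an-email@"
print("checks passed; the missing-@ case still needs a fix before sending")
```

The point is not the helper itself, but that a two-minute check surfaces a defect which would otherwise become someone else’s workslop.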


What employers can do

For employers, investing in governance, AI literacy, and human-AI collaboration skills is key.

Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate.

That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop.

Because workslop comes from how people use AI rather than from the tools themselves, governance only works when it shapes everyday behaviours. That requires organisations to build AI literacy alongside policies and controls.

Organisations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet less than half of employees report receiving any training or policy guidance.

Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.

Steven Lockey, Postdoctoral Research Fellow, Melbourne Business School and Nicole Gillespie, Chair in Trust, Professor of Management, The University of Melbourne; Melbourne Business School

This article is republished from The Conversation under a Creative Commons license. Read the original article.
