Why Rushing AI Adoption Leads to Low-Quality ‘Workslop’

The New Stack


New AI solutions continue to emerge, and models are steadily improving. Yet, despite the rapid pace of AI innovation, the technology still isn’t as advanced as many people believe. This gap is often due to misleading marketing, particularly around “agentic AI” operations, as well as human error and limited technical expertise. Either way, we’re not yet at a point where AI can be trusted to run fully autonomously, making decisions or developing code independent of human oversight.

As the technology currently stands, human involvement, education and awareness are essential to ensure the safe and secure use of AI in the enterprise. In the past few weeks, there have been multiple instances of AI falling short of expectations, delivering outputs with errors, inaccuracies and “hallucinations,” or wiping databases completely. For example, global consulting firm Deloitte came under fire after delivering an inaccurate report, littered with fictional research, to the Australian government. Not only was the report aimed at making technical recommendations to streamline sensitive government processes, but the consulting firm’s use of AI was undisclosed. In July, an AI coding assistant from the tech firm Replit went rogue, wiping out the production database of SaaS startup SaaStr.

These incidents are prompting conversations not just about secure and effective AI use, but also about “workslop”: low-quality AI outputs that address user-generated prompts inefficiently or incorrectly. AI workslop is often the result of organizations rushing to adopt AI before fully understanding the technology or educating employees on how to use it properly.

AI’s potential is vast. But with the current growth rate of AI far outpacing enablement around the technology, the result is the proliferation of low-quality work at scale, and that is a significant challenge. The good news is that it can be overcome. To avoid falling into workslop or other error-prone AI traps, organizations should prioritize awareness, transparency and education among employees, helping them discern where to use AI as it currently stands and how to use it safely and transparently to drive positive business outcomes without introducing added risks.

What’s at Risk With Workslop?

The consequences posed by workslop are easy to discern: reputational or professional damage, losses in customer trust and, in more extreme cases (such as using AI to autonomously generate and deploy erroneous code), data leakage and critical lapses in security. There’s a lot at stake when it comes to integrating AI with sensitive business assets. However, there are also many ways to use AI to drive greater business efficiency and augment the work done by human personnel without putting the business at risk.

When it comes to avoiding common workslop traps and differentiating quality AI outputs from poor ones, two things matter most.

First, find the right tool for the job. Large language models (LLMs) are great at many tasks, but the hype has pushed people to use them everywhere. The reality is that for every use case where an LLM works well, there are many more where traditional approaches work substantially better. Much of the time, we’re simply using the wrong technology (the first sketch below gives a small example).

Second, prompt engineering matters. When prompting, contextualize questions in terms of frameworks like RACE (role, action, context and execution) for optimal outcomes (the second sketch below shows one way to structure such a prompt).
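To make the first point concrete, here is a minimal sketch of a task where a classical tool beats an LLM: pulling timestamps out of log lines. The log lines and the pattern are invented for illustration, but the point stands, since a regular expression is deterministic, essentially free and cannot hallucinate.

```python
# Extracting ISO-8601 timestamps from log lines with a plain regular
# expression. No LLM is needed: the format is rigid, so a classical,
# deterministic approach is faster, cheaper and exact.
# (Log lines below are made up for this example.)

import re

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

logs = [
    "2025-01-15T09:30:12 INFO  service started",
    "2025-01-15T09:31:47 ERROR database connection lost",
]

for line in logs:
    match = TIMESTAMP.search(line)
    if match:
        print(match.group())
```

Reaching for an LLM here would add cost, latency and a nonzero error rate to a problem that classical tooling solves exactly.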
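And to illustrate the second point, here is a hypothetical sketch of a RACE-structured prompt. The build_race_prompt helper and the example values are inventions for this article, not part of any particular library; the idea is simply that spelling out role, action, context and execution gives the model far less room to guess.

```python
# A hypothetical helper that assembles a prompt using the RACE framework
# (role, action, context and execution). The field values below are
# illustrative; swap in your own before sending this to a model.

def build_race_prompt(role: str, action: str, context: str, execution: str) -> str:
    """Assemble a prompt that states role, action, context and execution."""
    return (
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Execution: {execution}\n"
    )

prompt = build_race_prompt(
    role="You are a senior Python reviewer.",
    action="Review the function below for correctness and security issues.",
    context="The function parses untrusted user input in a web service.",
    execution="Return a numbered list of issues, each with a one-line fix.",
)
print(prompt)
```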
Good prompting can drastically reduce hallucinations, but most people aren’t trained in it. It’s easy to write prompts that seem accurate yet produce poor outputs. Companies need to properly train teams on how to prompt effectively and how to validate outputs for accuracy.

Steering Clear of Slop

When it comes to making the most of AI, education is the answer: not just in prompt engineering, but in teaching employees how to validate outputs, recognize hallucinations and discern when traditional approaches might work better than AI actions or recommendations. AI use at work has nearly doubled in the past two years, and as AI grows increasingly mainstream in corporate environments, leadership must ensure that employees are prepped, briefed and educated on its proper and secure use.

To eliminate workslop, set clear guidelines, boundaries and expectations with employees. Outline which tools are appropriate for corporate use and which are the best fit for certain tasks, and underscore which tools are off the table. Challenge employees to think critically about outputs generated by AI. For example, any developer using AI to generate code knows that quality is inconsistent; even in newer or fast-moving codebases, less than half of the suggestions may be usable as written.

Right now, AI should be treated as a primarily informative resource. Think of its output as “version 1” of the content or code. It still requires human oversight to comb through and validate responses, but it gets the effort closer to the desired outcome faster (a sketch of one such review gate appears at the end of this article).

For organizations looking to harness AI going forward, our current circumstances require honesty and transparency about what AI can and can’t do, discipline in how organizations implement it, and a commitment to quality that sometimes means moving slower than the hype cycle demands. That’s hard when everyone else seems to be racing ahead, but it’s the right call.

Like any good teacher emphasizes: Double-check your work. That applies to using AI, too. A little human review goes a long way.
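As one way to operationalize that “version 1” mindset, the sketch below shows a simple review gate that treats generated code as a draft: automated checks run first, and a human only reviews what passes. This is a minimal sketch, not a prescribed workflow; it assumes pytest and ruff are installed, and the file name and the specific checks are placeholders to adapt to your own pipeline.

```python
# A minimal "review gate" for AI-generated code: run automated checks on
# the draft before a human reviews it. Assumes pytest and ruff are
# installed; the file path and checks are illustrative placeholders.

import subprocess
import sys

def review_gate(path: str) -> bool:
    """Run static checks and tests on a generated file; return True only
    if every check passes. A human still reviews the diff afterwards."""
    checks = [
        [sys.executable, "-m", "py_compile", path],  # does it even parse?
        ["ruff", "check", path],                     # lint for common mistakes
        ["pytest", "--quiet"],                       # run the existing test suite
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
            return False
    return True

if __name__ == "__main__":
    ok = review_gate(sys.argv[1] if len(sys.argv) > 1 else "generated.py")
    sys.exit(0 if ok else 1)
```

The gate doesn’t replace the human pass; it just ensures reviewers never waste time on drafts that fail to compile, lint or test.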

Source: This article was originally published on The New Stack
