AI Agents Are Morphing Into the ‘Enterprise Operating System’

The New Stack · Updated 1 week ago

Most of the conversation around AI agents today revolves around bots writing code. This didn't come out of nowhere; software engineering is the most common use case for AI systems, and code-writing tools are reaching eye-popping valuations. But inside companies, something more fundamental is shifting: AI agents are becoming internal "operating systems" that connect and orchestrate data flows between software tools, changing the way we all work, not just engineers.

At Block, our engineers built an AI agent framework called goose and released it as an open source tool for anyone to use with any large language model. Initially designed for writing code, we quickly realized that for goose to reach its full potential, it needed a standard way to communicate with the dozens of tools that people use daily. Recognizing this same challenge, Anthropic was developing what would become the Model Context Protocol (MCP). We began collaborating early in MCP's development to help shape this open standard that bridges AI agents with real-world tools and data.

Today, 60% of our workforce (around 6,000 employees) use goose weekly. It serves as a central conductor, reading and synthesizing data across dozens of MCP-powered extensions, including Slack, Google Drive, Snowflake, Databricks, Jira and others. Just months ago, it would take days of manual labor to read Snowflake dashboards, pull context from recent Slack chatter and generate a weekly Google Doc with insights and flagged anomalies. Now humans orchestrate this process in minutes, directing goose to the relevant data while applying judgment about what matters most.

Contrary to the headlines, this isn't a story about AI replacing jobs. At Block, we believe the shift is about redistributing access to problem-solving.

The Compression Effect: Becoming More Self-Sufficient

Most companies rely on handoffs. A product manager submits a ticket. An engineer builds it. A support team flags a recurring issue. A developer scripts a fix.
These workflows protect quality, but they slow things down. AI agents like goose are collapsing that distance by helping people take action on their own instead of waiting on others.

Take customer support escalations. In the past, when a support agent noticed an unusual spike in refunds, they would file an escalation ticket and wait three to five days for the data team to pull transaction analysis, receive raw spreadsheets, manually create a summary and post findings to Zendesk. Now that same agent asks goose to "analyze the last 30 days of refund spikes" and within 30 seconds receives a complete analysis, with patterns identified and an automatically generated Zendesk-ready summary.

By allowing users to choose a preferred model and by connecting to internal tools, goose enables teams to move from idea to prototype without waiting in a queue. A support agent can surface a dashboard. A security analyst can write a detection rule. A designer can test live functionality based on user feedback. None of this requires code expertise. This kind of access was previously off-limits to most employees. That's starting to change.

What's Next: Building Guardrails and Resilience

Goose is part of a wider shift within Block and at other forward-thinking companies: recognizing that AI's most valuable role may not just be in what it builds for users, but in what it unlocks for teams. By lowering the barrier to experimentation, internal AI tools are giving people the confidence to test, iterate and solve problems themselves.

This doesn't remove the need for engineers. If anything, it strengthens their impact. It clears the backlog. It reduces bottlenecks. And it makes space for more complex, strategic work to get done.

As with any expansion of capabilities like this, the transformation requires careful design. At Block, we've implemented specific policies that govern how these AI connections work across our company.
Any tool that handles sensitive information requires legal approval before it can be deployed. We maintain curated lists of approved extensions, so employees can only install tools that have passed our security review. And we've built smart boundaries directly into the tools themselves: some automatically avoid accessing confidential databases, while others separate what users can read from what they can modify. These aren't bureaucratic barriers; they're design choices that let teams move fast while keeping important information secure.

The long-term opportunity isn't just speed or cost savings. It's resilience. Companies that embrace this shift will be less dependent on rigid workflows and more responsive to the people closest to the problem. They'll be able to move faster without compromising safety, and to solve problems at the edge without losing control at the core.

That's what we're learning with goose. And that's the direction we believe enterprise AI is headed. It may not make headlines, but it's changing the way organizations function at their core.
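The two guardrails described above, a curated allowlist of approved extensions and read-versus-write boundaries, can be sketched in a few lines of Python. This is purely illustrative: the extension names, permission grants and function names are hypothetical, not goose's actual configuration or API.

```python
# Hypothetical sketch of two guardrails: a curated extension
# allowlist and per-extension read/write permission boundaries.
# Names are illustrative, not Block's real policy data.

# Extensions that have passed security review.
APPROVED_EXTENSIONS = {"slack", "google_drive", "snowflake", "jira"}

# Capability grants per extension; anything not listed is denied.
PERMISSIONS = {
    "snowflake": {"read"},         # query dashboards, never modify
    "jira": {"read", "write"},     # tickets may be created/updated
}

def authorize(extension: str, operation: str) -> bool:
    """Allow a tool call only if the extension is on the curated
    list AND the requested operation is within its grant.
    Default-deny: unknown extensions and operations are refused."""
    if extension not in APPROVED_EXTENSIONS:
        return False
    return operation in PERMISSIONS.get(extension, set())
```

The design choice worth noting is default-deny: an agent call fails closed unless the extension has been explicitly reviewed and the operation explicitly granted, which is how a read-only boundary (for example, on a confidential database) stays enforced even as new tools are added.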

Source: This article was originally published on The New Stack
