Like most of us from the U.S. South, Alabama-born developer Mike Ryan has a deep reverence for Waffle House. That’s why the Google Developer Expert and co-creator of NgRx named his latest open source project Hashbrown.
Hashbrown is an agent framework, in the same vein as Mastra, LangGraph and Google’s Agent Development Kit, but with one key difference: It runs in the browser.
“All these run primarily in the back end, and there’s a lot of infrastructure to show these agents in your user interface, whether that’s in the browser or whatever,” Ryan said. “But there’s not really an agent framework that actually runs in the browser.”
Hashbrown for Generative UI in the Browser
Hashbrown was released in May. The work began, though, with a fascination with small language models.
“The reason I thought this would be interesting is because browser vendors — particularly Edge from Microsoft and Chrome from Google — are now shipping, behind experimental flags, small language models that actually run on the device and that you can access through the browser’s APIs,” he said. “I’ve found it really interesting to build agents that run in the browser that potentially will get to use these small language models in the future.”
Hashbrown makes it easy to extend frontend applications with chat-based AI assistants, observed Angular consultant Manfred Steyer of Angular Architects.
“It handles complex tasks such as LLM [large language model] integration and tool calling, allowing developers to focus on the core business value,” Steyer wrote. “In just a few steps, you can create an assistant that controls user interactions, triggers application functions, and responds contextually.”
It can use an LLM from an AI provider to do very specific things for the user interface (UI), creating a generative UI, he explained. That can include filling out forms on the user’s behalf, showing really simple shortcuts, or rendering charts, tables and graphs.
Jason Lengstorf of Code TV interviewed Mike Ryan about Hashbrown in July of this year.
“It’s an ambitious project where we haven’t hit version 1 yet, but we’ve had a lot of momentum behind us and a lot of support from the community as we’ve gone about building this framework,” Ryan said.
Technically, Hashbrown is a set of core and framework-specific packages for the UI, along with LLM SDK wrappers for Node backends, according to its documentation.
“Hashbrown makes it easy to embed intelligence in your React or Angular components,” the documentation states. “Use Hashbrown to generate user interfaces, turn natural language into structured data, and predict your user’s next action.”
It offers a core package that is framework agnostic, but it also offers React and Angular packages; the React package uses React itself for reactivity.
“You could use just Hashbrown’s core package without bringing in framework flavor, but that piece of Hashbrown that does the generative UI — that would be pretty hard to use without having a framework, because those frameworks are really bringing in those component models that we are gluing to the LLM,” Ryan said.
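To make that concrete, here is a minimal sketch of what exposing a component to the model can look like in the React flavor. The names used (`useUiChat`, `exposeComponent` and the `s` schema helper) follow Hashbrown’s documentation, but the exact signatures here are assumptions rather than a verbatim example:

```tsx
// A hedged sketch of Hashbrown's React generative UI: the model can
// only render components the developer explicitly exposes. Exact API
// names and signatures are assumptions based on the docs.
import { useUiChat, exposeComponent } from '@hashbrownai/react';
import { s } from '@hashbrownai/core';

function Card({ title }: { title: string }) {
  return <div className="card">{title}</div>;
}

export function Assistant() {
  const chat = useUiChat({
    model: 'gpt-4.1',
    system: 'You are a helpful assistant for this dashboard.',
    components: [
      exposeComponent(Card, {
        name: 'Card',
        description: 'Shows a titled card in the dashboard',
        props: { title: s.string('The card title') },
      }),
    ],
  });

  // Assistant messages arrive as ready-to-render React elements,
  // streamed in as the model generates them.
  return (
    <div>
      {chat.messages.map((message, i) => (
        <div key={i}>{message.ui}</div>
      ))}
    </div>
  );
}
```

Constraining generation to an allow-list of components is what keeps the generative UI predictable: the model assembles the interface, but only from parts the developer has vetted.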
Ryan also hopes to support Vue in the future and is looking for a developer to assist with that process.
Hashbrown is platform agnostic in that it can use any supported LLM provider. Its documentation covers Node adapters for the following providers (a sketch of a minimal adapter endpoint follows the list):
OpenAI
Azure OpenAI
Anthropic
Amazon Bedrock
Google Gemini
Writer
Ollama
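For orientation, the sketch below shows what a minimal Node endpoint using the OpenAI adapter might look like: it forwards the frontend’s chat request to the provider and streams the encoded response frames back. The `HashbrownOpenAI.stream.text` shape is drawn from Hashbrown’s adapter docs, but the exact names and options should be treated as assumptions:

```typescript
// A minimal Node/Express endpoint sketch, assuming Hashbrown's OpenAI
// adapter exposes a streaming helper roughly like this.
import express from 'express';
import { HashbrownOpenAI } from '@hashbrownai/openai';

const app = express();
app.use(express.json());

app.post('/chat', async (req, res) => {
  // Forward the frontend's chat frames to OpenAI and stream the
  // result back chunk by chunk.
  const stream = HashbrownOpenAI.stream.text({
    apiKey: process.env.OPENAI_API_KEY!,
    request: req.body,
  });

  res.header('Content-Type', 'application/octet-stream');
  for await (const chunk of stream) {
    res.write(chunk);
  }
  res.end();
});

app.listen(3000);
```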
Streaming Support
LLMs are actually slow, Ryan said: You have to wait for them to generate results, which arrive in chunks. That’s why Hashbrown supports streaming out of the box, which Ryan said should be “table stakes” for an AI agent framework.
“Streaming is important for a generative UI library, because you want to start showing results to the user pretty much as fast as the LLM can think of them,” he said. “If you’re going to wait for that entire generation to complete before you show a result, then users could be waiting [for] minutes before they see anything on their screen.”
To support streaming, the Hashbrown team created Skillet, a schema library similar to Zod but optimized for LLMs. While Zod allows developers to describe any valid TypeScript type, Ryan noted that LLMs currently handle only about 5% of those schema types, so Skillet confines itself to the subset models can reliably generate.
“There’s no schema you can describe with Skillet that an OpenAI model can’t generate data for, and that’s a really important design goal of Skillet,” he said.
The second unique feature of Skillet is its support for a streaming keyword in the schema, he added. On a string or an array, the developer can mark the value as meant to be streamed in. Then, as the LLM generates data that conforms to that schema, Skillet parses the JSON as it arrives and feeds it to the developer, he explained. To support this, the team also built its own JSON parser.
It creates a “really nice developer experience” for working with streaming data out of an LLM, Ryan said.
“They can write a very little amount of schema code, and suddenly they’re getting really nice streaming results in from an LLM,” he said. “I really don’t think I’ve seen any other framework or library do it yet.”
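A hedged sketch of what that looks like: the schema below marks an array as streamable, so Skillet can hand partial elements to the UI while generation is still in flight. The `s.streaming.array` spelling follows Skillet’s documented pattern, but treat the exact API as approximate:

```typescript
// Sketch of a Skillet schema using the streaming keyword. Names are
// based on Hashbrown's docs and may not match the real API exactly.
import { s } from '@hashbrownai/core';

const schema = s.object('A set of chart suggestions', {
  // Marked streaming: Skillet parses each element out of the partial
  // JSON as it arrives instead of waiting for the full generation.
  suggestions: s.streaming.array(
    'Chart suggestions',
    s.object('One suggestion', {
      title: s.string('Short chart title'),
      chartType: s.string('Chart type, e.g. "bar", "line" or "pie"'),
    }),
  ),
});
```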
JavaScript Runtime Included
Hashbrown also ships with its own JavaScript runtime, which Ryan said surprises a lot of developers.
“It’s the most hare-brained feature of Hashbrown,” he joked.
There’s a conversation around tool calling and MCP [Model Context Protocol] in the industry, he said, as people are realizing that the current tool call model is pretty limited.
“The LLM can make one call. It can maybe do a few calls in parallel,” he said. “But what gets really interesting is when you let the LLMs actually write a little bit of code against a defined set of functions, like supercharged tool calling or supercharged MCP.”
That’s why he added a JavaScript runtime to Hashbrown: It is an enabling technology that lets developers safely expose functions into that runtime.
“There’s no schema you can describe with Skillet that an OpenAI model can’t generate data for, and that’s a really important design goal of Skillet.”
– Mike Ryan, Hashbrown creator
Developers can ask the LLM to generate JavaScript that calls those functions and then execute that code in the web browser. But running untrusted code in the browser creates a security risk, he said: The generated code could access browser APIs, tamper with authentication, and “do really destructive things.”
Hashbrown took QuickJS — a C-based JavaScript runtime — and compiled it to WebAssembly. The scripts run inside a WebAssembly container, which provides isolation and security so developers can execute untrusted code without risk.
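Hashbrown’s internal wiring is not shown here, but the underlying pattern is well established. The sketch below demonstrates it with the quickjs-emscripten package (a QuickJS-on-WebAssembly build that is separate from Hashbrown): a single host function is exposed into an isolated VM, and LLM-generated code is evaluated against it without touching any other browser API:

```typescript
// General QuickJS-in-WebAssembly sandbox pattern, shown with the
// quickjs-emscripten package rather than Hashbrown's internal runtime.
import { getQuickJS } from 'quickjs-emscripten';

async function runUntrusted(source: string): Promise<unknown> {
  const QuickJS = await getQuickJS();
  const vm = QuickJS.newContext();

  // Expose a single, explicitly allowed host function into the VM.
  const getTemp = vm.newFunction('getTemperature', () => vm.newNumber(21));
  vm.setProp(vm.global, 'getTemperature', getTemp);
  getTemp.dispose();

  try {
    // Evaluate the LLM-generated code inside the sandbox.
    const result = vm.evalCode(source);
    if (result.error) {
      const err = vm.dump(result.error);
      result.error.dispose();
      throw new Error(`Sandboxed script failed: ${JSON.stringify(err)}`);
    }
    const value = vm.dump(result.value);
    result.value.dispose();
    return value;
  } finally {
    vm.dispose();
  }
}

// Example: the model writes code against the exposed function.
runUntrusted('getTemperature() + 1').then(console.log); // 22
```

Because the VM only sees handles the host explicitly creates, the generated script has no path to the DOM, cookies or the network unless such a function is deliberately exposed.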
Developers can check out the following sample apps to see how Hashbrown works:
The team’s Smart Home app shows conversational UIs with Hashbrown’s UI agent.
The Finance app shows using the JavaScript runtime to create charts on the fly.
In the future, the team hopes to support multi-agent orchestration that would allow developers to use Hashbrown with other AI agent frameworks, such as LangGraph, to build a more powerful application than either could create alone.
“We’re hoping to release this in a couple of weeks, but there’s nothing that stops you from creating a LangGraph agent that does a lot of research and analysis of data sets, and then having a Hashbrown agent call that LangGraph agent, with the two coordinating and communicating with each other,” he said.
His ultimate goal, though, is to create an open source community around Hashbrown. To that end, he’s looking for developers willing to contribute to the project.