Poolside’s Laguna XS.2 brings free open agentic coding

American startup Poolside releases Laguna XS.2 under Apache 2.0, enabling local, private agentic coding with new tools like pool and shimmer—plus a temporary free API for its larger Laguna M.1.
AI competition has been speeding up, but one launch has a different feel: less about exclusivity and more about putting capable models directly in developers’ hands.
Poolside, a San Francisco AI startup founded in 2023, has introduced Laguna XS.2, a 33B-parameter open model optimized for local, “agentic” coding workflows. The announcement matters because it arrives at a moment when many of the most talked-about model upgrades still live behind closed doors, or at prices that keep most developers on the sidelines.
Laguna XS.2: open weights built for local agentic coding
Poolside says Laguna XS.2 is released under the permissive Apache 2.0 license, which allows developers to use, modify, and redistribute the weights, including in commercial settings. Practically, that opens a path for teams that need privacy, predictable costs, and control over where computation happens, especially if they want to run models without sending code and prompts to a third-party service.
The model is designed for agentic coding, meaning it’s aimed at tasks beyond “chat.” Instead of only generating text, it’s positioned to write code, use tools, and take actions inside a development workflow. Poolside pairs the model with “pool,” a terminal-based coding agent meant to manage tool-calling and execution in a local environment, and “shimmer,” a web-first, mobile-optimized development workspace with interactive previews.
A key selling point is portability. Poolside’s shimmer concept is built around an instant-on virtual machine sandbox that can iterate quickly on web apps, APIs, and command-line interfaces. In a product demo described by Poolside’s team, shimmer can even run on a smartphone in split-screen mode, signaling an ambition that goes beyond desktop-only development.
How Laguna XS.2 differs from the bigger model
The release includes two new Laguna models: Laguna M.1 and Laguna XS.2. Laguna M.1 is positioned as a higher-end option, a 225B-parameter Mixture of Experts model with 23B active parameters, targeted at “high-consequence” environments such as enterprise and government, where long-horizon software engineering and careful planning are critical.
But access differs. Poolside is making Laguna M.1 temporarily available for free through its API and through third-party distribution channels, including OpenRouter, Ollama, and Baseten. In contrast, only Laguna XS.2 ships with open weights for now. That split suggests a deliberate go-to-market strategy: let developers experiment broadly with the open model while keeping the larger system more tightly controlled, at least for the moment.
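Since the hosted Laguna M.1 path runs through OpenAI-compatible providers such as OpenRouter, calling it looks like any standard chat-completions request. The sketch below only assembles the request; the endpoint URL is OpenRouter’s documented one, but the model slug shown is a placeholder assumption, not a confirmed identifier.

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_SLUG = "poolside/laguna-m1"  # hypothetical slug; check the provider's catalog

def build_completion_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Assemble headers and JSON body for an OpenAI-compatible chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_completion_request("Write a binary search in Python.", "sk-demo")
print(json.dumps(body, indent=2))
```

Sending `body` to the URL with any HTTP client (or pointing an existing OpenAI SDK at the base URL) is all the integration work a team would need while the free access window lasts.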
Both models are described as being trained from scratch rather than fine-tuned from existing base models. For the market, that distinction matters because training choices shape how a model reasons, handles code tasks, and behaves in complex tool-using settings.
Why open agentic coding is gaining attention
The broader AI market has been leaning toward two poles: closed, premium proprietary systems, and open-ish alternatives that may publish weights under restrictive terms but not with the same level of freedom. Apache 2.0 is an unmistakable signal that Poolside is not simply releasing a research artifact: it’s inviting builders to run it, adapt it, and integrate it into real pipelines.
That’s a meaningful shift for software teams. With local inference, developers can keep source code and logs within their own environment, reduce dependency risk on a vendor API, and tune performance to the hardware they already have. For startups and internal engineering groups, the difference between “try it once” and “deploy it routinely” is often the licensing layer.
Agentic coding also changes how development effort gets allocated. If an agent can plan steps, execute tool actions, and iterate with feedback, rather than just drafting snippets, then the workflow starts to resemble pair programming with a system that can work across files and run checks. That can compress the time from idea to working implementation, but it also raises the bar for sandboxing, safety, and verification.
The training approach behind the Laguna family
Poolside describes its training pipeline through a specialized internal workflow called “Model Factory,” built around a software system it calls “Titan.” The company also highlights “Muon,” an optimizer it claims helps training converge faster than standard methods, and “AutoMixer,” a mechanism used to select and mix training data combinations.
The point for builders is not the internal branding; it’s the outcome. Poolside says the training process uses very large-scale token volumes, with a portion of data coming from synthetic sources designed to teach specific skills that are harder to acquire from real-world text alone.
After that pre-training phase, the models are further shaped through reinforcement learning in what Poolside describes as an isolated digital “gym” that rewards successful software engineering behavior. In essence, it’s an approach built around verifiable progress: successful bug fixes, working code execution, and correct multi-step solutions.
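The core idea of such a gym is that reward comes only from verifiable outcomes, not from text that merely looks right. A minimal illustration, under the assumption that a task exposes executable test cases and the candidate must define a `solve` function:

```python
# Toy verifiable-reward function: a candidate earns reward only if it
# actually passes every executable check. This illustrates the concept,
# not Poolside's internal system.
def reward(candidate_src: str, tests: list) -> float:
    """Return 1.0 if the candidate passes all test cases, else 0.0."""
    scope = {}
    try:
        exec(candidate_src, scope)
        fn = scope["solve"]
        return 1.0 if all(fn(x) == y for x, y in tests) else 0.0
    except Exception:
        return 0.0  # crashes and wrong answers earn nothing

tests = [(2, 4), (3, 9), (5, 25)]
good = "def solve(n):\n    return n * n"
bad  = "def solve(n):\n    return n + n"  # passes (2, 4) only

print(reward(good, tests), reward(bad, tests))  # 1.0 0.0
```

Because the signal is binary and grounded in execution, the model cannot be rewarded for plausible-looking but broken output, which is exactly the property the article attributes to the gym.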
Performance signals and what they imply
Poolside’s benchmark reporting suggests the open model is “punching up” for its size. The company highlights SWE-bench Pro results for Laguna M.1 and Laguna XS.2, and it also points to Terminal-Bench 2.0 for terminal reasoning performance.
Even without treating any single benchmark as the whole story, the pattern is clear: Poolside is aiming to make agentic coding capability accessible on consumer hardware. XS.2 is built to run locally with practical quantization options; Poolside notes 4-bit quantization support and frames a 24GB–32GB VRAM class requirement for many setups.
That matters economically, because it reframes who can participate. Instead of reserving advanced agentic tooling for teams with large GPU budgets or recurring API costs, local deployment can allow smaller engineering organizations to build and iterate faster.
The risk, of course, is that “local” performance can vary widely depending on quantization settings, available memory, and sandbox quality. Still, Poolside’s emphasis on paired tools—pool and shimmer—signals an understanding that models alone don’t deliver a full developer product.
Open weights under Apache 2.0: a strategic bet
The most consequential decision in this launch is licensing. Apache 2.0 is permissive enough for both research experimentation and commercial adoption without royalties. Poolside frames the release as a contribution to a broader push for “strong open-weight models” in the West, using community evaluation and fine-tuning as a faster feedback loop than a closed ecosystem.
From a business perspective, this can be a powerful flywheel. If developers integrate Laguna XS.2 into their toolchains (IDEs, agent frameworks, internal coding assistants), Poolside’s model becomes a platform choice rather than a one-off experiment. Over time, that can strengthen the ecosystem around its agentic interfaces and increase demand for related services, including access paths to the larger Laguna M.1.
For now, Laguna XS.2 positions Poolside as a company trying to win the next phase of software development: where agents don’t just generate code, but operate in the environment where code is built, tested, and shipped—locally, privately, and with fewer licensing barriers.