1Password sees AI as both threat and tool

AI security – As AI coding and agent tools spread, 1Password is building defenses, from on-device audits to stricter agent monitoring and safer vault access, to contain the fallout of faster mistakes.
For a company built around information security, AI isn’t a distant trend—it’s a daily operating problem.
1Password is framing generative AI as both a productivity engine and a new source of risk, especially as password managers face pressure from two directions at once: attackers using AI to scale scams, and developers using AI to ship code faster that may be sloppy or unsafe. The core tension is familiar in cybersecurity, but it is intensified now: AI can accelerate vulnerability discovery and secure development while also enabling “vibe-coded” applications that accidentally expose credentials.
At the center of 1Password’s approach is a practical idea: rather than treating AI risk as a theoretical compliance checkbox, the company wants enterprise customers to understand their “blast radius” inside their own ecosystems. The company’s chief technology officer, Nancy Wang, describes an on-device agent designed to audit how AI models are being used across a company’s environment and to surface issues before they become breaches.
For executives and security teams, this matters because AI use rarely stays confined to a single team or a single approved workflow. A model may be integrated into one branch of development, an employee may test an unvetted tool, or software updates may go stale. Misalignment can be subtle, and the cost of missing it tends to surface later as credential exposure, phishing success, or weak internal controls. Wang points to a real scenario where an enterprise team’s code work involved a Chinese-developed large language model that had drawn criticism over security considerations, an example used to trigger follow-up best-practice conversations and corrective action.
On-device audits to reduce credential exposure
1Password’s strategy leans on visibility that doesn’t require the company to “see” sensitive secrets. Its software already encrypts saved credentials end-to-end, leaving no practical pathway for 1Password to view passwords even as autofill occurs. Wang also describes how the company’s device agent can help detect signs of weak credential handling, such as credentials sitting unprotected or unencrypted on disk, then guide remediation by moving those credentials into a secure, encrypted vault.
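The article doesn’t describe how the agent finds exposed credentials, but a detection heuristic of that general kind could be sketched like this. Everything here is an assumption for illustration: the regex patterns, the entropy threshold, and the function names are not 1Password’s actual agent logic.

```python
import math
import re

# Illustrative heuristic: flag strings that look like secrets
# ("password=" or API-key-style assignments) in configuration text.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[=:]\s*(\S+)"),
]

def shannon_entropy(s: str) -> float:
    """Rough randomness measure; real keys tend to score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_text(text: str) -> list[str]:
    """Return candidate plaintext credentials found in a file's text."""
    findings = []
    for pattern in SECRET_PATTERNS:
        for match in pattern.finditer(text):
            value = match.group(2)
            # Only flag long, high-entropy values to cut false positives.
            if len(value) >= 8 and shannon_entropy(value) > 3.0:
                findings.append(match.group(0))
    return findings
```

In a real agent, anything flagged this way would then be offered for migration into the encrypted vault and wiped from disk, which is the remediation step the article describes.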
The same logic applies to another common attack vector: personal devices used for work. 1Password encourages employees to install a “Device Trust” agent on personal computers, aiming to close the gap where compliance is uneven and where family or business account configurations may be overlooked. In real terms, these details can decide whether a stolen laptop leads to nothing more than inconvenience, or whether it becomes a direct line to account takeover.
From a business standpoint, this is also a product-market shift. Password managers have long sold convenience paired with security. Now they’re increasingly competing on governance—helping organizations enforce safer AI usage patterns and safer device behaviors.
Preventing AI agents from going off the rails
AI agents promise to do multi-step tasks with minimal supervision. But unlike traditional software, many agent behaviors are harder to fully predict, especially when prompts evolve, context changes, or attackers manipulate inputs. Wang argues that agents need systematic monitoring because their non-deterministic nature makes them capable of errors at scale, just faster than a human would make them.
Her view is that 1Password can turn observation into a learning loop: log what prompt an agent received, what it did with that prompt, and what output it generated. Over time, those logs can inform models and agent behavior, improving recognition of patterns linked to phishing and insecure credential handling.
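As a rough illustration, that prompt-action-output loop could be a thin wrapper around each agent call. The field names, the toy agent, and the log structure below are my own assumptions, not 1Password’s telemetry format.

```python
import time
from typing import Callable

def logged_agent_call(agent: Callable[[str], dict], prompt: str, log: list) -> dict:
    """Run an agent on a prompt and record what went in, what it did,
    and what came out -- the three fields the learning loop needs."""
    result = agent(prompt)
    log.append({
        "timestamp": time.time(),
        "prompt": prompt,
        "actions": result.get("actions", []),  # tool calls the agent made
        "output": result.get("output", ""),    # final answer returned
    })
    return result

# A stand-in agent for demonstration: echoes its prompt.
def toy_agent(prompt: str) -> dict:
    return {"actions": ["lookup"], "output": f"handled: {prompt}"}

audit_log: list = []
logged_agent_call(toy_agent, "check this login link", audit_log)
```

The value of the wrapper is that every record is complete on its own: a later classifier can mine `audit_log` for phishing or credential-handling patterns without re-running the agent.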
In February, 1Password introduced the Security Comprehension and Awareness Measure (SCAM) index for AI agent behavior and published its code under an open-source license. The point isn’t only to label agents as safe or unsafe, but to establish a benchmark that can reflect real security comprehension, such as whether an agent recognizes a phishing link or demonstrates poor handling of credentials.
The company’s framing also extends beyond individual capability. Wang says “new identity standards” may be needed for agents—standards that reflect what the agent was created to do, what it’s actually doing, and whether its behavior “drifts” away from the original intent.
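One way to picture the “drift” check Wang alludes to is to compare an agent’s declared scope with the actions it is observed performing. This is a deliberately crude sketch; the scope vocabulary (`vault:read` and so on) is invented for illustration and does not reflect any published agent-identity standard.

```python
def detect_drift(declared_scope: set[str], observed_actions: list[str]) -> set[str]:
    """Return any observed actions that fall outside what the agent
    was created to do -- a crude proxy for behavioral drift."""
    return {action for action in observed_actions if action not in declared_scope}

# Agent registered to read vault items and fill logins only.
scope = {"vault:read", "login:fill"}
actions = ["vault:read", "login:fill", "vault:write", "network:export"]
drifted = detect_drift(scope, actions)  # actions outside the declared scope
```

A real standard would need more than set membership, such as rates, sequences, and context, but the core comparison of stated intent against observed behavior is the same.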
There’s a business implication here: as companies grant more permissions to AI tools—accessing internal resources, browsing systems, and interacting with enterprise workflows—the cost of misconfiguration grows. Monitoring and identity controls become just as strategic as encryption.
Building safer connections between AI apps and the vault
1Password also sees a near-term roadmap problem: AI apps are starting to integrate with password vaults, which means the vault can become part of an AI workflow rather than just a storage location. Today, the company allows certain agentic tools to read from 1Password vaults, with longer-term plans to enable writing back into them.
That evolution raises a key question: how do you keep automation useful without turning the vault into a high-value target for compromised AI workflows or malicious instructions? For consumers, it may look like convenience, with tools pulling credentials for sign-ins. For security teams, it’s a new governance layer, where permissions, connections, and audit trails have to keep pace with agent behavior.
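The staged rollout the article describes, read access now and write-back later, can be pictured as a per-agent policy object with a built-in audit trail. This is a hypothetical model: the class, its fields, and the operation names are assumptions, not 1Password’s access-control design.

```python
from dataclasses import dataclass, field

@dataclass
class VaultPolicy:
    """Hypothetical per-agent vault policy: reads permitted today,
    writes reserved for a later stage, every decision logged."""
    agent_id: str
    can_read: bool = True
    can_write: bool = False  # write-back is a longer-term plan

    audit_trail: list = field(default_factory=list)

    def authorize(self, operation: str) -> bool:
        allowed = (operation == "read" and self.can_read) or \
                  (operation == "write" and self.can_write)
        # Record every request, allowed or not, for later review.
        self.audit_trail.append((self.agent_id, operation, allowed))
        return allowed

policy = VaultPolicy(agent_id="browser-agent-01")
```

Keeping denied requests in the audit trail matters as much as the allow/deny decision itself: a burst of refused `write` attempts is exactly the kind of signal a governance layer would want to surface.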
Interestingly, 1Password’s own developer tooling suggests there’s demand for power-user workflows even outside the typical enterprise environment. Wang notes that usage of the company’s command-line interface (CLI) has grown significantly, with the strongest growth coming from people on individual and family plans. She attributes part of that rise to “vibe coding,” a sign that the boundary between casual development and serious security considerations is narrowing.
Using AI inside 1Password—faster work, tougher testing
Like many software firms, 1Password is using AI coding tools to accelerate its own development process. Cursor, GitHub Copilot, and Claude Code are used with human review at first, an approach that acknowledges a simple truth: AI can generate code quickly, but validation still matters. Wang describes an early win in which an agent helped refactor services away from a single MySQL database. A task that might have taken human engineers four to five months was completed in about four weeks, demonstrating where AI can deliver immediate value.
But speed creates a new risk surface. 1Password is now moving toward more automated testing for agent-generated code. Wang describes “full agent loops” running in the background, where each coding agent’s output must pass a testing-harness evaluation before a merge request enters the code repository.
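A minimal version of that gate, where agent output only merges if the harness passes, might look like the following. It is illustrative only: the `run_tests` hook and the stand-in harness are assumptions, not 1Password’s actual CI pipeline.

```python
from typing import Callable

def gate_merge(patch: str, run_tests: Callable[[str], bool]) -> str:
    """Only let an agent-generated patch through if the testing
    harness passes; otherwise send it back for another agent loop."""
    if run_tests(patch):
        return "merged"
    return "rejected: failed test harness"

# Stand-in harness: accepts only patches that ship a test alongside them.
def fake_harness(patch: str) -> bool:
    return "def test_" in patch

accepted = gate_merge("def feature(): ...\ndef test_feature(): ...", fake_harness)
rejected = gate_merge("def feature(): ...", fake_harness)
```

The point of the pattern is that rejection is cheap: a failed patch loops back to the agent rather than consuming a human reviewer’s time, which is what makes automated testing the scaling lever here.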
Vulnerability discovery is another area where AI can help; Wang references work like project-style efforts designed to accelerate security findings. Yet she cautions that discovering vulnerabilities is only half the job. Once more issues are found faster, developers still have to harden systems, defend against new exploit paths, and prioritize remediation.
Her conclusion lands with a warning familiar to anyone working in security: AI is a mixed bag. The benefits are real, but the implementation is “gnarly and technical,” and the broader problem—building defenses that keep up with fast-changing tools—still isn’t solved by acceleration alone.
For readers, the takeaway is straightforward: AI may make software development and security tasks quicker, but password security—and the systems around it—still depends on governance, monitoring, and disciplined testing. In an era of faster mistakes, those fundamentals become the differentiator.