Five OWASP AI Lists, One Practitioner Problem

I was in a meeting recently where someone asked a simple question: “Which OWASP list should we use for our AI security review?” Nobody could answer it. Not because the people in the room were incompetent. The opposite, actually — they’d all read the lists, which is precisely why they couldn’t answer. There are five of them now. Five OWASP AI security lists. Each one a Top 10, except the one that’s a 200-page guide. They overlap, contradict, and occasionally talk past each other. When someone finally pulled up Matt Adams’ OWASP AI Top 10 Comparator — a tool that exists specifically because the proliferation problem is bad enough to need its own website — the room collectively sighed. ...

April 2, 2026 · 14 min · Napat Boonsaeng

DeerFlow vs OpenClaw Security Analysis (AI Experiment)

TL;DR for busy operators

Three minutes, top to bottom:

- DeerFlow is powerful and highly composable: LangGraph runtime, FastAPI gateway, MCP extensibility, skills, channels, memory, subagents, sandbox modes, custom agents, and a guardrails layer for pre-tool-call authorization. This is not a toy stack.
- Power comes with a steep security responsibility curve: the docs and config make it easy to run in insecure ways. Skip ingress auth, overexpose API routes, enable high-impact tools broadly, or run the local sandbox in shared contexts, and you're asking for trouble.
- OpenClaw is more operationally opinionated about channel policies, trust boundaries, gateway hardening, and tool-restriction baselines for a personal-assistant model, with clearer security defaults out of the box.
- Runtime reality matters: DeerFlow can run in constrained environments, but full-stack convenience depends on host prerequisites (nginx/docker/toolchain), and without a configured model there is no actual agent run.
- Bottom line: treat DeerFlow as a programmable power framework, not a safe appliance. Explicitly harden ingress, authz, tools, sandbox mode, MCP secrets, and channel trust before exposing it to real users.

Why this analysis exists

Most AI-agent platform writeups make one of two mistakes: ...

March 27, 2026 · 12 min · Napat Boonsaeng

The USB-C Metaphor Hides the Hard Part

Threat Modeling MCP in the Real World

People like to describe MCP as "USB-C for AI." It's a good line. It explains why people care. USB-C made hardware interoperability easier. MCP makes tool interoperability easier. Build once, connect everywhere, move faster.

The problem with good metaphors is that they are usually true in one way and dangerously false in another. USB-C looks like a cable problem. MCP looks like a protocol problem. But the hard part isn't the connector. The hard part is delegation.

When an AI client connects to tools through MCP, it is not just moving data. It is moving authority: who can read what, who can trigger what, and under which identity. That shift is what many threat models miss. They evaluate MCP like an integration layer, when they should evaluate it like an authorization fabric.

Why this matters now

Standards compress engineering cost. They also compress attacker learning curves. Before MCP, every integration had custom quirks. That was messy for developers and inconvenient for attackers. With standardization, we gain velocity and lose diversity. A weakness in common implementation patterns becomes reusable across many environments.

This doesn't mean MCP is unsafe. It means MCP is now important enough to threat model as first-class infrastructure. The teams that do this early will avoid the coming cycle: rapid adoption, soft defaults, then expensive retrofitting under incident pressure. ...

March 22, 2026 · 8 min · Napat Boonsaeng