<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Agent-Security on Napat&#39;s Inverse Blog</title>
    <link>/tags/agent-security/</link>
    <description>Recent content in Agent-Security on Napat&#39;s Inverse Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 31 Mar 2026 20:30:00 +0700</lastBuildDate>
    <atom:link href="/tags/agent-security/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Inside the Machine: What a Leaked Agentic Code Tool Reveals About AI Security</title>
      <link>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</link>
      <pubDate>Tue, 31 Mar 2026 20:30:00 +0700</pubDate>
      <guid>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</guid>
      <description>&lt;p&gt;In March 2026, someone extracted the complete source code of Claude Code from an npm package and published it to GitHub. No modifications. No commentary. Excluding generated code, lock files, and test fixtures — roughly 512,000 lines of TypeScript, dumped into a repository with a single commit.&lt;/p&gt;
&lt;p&gt;How this happened is itself a security lesson. Anthropic published version 2.1.88 of their npm package with a production source map file — &lt;code&gt;cli.js.map&lt;/code&gt;, weighing in at 59.8 MB — that contained the original TypeScript source, comments and all. The cause was either a misconfigured &lt;code&gt;.npmignore&lt;/code&gt; or a build pipeline that skipped artifact scanning, depending on who you ask. Either way, the file was there for anyone to extract. Security researcher Chaofan Shou was the first to notice.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Agent Security Gap</title>
      <link>/2026-03-30-the-agent-security-gap/</link>
      <pubDate>Mon, 30 Mar 2026 10:05:00 +0700</pubDate>
      <guid>/2026-03-30-the-agent-security-gap/</guid>
      <description>&lt;h2 id=&#34;why-adversarial-prompt-engineering-is-not-the-problem--and-what-actually-is&#34;&gt;Why adversarial prompt engineering is not the problem — and what actually is&lt;/h2&gt;
&lt;p&gt;In early 2023, a group of researchers demonstrated something that made security people uncomfortable and product people dismissive.&lt;/p&gt;
&lt;p&gt;They showed that a language model could be instructed to do things its creators never intended, not by the person using it, but by content it was asked to process.&lt;/p&gt;
&lt;p&gt;The paper was called &amp;ldquo;Not what you&amp;rsquo;ve signed up for.&amp;rdquo; The attack was called indirect prompt injection.&lt;/p&gt;
&lt;p&gt;Three years later, the industry still has not fully absorbed the lesson.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-fixation-on-prompt-injection&#34;&gt;The fixation on prompt injection&lt;/h2&gt;
&lt;p&gt;If you follow AI security discourse, you might think prompt injection is the central problem. It dominates conference talks. It tops the OWASP list. It generates endless proof-of-concept videos.&lt;/p&gt;
&lt;p&gt;And it should get attention. It is a real vulnerability.&lt;/p&gt;
&lt;p&gt;But the fixation on prompt injection obscures a more important truth: prompt injection is a symptom, not the disease.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
