<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI Experiments on Napat&#39;s Inverse Blog</title>
    <link>/ai-analysis/</link>
    <description>Recent content in AI Experiments on Napat&#39;s Inverse Blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 31 Mar 2026 20:30:00 +0700</lastBuildDate>
    <atom:link href="/ai-analysis/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Inside the Machine: What a Leaked Agentic Code Tool Reveals About AI Security</title>
      <link>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</link>
      <pubDate>Tue, 31 Mar 2026 20:30:00 +0700</pubDate>
      <guid>/ai-analysis/inside-the-machine-what-agentic-code-tool-source-reveals-about-ai-security/</guid>
      <description>&lt;p&gt;In March 2026, someone extracted the complete source code of Claude Code from an npm package and published it to GitHub. No modifications. No commentary. Excluding generated code, lock files, and test fixtures — roughly 512,000 lines of TypeScript, dumped into a repository with a single commit.&lt;/p&gt;
&lt;p&gt;How this happened is itself a security lesson. Anthropic published version 2.1.88 of their npm package with a production source map file — &lt;code&gt;cli.js.map&lt;/code&gt;, weighing in at 59.8 MB — that contained the original TypeScript source, comments and all. Depending on who you ask, the cause was a misconfigured &lt;code&gt;.npmignore&lt;/code&gt; or a build pipeline that skipped artifact scanning. Either way, the file was there for anyone to extract. Security researcher Chaofan Shou was the first to notice.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DeerFlow vs OpenClaw Security Analysis (AI Experiment)</title>
      <link>/ai-analysis/deer-flow-openclaw-security-analysis-experiment/</link>
      <pubDate>Fri, 27 Mar 2026 12:20:00 +0700</pubDate>
      <guid>/ai-analysis/deer-flow-openclaw-security-analysis-experiment/</guid>
      <description>An opinionated practitioner&#39;s deep dive into DeerFlow and OpenClaw security architectures — what works, what breaks at runtime, where the real risk lives, and how to harden before production.</description>
    </item>
    <item>
      <title>Securing OpenClaw from All Angles: A Practitioner Deep Dive</title>
      <link>/ai-analysis/openclaw-security-deep-dive-all-angles/</link>
      <pubDate>Sat, 21 Mar 2026 21:35:00 +0700</pubDate>
      <guid>/ai-analysis/openclaw-security-deep-dive-all-angles/</guid>
      <description>&lt;p&gt;OpenClaw is not a “chatbot deployment.” It is a high-privilege automation control plane that can read files, run commands, browse sites, call APIs, and operate across messaging channels.&lt;/p&gt;
&lt;p&gt;That means your security model must be closer to &lt;strong&gt;platform security&lt;/strong&gt; than to “prompt quality.”&lt;/p&gt;</description>
    </item>
    <item>
      <title>ZeroDayBench Replication: What Actually Holds Up in Practice</title>
      <link>/ai-analysis/zerodaybench-replication-field-notes/</link>
      <pubDate>Sat, 21 Mar 2026 18:48:00 +0700</pubDate>
      <guid>/ai-analysis/zerodaybench-replication-field-notes/</guid>
      <description>&lt;p&gt;One of the stranger things about AI security is how many people trust benchmark scores they would never trust anywhere else.&lt;/p&gt;
&lt;p&gt;If someone told you a new static analyzer catches 90% of vulnerabilities, your first question would be: 90% of what? In what code? Under what assumptions? What did it miss? But when an LLM benchmark shows a leaderboard, people often skip those questions and go straight to conclusions.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
