<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai on vnykmshr</title><link>https://blog.vnykmshr.com/writing/tags/ai/</link><description>Recent content in Ai on vnykmshr</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 15 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.vnykmshr.com/writing/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>The easy half</title><link>https://blog.vnykmshr.com/writing/the-easy-half/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/the-easy-half/</guid><description>&lt;p&gt;&amp;ldquo;The first AI that can earn its own existence, replicate, and evolve without needing a human.&amp;rdquo; That&amp;rsquo;s the pitch on the repo. I read the code this week. The engineering is real. The issue tracker is honest.&lt;/p&gt;
&lt;p&gt;First the engineering. It deserves credit. Orchestrator state machine with a DAG planner. Parent-child colony with typed messaging. Multi-chain wallet, self-modification with git audit, command-injection tests. Somebody thought hard. It shows.&lt;/p&gt;
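&lt;p&gt;For a sense of what the DAG planner reduces to, here is a minimal sketch &amp;ndash; the goal names and dependency graph are invented for illustration, not the project&amp;rsquo;s actual data model:&lt;/p&gt;

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical goal graph: each goal maps to the goals it depends on.
goals = {
    "draft_proposal": set(),
    "price_quote": {"draft_proposal"},
    "send_invoice": {"price_quote"},
}

# A DAG planner's core job: execute goals in dependency order.
plan = list(TopologicalSorter(goals).static_order())
print(plan)  # ['draft_proposal', 'price_quote', 'send_invoice']
```

&lt;p&gt;The ordering is the easy part; the engineering credit goes to everything wrapped around it &amp;ndash; the retries, the wallet, the git audit trail.&lt;/p&gt;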
&lt;p&gt;Then I read issue #300. A user ran it for 14 days. Completed 276 goals. Spent $39.26 on inference. Earned $0.00. Goals like &amp;ldquo;Create live proposal batch #265&amp;rdquo; and &amp;ldquo;Create deposit-ready close batch.&amp;rdquo; The agent looped on self-addressed sales artifacts because that&amp;rsquo;s all an LLM without customers can do. The survival pressure was supposed to force invention. It produced busywork.&lt;/p&gt;</description></item><item><title>Trust boundaries</title><link>https://blog.vnykmshr.com/writing/trust-boundaries/</link><pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/trust-boundaries/</guid><description>&lt;p&gt;I use coding agents on my own private repos every day. Security research, side projects, things I wouldn&amp;rsquo;t put on a public GitHub. Not something I&amp;rsquo;d do blindly with work source code though.&lt;/p&gt;
&lt;p&gt;So when someone turns off WiFi to prove the agent needs a network connection, I get it. But that&amp;rsquo;s the architecture. It&amp;rsquo;s on the pricing page. The agent works on your local files, the reasoning runs on a remote model. Both true, neither a secret.&lt;/p&gt;</description></item><item><title>What compounds</title><link>https://blog.vnykmshr.com/writing/what-compounds/</link><pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/what-compounds/</guid><description>&lt;p&gt;Something shifted. Not the AI thing &amp;ndash; everyone noticed that. What counts as proof.&lt;/p&gt;
&lt;p&gt;Used to be your resume, your title, the logo. Still opens doors. But the gap between &amp;ldquo;I can do X&amp;rdquo; and &amp;ldquo;here&amp;rsquo;s the commit&amp;rdquo; got wide enough that both sides feel it. A merged PR has a commit hash. A CVE has a number. A library someone depends on has a git log. Credentials got easier to claim. Artifacts didn&amp;rsquo;t.&lt;/p&gt;</description></item><item><title>The loop</title><link>https://blog.vnykmshr.com/writing/the-loop/</link><pubDate>Sat, 14 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/the-loop/</guid><description>&lt;p&gt;A handful of Go libraries on GitHub. MIT licensed, anyone can use them for anything, that was always the deal.&lt;/p&gt;
&lt;p&gt;But the deal isn&amp;rsquo;t about the license. It&amp;rsquo;s about the loop.&lt;/p&gt;
&lt;p&gt;Someone uses your thing, hits an edge case, opens an issue. Sometimes they send a fix. You review it, learn how people actually use what you built, catch a pattern you missed. That back and forth is the whole point. Code just sits there without it.&lt;/p&gt;</description></item><item><title>The detection trap</title><link>https://blog.vnykmshr.com/writing/detection-trap/</link><pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/detection-trap/</guid><description>&lt;p&gt;Read &lt;a href="https://www.techdirt.com/2026/03/06/were-training-students-to-write-worse-to-prove-theyre-not-robots-and-its-pushing-them-to-use-more-ai/"&gt;something recently&lt;/a&gt; about students deliberately making their writing &amp;ldquo;imperfect&amp;rdquo; so AI detectors don&amp;rsquo;t flag it. Removing polish, flattening style, adding imperfections on purpose. Their work got good enough to look suspicious.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;re doing the same thing with code reviews.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been on both sides of this. Written a clean abstraction, consistent naming, proper error boundaries, and watched someone in review go &amp;ldquo;this looks generated.&amp;rdquo; Years of caring about consistency and now consistency is the tell.&lt;/p&gt;</description></item><item><title>Repeat yourself</title><link>https://blog.vnykmshr.com/writing/repeat-yourself/</link><pubDate>Wed, 18 Feb 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/repeat-yourself/</guid><description>&lt;p&gt;If you repeat your prompt, the model gives you a better answer. Not a smarter model, not a bigger context window, not chain of thought &amp;ndash; you say the same thing twice and it works better. &lt;a href="https://arxiv.org/abs/2512.14982"&gt;Google researchers tested this&lt;/a&gt; across Gemini, GPT, Claude, DeepSeek &amp;ndash; 47 wins out of 70 benchmarks, zero losses.&lt;/p&gt;
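&lt;p&gt;A toy sketch of the mechanism, assuming a standard lower-triangular causal mask (my illustration, not the paper&amp;rsquo;s code): count how many positions the first token of the prompt can attend to, with and without a repeated copy in front of it.&lt;/p&gt;

```python
import numpy as np

def causal_mask(n):
    # Lower-triangular boolean mask: position i attends to positions 0..i.
    return np.tril(np.ones((n, n), dtype=bool))

prompt_len = 5
once = causal_mask(prompt_len)        # prompt sent once
twice = causal_mask(2 * prompt_len)   # prompt repeated back to back

# First token of a single prompt: one visible position (itself).
print(int(once[0].sum()))             # 1

# Same token at the start of the second copy: it now attends to the
# entire first copy plus itself.
print(int(twice[prompt_len].sum()))   # 6
```

&lt;p&gt;Nothing about the model changes; only how much context the earliest tokens of your question get.&lt;/p&gt;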
&lt;p&gt;In a transformer, token 1 can&amp;rsquo;t see token 50. Causal masking &amp;ndash; each token attends only to itself and what came before it. The first words of your prompt are always processed with the least context. They&amp;rsquo;re flying blind. When you repeat the prompt, the second copy&amp;rsquo;s early tokens can attend to the entire first copy. You&amp;rsquo;re giving the beginning of your question the context it never had.&lt;/p&gt;</description></item><item><title>Coding with LLMs</title><link>https://blog.vnykmshr.com/writing/coding-with-llms/</link><pubDate>Tue, 13 Jan 2026 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/coding-with-llms/</guid><description>&lt;p&gt;The tool changed. The craft did not.&lt;/p&gt;
&lt;p&gt;Six months with AI assistants.&lt;/p&gt;
&lt;p&gt;They write code faster than I read it. That is the problem.&lt;/p&gt;
&lt;p&gt;Fast at CRUD. Slow at concurrency. Good at common patterns. Bad at your specific constraints.&lt;/p&gt;
&lt;p&gt;The bugs are quieter now. No syntax errors. No obvious mistakes. Just wrong assumptions buried in correct-looking code.&lt;/p&gt;
&lt;p&gt;I review more carefully than before. Code I did not write but will debug in production.&lt;/p&gt;</description></item></channel></rss>