<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Go on vnykmshr</title><link>https://blog.vnykmshr.com/writing/tags/go/</link><description>Recent content in Go on vnykmshr</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 15 Nov 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.vnykmshr.com/writing/tags/go/index.xml" rel="self" type="application/rss+xml"/><item><title>autobreaker: adaptive circuit breaking</title><link>https://blog.vnykmshr.com/writing/autobreaker/</link><pubDate>Sat, 15 Nov 2025 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/autobreaker/</guid><description>&lt;p&gt;The &lt;a href="https://blog.vnykmshr.com/writing/circuit-breaking-go/"&gt;circuit breaker post&lt;/a&gt; from last year used a common trigger: trip after N consecutive failures. This works when traffic is predictable. It falls apart when it&amp;rsquo;s not.&lt;/p&gt;
&lt;p&gt;At 10,000 requests per second, 10 failures is noise &amp;ndash; a 0.1% error rate. A static threshold trips the circuit on what&amp;rsquo;s essentially a healthy service. At 10 requests per second, 10 failures is total collapse &amp;ndash; 100% error rate over one interval. The same threshold that false-positives under high traffic is too slow to protect under low traffic.&lt;/p&gt;</description></item><item><title>Replacing OCR with Gemini</title><link>https://blog.vnykmshr.com/writing/gemini-ocr/</link><pubDate>Thu, 10 Jul 2025 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/gemini-ocr/</guid><description>&lt;p&gt;The previous post covered an &lt;a href="https://blog.vnykmshr.com/writing/fixing-ocr-addresses/"&gt;address sanitizer&lt;/a&gt; that fixes mangled OCR output using multi-strategy matching. It works, but it&amp;rsquo;s treating a symptom. A smarter OCR step would make most of it unnecessary.&lt;/p&gt;
&lt;p&gt;Traditional OCR extracts characters, then downstream code figures out what they mean. A separate pipeline handles structure, validation, error correction. The address sanitizer is part of that pipeline. It exists because the OCR engine doesn&amp;rsquo;t understand what it&amp;rsquo;s reading.&lt;/p&gt;</description></item><item><title>Fixing OCR addresses</title><link>https://blog.vnykmshr.com/writing/fixing-ocr-addresses/</link><pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/fixing-ocr-addresses/</guid><description>&lt;p&gt;OCR on government documents works well until you look at the address fields.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;State: &amp;#34;DKI JAKRTA&amp;#34;
City: &amp;#34;JAKRTA PUSAT&amp;#34;
District: &amp;#34;MENTNG&amp;#34;
Village: &amp;#34;MENTENG&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Dropped vowels, character substitutions, truncated names. Indonesian place names get mangled in predictable ways &amp;ndash; &amp;lsquo;A&amp;rsquo; becomes &amp;lsquo;R&amp;rsquo;, characters vanish mid-word. The OCR engine reads the image fine. It just can&amp;rsquo;t spell.&lt;/p&gt;
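Because the corruption is predictable, matching the broken string against the known list of divisions by edit distance recovers many cases. A sketch of that idea, not the post's actual multi-strategy matcher, with an invented district list:

```go
package main

import (
	"fmt"
	"strings"
)

// editDistance is the classic Levenshtein DP over two rolling rows.
func editDistance(a, b string) int {
	prev := make([]int, len(b)+1)
	curr := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i != len(a)+1; i++ {
		curr[0] = i
		for j := 1; j != len(b)+1; j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			curr[j] = min2(min2(curr[j-1]+1, prev[j]+1), prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(b)]
}

func min2(x, y int) int {
	if x > y {
		return y
	}
	return x
}

// closest returns the candidate with the smallest edit distance.
func closest(input string, candidates []string) string {
	input = strings.ToUpper(input)
	best := ""
	bestDist := len(input) + 100
	for _, c := range candidates {
		d := editDistance(input, strings.ToUpper(c))
		if bestDist > d {
			best, bestDist = c, d
		}
	}
	return best
}

func main() {
	districts := []string{"MENTENG", "GAMBIR", "SENEN"}
	fmt.Println(closest("MENTNG", districts)) // prints MENTENG
}
```

Narrowing the candidate list by the already-resolved parent level keeps this cheap, since the hierarchy constrains which districts are even possible.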
&lt;p&gt;The problem: take these broken strings and map them back to real administrative divisions. Province, city, district, village &amp;ndash; the hierarchy matters, and every level needs to resolve correctly.&lt;/p&gt;</description></item><item><title>Circuit breaking in Go</title><link>https://blog.vnykmshr.com/writing/circuit-breaking-go/</link><pubDate>Sat, 28 Sep 2024 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/circuit-breaking-go/</guid><description>&lt;p&gt;A service calls a dependency. The dependency is slow or down. The service waits, ties up a goroutine, maybe a connection. Multiply that by every request in flight, and the caller is now as broken as the dependency it called.&lt;/p&gt;
&lt;p&gt;Circuit breaking stops this. Instead of waiting on something that&amp;rsquo;s failing, stop calling it. Let it recover. Try again later.&lt;/p&gt;
&lt;h2 id="three-states"&gt;Three states&lt;/h2&gt;
&lt;p&gt;A circuit breaker wraps external calls and tracks their outcomes.&lt;/p&gt;</description></item><item><title>Scout, plan, wait</title><link>https://blog.vnykmshr.com/writing/scout-plan-wait/</link><pubDate>Tue, 20 Aug 2024 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/scout-plan-wait/</guid><description>&lt;p&gt;The legacy codebase still runs the business. It is not small. It is six vertical functions deployed as separate services, sharing a data layer and a code tree, so &amp;ldquo;service&amp;rdquo; is a deployment unit here, not a boundary. It reads like a place rather than an architecture &amp;ndash; rooms we know the shortcuts of, walls not quite where a greenfield build would put them. It has been running for years. It works.&lt;/p&gt;</description></item><item><title>Redis caching patterns</title><link>https://blog.vnykmshr.com/writing/redis-caching-patterns/</link><pubDate>Thu, 20 Jun 2024 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/redis-caching-patterns/</guid><description>&lt;p&gt;Put Redis in front of a database and reads get fast. The cost is a cache layer that&amp;rsquo;s now load-bearing, and a set of failure modes that come with that.&lt;/p&gt;
&lt;p&gt;Three write patterns, three hard problems. The patterns determine consistency. The problems determine whether your cache layer is a net positive or a source of outages.&lt;/p&gt;
&lt;h2 id="write-patterns"&gt;Write patterns&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Cache-aside&lt;/strong&gt; (lazy loading). The application checks cache on read. On miss, it reads from the database and populates cache. Writes go directly to the database; cache entries are either invalidated or left to expire.&lt;/p&gt;</description></item><item><title>The compiler as first reviewer</title><link>https://blog.vnykmshr.com/writing/the-compiler-as-first-reviewer/</link><pubDate>Wed, 18 Jul 2018 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/the-compiler-as-first-reviewer/</guid><description>&lt;p&gt;When I joined, a few of us who had recently come in would skip the daily standups. Clock in, clock out, heads down. The rest of the team wasn&amp;rsquo;t sure what we were working on. This went on for a while. We were reading the existing codebase and quietly rewriting pieces in Go &amp;ndash; proving it could handle production before anyone had to commit to it.&lt;/p&gt;
&lt;p&gt;That changed when Go got formally introduced. The direction came from above &amp;ndash; Go was the language for the new services. Now it wasn&amp;rsquo;t a side experiment, it was the stack. And the PRs were coming in faster than anyone could review them all.&lt;/p&gt;</description></item><item><title>The GraphQL buffer</title><link>https://blog.vnykmshr.com/writing/the-graphql-buffer/</link><pubDate>Fri, 20 Apr 2018 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/the-graphql-buffer/</guid><description>&lt;p&gt;The GraphQL gateway started as a practical problem. We had mobile apps, web clients, and a growing number of backend services. Every client talked to every backend directly. When a new backend came up or an old one changed its API, every client needed updating. The gateway was supposed to fix that &amp;ndash; one schema, one endpoint, clients talk to GraphQL, GraphQL talks to backends.&lt;/p&gt;
&lt;p&gt;We built it in Go, starting from a fork of &lt;code&gt;graphql-go&lt;/code&gt;. The fork grew over time &amp;ndash; custom resolvers, caching layers, request batching, things we needed that the upstream didn&amp;rsquo;t have. We&amp;rsquo;d sync the fork every few months, but our changes kept growing. Five of us on the team, and most of the early days went into getting other teams to migrate their APIs onto the gateway. We built the base, got teams to add and own their own modules, then moved into a gatekeeping role &amp;ndash; reviewing what went in, making sure the schema stayed coherent.&lt;/p&gt;</description></item><item><title>The same tree, twice</title><link>https://blog.vnykmshr.com/writing/the-same-tree-twice/</link><pubDate>Fri, 22 Sep 2017 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/the-same-tree-twice/</guid><description>&lt;p&gt;I am building the promocode engine again.&lt;/p&gt;
&lt;p&gt;This is the second time. The first was in Node.js, at a previous company, on top of a small library called &lt;code&gt;business-rules&lt;/code&gt;. The engine worked. I thought the shape was brilliant.&lt;/p&gt;
&lt;p&gt;I am building it again in Go, from scratch, and the shape is exactly the same.&lt;/p&gt;
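That shape is an AND/OR condition tree: leaves compare a fact to a value, groups combine children. A minimal Go sketch with invented types, not the engine's actual code:

```go
package main

import "fmt"

// node is anything evaluable against a set of facts.
type node interface {
	eval(facts map[string]int) bool
}

// leaf passes when facts[fact] >= min.
type leaf struct {
	fact string
	min  int
}

func (l leaf) eval(facts map[string]int) bool {
	return facts[l.fact] >= l.min
}

// group combines children with ALL (AND) or ANY (OR) semantics.
type group struct {
	all      bool
	children []node
}

func (g group) eval(facts map[string]int) bool {
	for _, c := range g.children {
		ok := c.eval(facts)
		if g.all {
			if !ok {
				return false
			}
		} else if ok {
			return true
		}
	}
	return g.all
}

func main() {
	// (cartTotal >= 500) AND (ordersPlaced >= 1 OR loyaltyYears >= 2)
	rule := group{all: true, children: []node{
		leaf{"cartTotal", 500},
		group{children: []node{
			leaf{"ordersPlaced", 1},
			leaf{"loyaltyYears", 2},
		}},
	}}
	fmt.Println(rule.eval(map[string]int{"cartTotal": 600, "ordersPlaced": 3})) // prints true
}
```

Flattening such a tree into database rows and rebuilding it on load is what makes a rule data rather than code, while the facts it evaluates against still have to be computed live.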
&lt;h2 id="the-first-time"&gt;The first time&lt;/h2&gt;
&lt;p&gt;The first promocode engine was an AND/OR decision tree. A rule had conditions in groups, groups nested inside groups, leaves that compared a fact to a value. A rule was &lt;em&gt;data&lt;/em&gt; in the database &amp;ndash; a flattened tree stored across MySQL rows &amp;ndash; but it evaluated against facts that had to be computed live.&lt;/p&gt;</description></item><item><title>Designing a wallet</title><link>https://blog.vnykmshr.com/writing/designing-a-wallet/</link><pubDate>Sun, 20 Aug 2017 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/designing-a-wallet/</guid><description>&lt;p&gt;I&amp;rsquo;m building a wallet service in Go. Users add money from their bank, the balance sits in the wallet, and they spend it on the platform. Small payments &amp;ndash; the kind where going through a full bank authentication flow every time is more friction than the transaction is worth.&lt;/p&gt;
&lt;p&gt;The pitch is simple: top up once, spend without thinking. No OTP for every payment. No redirect to the bank&amp;rsquo;s page. One click, done.&lt;/p&gt;</description></item><item><title>The first service</title><link>https://blog.vnykmshr.com/writing/carving-the-first-service/</link><pubDate>Wed, 15 Mar 2017 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/carving-the-first-service/</guid><description>&lt;p&gt;The monolith is Perl. No framework I can identify &amp;ndash; just a large codebase that&amp;rsquo;s been growing for years. One Perl line does an alarming amount. I&amp;rsquo;m not a Perl developer and had to read through the code several times before I was confident I understood what it was doing.&lt;/p&gt;
&lt;p&gt;Production goes down for days sometimes. When it does, the team spends hours tracing through the code to figure out what broke. That&amp;rsquo;s the context. The system works, mostly. When it doesn&amp;rsquo;t, nobody quite knows why.&lt;/p&gt;</description></item><item><title>Switching to Go</title><link>https://blog.vnykmshr.com/writing/switching-to-go/</link><pubDate>Wed, 07 Dec 2016 00:00:00 +0000</pubDate><guid>https://blog.vnykmshr.com/writing/switching-to-go/</guid><description>&lt;p&gt;I&amp;rsquo;d been building production services with Node.js since 2013. Custom blog engines, API services, real-time backends. The ecosystem was rich, iteration was fast, and for moderate-load services, it worked well.&lt;/p&gt;
&lt;p&gt;The limits showed up at scale. A latency-sensitive payment processing API exposed three problems that were manageable at lower traffic but compounded at higher volumes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GC pauses.&lt;/strong&gt; Node&amp;rsquo;s garbage collector introduced latency spikes that were unpredictable and hard to tune. For payment processing, where consistent response times matter more than peak throughput, this was the biggest concern. A p99 spike during a GC pause means a user staring at a spinner while their payment hangs.&lt;/p&gt;</description></item></channel></rss>