I’d been building production services with Node.js since 2013. Custom blog engines, API services, real-time backends. The ecosystem was rich, iteration was fast, and for moderate-load services, it worked well.

The limits showed up at scale. A latency-sensitive payment processing API exposed three problems that were manageable at lower traffic but compounded at higher volumes.

GC pauses. Node’s garbage collector introduced latency spikes that were unpredictable and hard to tune. For payment processing, where consistent response times matter more than peak throughput, this was the biggest concern. A p99 spike during a GC pause means a user staring at a spinner while their payment hangs.

CPU-bound work. Node excels at I/O-bound concurrency. Financial calculations, data validation, complex business logic – these are CPU-bound, and they block the event loop. Offloading to separate processes via cluster or child_process was possible but felt bolted on, not native.

Async complexity at scale. Callbacks gave way to Promises, with async/await on the horizon but not yet mainstream. Each iteration improved readability, but reasoning about complex async flows in a large codebase remained harder than it should have been. Error handling in particular – a swallowed rejection in a Promise chain could silently drop a transaction.

Why Go

Go’s goroutine model was the initial draw. Lightweight concurrency without the mental overhead of managing an event loop or thread pool:

go func() {
    processPayment(order)
}()

Thousands of goroutines on a single process, scheduled by the runtime, communicating through channels. The concurrency model maps naturally to the problem: one goroutine per request, blocking I/O that’s actually non-blocking underneath.

But concurrency wasn’t the only factor. Static typing caught errors at compile time that would have reached production in JavaScript. go fmt eliminated style debates. The race detector found data races in tests that would have been intermittent production bugs.

The toolchain was opinionated and complete. In Node.js, assembling a comparable setup required ESLint, Prettier, TypeScript, a test framework, a coverage tool – each with its own configuration. Go shipped with formatting, testing, benchmarking, profiling, and race detection built in.

Migration results

The payment API was one of the first services migrated. After the cut-over:

  • p95 latency dropped by a multiple, not a percentage
  • Memory usage stopped being variable – it held steady at a low baseline regardless of load
  • The same hardware absorbed substantially more concurrent traffic
  • p99 latency became predictable – no GC-induced spikes

The memory behaviour was the most operationally significant. Consistent memory usage means predictable capacity planning. No more over-provisioning to absorb GC headroom.

What we gave up

Iteration speed. Node.js was faster for prototyping. JavaScript’s flexibility, the npm ecosystem, the ability to test ideas without a compile step – these are real advantages when you’re exploring a problem space. Go is better for the service that runs in production for years. Node.js is better for the prototype you write in an afternoon.

Ecosystem breadth. In 2016, Go’s library ecosystem was thinner than npm’s. Things that were an npm install away sometimes required writing from scratch in Go. Dependency management was GOPATH and vendoring tools like glide – workable but primitive compared to npm.

Team ramp-up. Static typing, explicit error handling, and composition over inheritance required adjustment. Engineers coming from JavaScript found Go’s verbosity frustrating initially. The error handling pattern especially:

result, err := processPayment(order)
if err != nil {
    log.Printf("payment failed for order %s: %v", order.ID, err)
    return handlePaymentError(err)
}

It reads as boilerplate at first. The verbosity is the point: every error is handled explicitly at the call site. No silent failures, no swallowed exceptions. After a few weeks, most of the team came around on this.

Where each fits

Go for long-running services with predictable performance requirements. Payment processing, API gateways, background workers, anything where reliability and consistent latency matter more than development speed.

Node.js for rapid prototyping, JavaScript-heavy full-stack applications, and services that lean heavily on npm’s ecosystem. Also for teams where JavaScript expertise is deep and the performance requirements don’t push past what the runtime can deliver.

The mistake is treating this as a religious question. It’s an engineering trade-off with measurable dimensions: latency characteristics, memory profile, concurrency model, ecosystem maturity, team familiarity. The right answer depends on what you’re building and who’s building it.

In production, the payment API became known as the boring service – nobody had reason to talk about it because it just worked. The migration cost was real: weeks of rewriting, team training, toolchain changes. The return was worth it.