Engineering

How I deployed this blog for under $10 a month

A no-frills production stack for a personal site: Next.js + Payload in a Docker image on Cloud Run, Postgres and media on Supabase, and Cloudflare absorbing the read traffic so the origin barely wakes up.

Every personal site is a deployment problem in disguise. The code is the easy part — the hard part is finding a stack that costs almost nothing when nobody is reading, scales to a hug-of-death without a rewrite, and does not turn into a second job to operate. This is the stack I landed on for this site, and the reasoning behind each choice.

The shape is small on purpose: Cloudflare in front, a single Docker image of Next.js plus Payload running on Google Cloud Run, and Supabase for both Postgres and media storage. No Kubernetes, no separate CMS host, no S3 bucket policy gymnastics. The whole thing is one container talking to one managed database, with a CDN doing the heavy lifting on read traffic.

Cloud Run is the centerpiece. It scales to zero when nothing is happening, charges in 100ms increments when it is, and gives me a real Linux container with no cold-start hacks for SSR. I run it with one minimum instance in production — that costs roughly five to ten dollars a month and removes the only real downside of a serverless platform for a public-facing site, which is the first-visit cold start.
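In gcloud terms the warm-instance setup is a single flag on the deploy command. A sketch, where the service name, project, region, and secret names are all placeholders rather than this site's actual configuration:

```shell
# Illustrative deploy; project, region, image path, and secret names are placeholders.
gcloud run deploy blog \
  --image europe-west1-docker.pkg.dev/my-project/web/blog:latest \
  --region europe-west1 \
  --min-instances 1 \
  --max-instances 3 \
  --allow-unauthenticated \
  --set-secrets "DATABASE_URI=blog-database-uri:latest,PAYLOAD_SECRET=blog-payload-secret:latest"
```

`--min-instances 1` is the line that buys away cold starts; `--set-secrets` is what keeps credentials in Secret Manager instead of the image.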

Supabase replaces two things at once. Postgres lives behind their Supavisor pooler, which matters more than people think: a serverless runtime like Cloud Run can spin up dozens of instances during a traffic spike, and direct Postgres connections on port 5432 will exhaust the database's connection limit long before you notice. The pooled connection on port 6543 in transaction mode is the one to use. Their object storage exposes an S3-compatible endpoint, so Payload can write uploads through the standard S3 storage adapter and serve them straight from the Supabase CDN.
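Concretely, the two connection strings look something like this; the project ref, region, and password below are placeholders:

```shell
# Illustrative Supabase connection strings; project ref, region, and password are placeholders.
# Pooled via Supavisor (transaction mode, port 6543) — what Cloud Run should use:
DATABASE_URI="postgresql://postgres.abcdefghijkl:PASSWORD@aws-0-eu-central-1.pooler.supabase.com:6543/postgres"
# Direct connection (port 5432) — fine for local development and migrations, not for serverless:
DATABASE_URI_DIRECT="postgresql://postgres:PASSWORD@db.abcdefghijkl.supabase.co:5432/postgres"
```

Note the username changes shape on the pooled string (`postgres.<project-ref>`), which is how Supavisor routes the connection to the right project.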

The single most important pre-flight change was getting media off the local filesystem. Cloud Run containers have an ephemeral disk — anything written there disappears when the instance recycles, which it will, often. Payload defaults to a local static directory for uploads, which works beautifully in development and silently loses data the moment you deploy. Swapping in the S3 storage adapter and pointing it at Supabase Storage is a ten-line change that prevents an entire category of disaster.
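The adapter change really is small. A sketch of the relevant `payload.config.ts` excerpt, assuming Payload 3's `@payloadcms/storage-s3` plugin; the `media` collection slug, bucket name, and env var names are assumptions:

```ts
// payload.config.ts (excerpt) — a sketch; collection slug, bucket,
// and env var names are assumptions, not this site's exact config.
import { buildConfig } from 'payload'
import { s3Storage } from '@payloadcms/storage-s3'

export default buildConfig({
  // ...collections (including a `media` upload collection), db adapter, etc.
  plugins: [
    s3Storage({
      collections: { media: true }, // route the upload collection through S3
      bucket: process.env.S3_BUCKET ?? 'media',
      config: {
        // Supabase's S3-compatible endpoint, typically of the form
        // https://<project-ref>.supabase.co/storage/v1/s3
        endpoint: process.env.S3_ENDPOINT,
        region: process.env.S3_REGION ?? 'us-east-1',
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY_ID ?? '',
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY ?? '',
        },
        forcePathStyle: true, // Supabase serves buckets path-style, not as subdomains
      },
    }),
  ],
})
```

With that in place, the local `staticDir` stops being load-bearing, and the container's ephemeral disk no longer matters for uploads.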

Cloudflare is where the cost story actually closes. With proxied DNS in front of the Cloud Run URL and a single cache rule that says "cache every path that is not /admin or /api", the public site starts serving HTML out of the edge instead of waking up the origin on every request. Long-cache rules for /_next/static and /_next/image push the edge hit ratio higher still. The Cloud Run logs go from steady traffic to a quiet trickle of cache misses, and the bill follows the logs.
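The matching rule can be expressed in Cloudflare's rules language roughly as follows; the path prefixes are this site's, so adjust for yours:

```
(not starts_with(http.request.uri.path, "/admin")) and (not starts_with(http.request.uri.path, "/api"))
```

Pair that expression with "Eligible for cache" and an edge TTL of, say, an hour, plus a second rule giving `/_next/static/*` a much longer TTL: those filenames are content-hashed, so they are safe to cache essentially forever.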

The piece that catches people out is cache invalidation. If you publish a post and the page is cached at the edge for an hour, the post does not appear for an hour. The fix is a small Payload afterChange hook on Posts, Projects, and the global documents that calls the Cloudflare purge API for the affected paths. Five minutes of work, and publish-to-live becomes instant again.
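A sketch of such a hook, assuming posts live under `/posts/<slug>` and that the zone ID and an API token with cache-purge permission are available as environment variables; `SITE_URL`, `CF_ZONE_ID`, and `CF_API_TOKEN` are hypothetical names, not this site's actual configuration:

```ts
// Hypothetical purge hook; SITE_URL, CF_ZONE_ID, CF_API_TOKEN, and the
// /posts/<slug> URL shape are assumptions about the site's setup.

// Pure helper: which edge URLs go stale when a post changes.
function purgeUrlsForPost(siteUrl: string, slug: string): string[] {
  return [`${siteUrl}/`, `${siteUrl}/posts/${slug}`]
}

// Payload `afterChange` hook shape: receives the changed doc, returns it.
async function purgeAfterChange({ doc }: { doc: { slug: string } }) {
  const files = purgeUrlsForPost(process.env.SITE_URL ?? '', doc.slug)
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${process.env.CF_ZONE_ID}/purge_cache`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      // Cloudflare's purge-by-URL endpoint takes a `files` array of absolute URLs.
      body: JSON.stringify({ files }),
    },
  )
  if (!res.ok) console.error('Cloudflare purge failed:', res.status)
  return doc
}
```

The hook is deliberately fire-and-forget: a failed purge logs an error but never blocks the publish itself.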

The Dockerfile is boring on purpose. A three-stage build — install dependencies, run next build with the standalone output flag, copy the standalone bundle into a thin runner image — produces something around two hundred megabytes. Build-time arguments bake the public server URL into the client bundle, secrets are pulled at runtime from Google Secret Manager rather than baked into the image, and the container runs as a non-root user. Nothing clever, which is the point.
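A sketch of that three-stage build, assuming `output: 'standalone'` is set in `next.config` and npm as the package manager; the Node version and paths are illustrative:

```dockerfile
# Illustrative three-stage build; Node version, package manager, and paths are assumptions.
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# NEXT_PUBLIC_* values are baked into the client bundle at build time.
ARG NEXT_PUBLIC_SERVER_URL
ENV NEXT_PUBLIC_SERVER_URL=$NEXT_PUBLIC_SERVER_URL
RUN npm run build   # requires `output: 'standalone'` in next.config

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production PORT=8080
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
USER node
EXPOSE 8080
CMD ["node", "server.js"]
```

The runner stage copies only the standalone bundle, static assets, and public files, which is where the small image size comes from; secrets arrive at runtime via Cloud Run's Secret Manager integration, never at build time.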

The thing I would tell anyone setting this up for the first time is to sequence it carefully. Get the repo changes in first — storage adapter, standalone output, healthcheck route, .dockerignore — and verify the Docker image runs locally against a real Supabase database before you ever deploy. Stand up Cloud Run with the *.run.app URL and confirm that the admin panel works, that uploads land in Supabase Storage, and that no part of the system is silently relying on the local filesystem. Only then put Cloudflare in front. Skipping the local-Docker step is how people end up debugging a broken production deploy at midnight.
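The healthcheck route can be as small as a single App Router handler; the file path here is an assumption about the project layout:

```ts
// app/api/health/route.ts — a minimal healthcheck sketch; the route path
// is an assumption about this project's layout.
export function GET(): Response {
  // Anything that returns a 200 is enough for a basic liveness probe.
  return Response.json({ status: 'ok' })
}
```

Something like `docker run --env-file .env.production -p 8080:8080 blog` followed by a curl of `/api/health` locally will surface most of the misconfigurations before they ever reach Cloud Run.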

On a steady month this site runs for less than the price of a coffee. Cloudflare on the free plan, Supabase on the free tier, Cloud Run with a single warm instance and almost no billable request time on top. The architecture is also embarrassingly easy to scale up if traffic ever justifies it — raise the max instance count, move to Supabase Pro, turn on Cloudflare Image Resizing — without changing a single line of application code. That, more than the price tag, is what I like about it.