
Migrating a Next.js portfolio off Vercel to Coolify
Cutover steps from Vercel to a homelab Coolify instance, what broke, and why operational coherence beat the symbolic cost savings.
The site is jdmcasanova.com, a Next.js 16 single-page portfolio. It was on the Vercel free tier, so the direct savings here were symbolic. The move was about consolidating onto the same homelab where I already run draftedby.com, the production preparemescours.fr cutover, the EdTech preprods (DraftMyLesson, PrzygotujLekcje, CreaClases), insights.draftedby.com (Umami), and the internal webhooks.
I had one afternoon. Here is exactly what I did, what broke, and why I think the trade-off is worth it for anyone running a small fleet of Next.js sites.
Prep the repo for self-host
The first step was making the Next.js build produce a standalone output that can run in a container without shipping the full node_modules tree. I added output: 'standalone' to next.config.mjs, which tells Next.js to generate a .next/standalone folder with only the files needed at runtime.
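For reference, the whole change in next.config.mjs is one key (shown here without whatever other options the real config already carries):

// next.config.mjs
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit .next/standalone: a minimal server.js plus only the
  // node_modules files the app actually needs at runtime.
  output: "standalone",
};

export default nextConfig;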
I caught one Vercel-ism before the first deploy: app/opengraph-image.tsx was using runtime = "edge". The edge runtime expects Vercel's edge infrastructure; inside a Node standalone Docker image it has nothing to run on. I switched to runtime = "nodejs" and the OG image route flipped from dynamic to a normal static page in the build output.
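The fix is a one-line export at the top of the route file; everything else in app/opengraph-image.tsx stays the same:

// app/opengraph-image.tsx
export const runtime = "nodejs"; // was "edge", which has nothing to run on in a standalone image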
I also replaced the Google Analytics snippet with a small UmamiScript server component that no-ops if the env vars are unset. Umami is self-hosted on the same homelab at insights.draftedby.com. The component is roughly this:
import Script from "next/script";

export default function UmamiScript() {
  const id = process.env.NEXT_PUBLIC_UMAMI_WEBSITE_ID;
  const host = process.env.NEXT_PUBLIC_UMAMI_HOST;
  if (!id || !host) return null;
  return (
    <Script
      defer
      src={`${host}/script.js`}
      data-website-id={id}
      strategy="afterInteractive"
    />
  );
}
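It gets rendered once from the root layout. A minimal sketch, assuming the component lives under app/components (the real layout has more in it):

// app/layout.tsx (sketch)
import type { ReactNode } from "react";
import UmamiScript from "./components/UmamiScript"; // path is illustrative

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        <UmamiScript />
      </body>
    </html>
  );
}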
The Dockerfile is multi-stage. The deps stage runs npm ci, which installs strictly from the lockfile. The builder stage bakes the NEXT_PUBLIC_UMAMI_* env vars at build time so they are present in the client bundle. The runner stage copies .next/standalone, .next/static, and public, then runs node server.js as the node user. The base image is node:24-slim.
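A condensed version of that Dockerfile, trimmed to the shape described above (treat the exact ARG names and COPY paths as illustrative rather than a byte-for-byte copy of the real file):

FROM node:24-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:24-slim AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# NEXT_PUBLIC_* values must exist at build time to end up in the client bundle
ARG NEXT_PUBLIC_UMAMI_WEBSITE_ID
ARG NEXT_PUBLIC_UMAMI_HOST
ENV NEXT_PUBLIC_UMAMI_WEBSITE_ID=$NEXT_PUBLIC_UMAMI_WEBSITE_ID
ENV NEXT_PUBLIC_UMAMI_HOST=$NEXT_PUBLIC_UMAMI_HOST
RUN npm run build

FROM node:24-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
USER node
EXPOSE 3000
CMD ["node", "server.js"]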
Provision the Coolify side
I already had Coolify running on the homelab. The tricky part was the deploy key. I could not reuse the existing SSH key from the draftedby.com repo because GitHub only allows a given public key to be attached as a deploy key to one repository. I generated a new ed25519 keypair, uploaded the private key to Coolify, and added the public key to the resume-website repo as a read-only deploy key.
I created a new Coolify project and production environment, then a private-deploy-key application pointing to [email protected]:jdmcasanova/resume-website.git. The build pack was Dockerfile and the exposed port was 3000.
The first deploy via the Coolify API took about 25 seconds. The layers were already cached on the host from previous builds, so only the new code needed to be compiled. I did an internal smoke test from my LAN using a Host header because hairpin NAT was blocking the direct sslip URL.
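The smoke test itself was just curl against the host's LAN address, letting Traefik route on the Host header. Both the IP and the sslip hostname below are placeholders:

curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Host: resume-website.10.0.0.10.sslip.io" \
  http://10.0.0.10/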
Wire Umami
I created a new website on the self-hosted Umami instance via its REST API. The endpoint is POST /api/websites with an auth token. I got back a website-id UUID. Then I set NEXT_PUBLIC_UMAMI_WEBSITE_ID and NEXT_PUBLIC_UMAMI_HOST as build-time environment variables in the Coolify application settings.
This step was straightforward. The Umami instance was already running, so it was just a matter of registering the new site and passing the ID to the build.
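For the record, the registration call is roughly this shape; the bearer token is a placeholder, and the response body carries the new website id to feed into the build env:

curl -s -X POST https://insights.draftedby.com/api/websites \
  -H "Authorization: Bearer $UMAMI_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "jdmcasanova.com", "domain": "jdmcasanova.com"}'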
DNS cutover via Cloudflare Tunnel
The site reused an existing Cloudflare Tunnel that already handled traffic for draftedby.com, preparemescours.fr, the preprods, insights, and webhooks. I renamed the tunnel from 'homelab' to 'drafted-by' since that is what is actually on it now.
I added one ingress rule in the tunnel config: jdmcasanova.com -> http://127.0.0.1:80. This goes before the catch-all 404 rule. Traefik on the Coolify host routes by Host header, so it needs to see the original hostname.
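In the tunnel config that is a small addition ahead of the catch-all; cloudflared forwards the original Host header by default, which is exactly what Traefik routes on. A trimmed sketch:

ingress:
  - hostname: jdmcasanova.com
    service: http://127.0.0.1:80
  # ...existing rules for draftedby.com, preparemescours.fr, the preprods, insights, webhooks...
  - service: http_status:404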
I updated the Coolify application FQDN to http://jdmcasanova.com (not https). Cloudflare proxy terminates TLS at the edge. The container side is plain HTTP. This avoids the complexity of managing TLS certificates on the homelab.
I restarted the app so Traefik picked up the new Host label. Then I deleted the apex A record pointing at Vercel (216.198.79.1) and created a CNAME at apex pointing to <tunnel-id>.cfargotunnel.com with proxied=true. Cloudflare flattens CNAMEs at apex automatically, so this works without an A record.
What broke and what I want to be honest about
The first FQDN PATCH was blocked by a guardrail in my own automation, not by Coolify. The agent doing the cutover had a hook that flags production changes on shared infra and waits for an explicit go. Annoying for two seconds, but exactly the right behaviour for a one-shot DNS flip. I confirmed and it walked the cutover sequence cleanly.
DeepSeek's first generation for this blog hallucinated a Hetzner AX102 setup and made-up dollar costs when nothing in the brief mentioned either. That is the risk with any LLM-generated infra writing: either fact-check every concrete detail after the fact, or feed the model a tight brief and edit aggressively. I wrote this one from memory and the actual deploy logs (see the welcome note for what this blog is and is not).
The hook of self-hosting is not that it is cheaper. For a single static portfolio on Vercel free tier, the cost was already zero. The win is operational coherence. I now have one place to operate all my sites. One set of deploy keys, one tunnel config, one monitoring dashboard, one set of build pipelines.
The actual trade-off
Self-hosting a Next.js site on a homelab is not for everyone. You need to be comfortable with Docker, with Traefik routing by Host header, and with the fact that a power outage or a flaky upstream link takes down everything at once. Cloudflare Tunnel handles TLS at the edge so the certificate problem goes away, but you trade it for trusting a single tunnel daemon to stay healthy.
The win for me is operational coherence, not cost. The same Coolify panel deploys every site. The same Umami instance tracks every site. The same drafted-by tunnel routes every site. Adding a new app means one project + one Cloudflare Tunnel ingress rule + one CNAME, and the rest is shared.
Vercel is fine for a single side project. For a solo founder with a handful of production Next.js apps already on shared infra, pulling the orphans onto the same homelab pays off the day you have to debug something at 11pm and you only have one place to look.
That is the honest takeaway. No savings, no revolution. One more service moved to a single place I already operate.