
One Cloudflare Tunnel for a Fleet of Small Next.js Apps

I route 10+ hostnames through a single cloudflared daemon on a Coolify host. Here is the config shape and why it works better than one tunnel per app.

cloudflare-tunnel · coolify · traefik · solo-founder

I run one Cloudflare Tunnel for all my production sites. It is called 'drafted-by' and it fronts 10+ hostnames across the fleet. Every hostname, apex or subdomain, is a CNAME pointing to <tunnel-id>.cfargotunnel.com with proxied=true. The tunnel daemon (cloudflared) runs on the same Coolify host as the apps. Traefik on that host routes by Host header to the right container.

This pattern took me about six months to settle on. I started with one tunnel per app, then consolidated. Here is the actual config shape I run, the trade-offs, and the two-minute onboarding for a new app.

Why one tunnel instead of one per app

The obvious alternative is one cloudflared instance per app. Each gets its own systemd unit, its own connection pool, its own config file. That works. But for a fleet of small Next.js apps run by one person, the overhead adds up.

A single binary, a single systemd unit to monitor. Less surface area for tunnel-specific failures. If cloudflared has a memory leak in a particular version, I catch it once instead of 10 times. The shared connection pool to Cloudflare's edge means each ingress rule is just a config entry, not a network process. Adding a new app is one Coolify deploy, one ingress rule in tunnel config, one CNAME at apex. Two minutes once you have the pattern memorized.

The blast radius trade-off is real: if the one daemon crashes, every site is down. In practice, cloudflared is rock solid. I have had it running for months without a restart, and systemd auto-restarts it on failure. cloudflared also keeps four connections open to Cloudflare's edge, so a single dropped connection is usually invisible for the 30 seconds or so it takes a replacement to establish. I have not noticed a site outage from tunnel failure in over a year.

The tunnel ingress shape I actually run

The config is managed via Cloudflare's API rather than a local YAML file on the host, which means edits are atomic, versioned by the API, and do not depend on me remembering to keep /etc/cloudflared/config.yaml in git. Written out as YAML, though, the shape looks like this:

tunnel: drafted-by

ingress:
  - hostname: webhooks.draftedby.com
    service: http://192.168.0.102:3210

  - hostname: preparemescours.fr
    service: http://127.0.0.1:80

  - hostname: preprod.preparemescours.fr
    service: http://127.0.0.1:3206

  - hostname: insights.draftedby.com
    service: http://127.0.0.1:80

  - hostname: draftedby.com
    service: http://127.0.0.1:3100

  - hostname: jdmcasanova.com
    service: http://127.0.0.1:80

  - service: http_status:404

To add a hostname I PUT /accounts/{account}/cfd_tunnel/{id}/configurations with the full ingress array. Cloudflare's edge picks up the change in seconds. Same daemon, no restart.
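
Scripted, that call is only a few lines. Here is a minimal sketch in TypeScript (Node 18+ with built-in fetch); the account ID, tunnel ID, and API token are placeholder env vars, and the real array obviously carries the full fleet:

// Sketch: replace the remotely managed tunnel config with a full ingress array.
// CF_ACCOUNT_ID, CF_TUNNEL_ID, CF_API_TOKEN are placeholders read from the env.
const accountId = process.env.CF_ACCOUNT_ID!;
const tunnelId = process.env.CF_TUNNEL_ID!;
const apiToken = process.env.CF_API_TOKEN!;

const ingress = [
  { hostname: "webhooks.draftedby.com", service: "http://192.168.0.102:3210" },
  { hostname: "draftedby.com", service: "http://127.0.0.1:3100" },
  // ...the rest of the hostnames...
  { service: "http_status:404" }, // the catch-all has to stay last
];

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/cfd_tunnel/${tunnelId}/configurations`,
  {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    // The endpoint replaces the whole config, so always send every rule.
    body: JSON.stringify({ config: { ingress } }),
  },
);
if (!res.ok) throw new Error(`tunnel config update failed: ${res.status}`);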

Two routing strategies coexist in this config. Most production apps go through port 80, where Traefik reads the Host header and routes to the correct container; that is the default for any Coolify project with an FQDN set. TLS is already terminated at Cloudflare's edge, so Traefik only ever sees plain HTTP.

Preprod environments use direct ports. preprod.preparemescours.fr goes straight to port 3206, bypassing Traefik entirely. I do this because preprod apps often run different container versions or carry experimental configs, and I want isolated upstreams so a preprod crash does not touch the Traefik routing table. Each preprod gets its own port; Coolify auto-assigns them, and I just copy the port number into the tunnel config.

The catch-all 404 at the bottom is critical. Without it, a request for an unlisted hostname that still resolves to the tunnel falls through to the Coolify host and returns a confusing Traefik error page. With the catch-all, it returns a clean HTTP 404 from Cloudflare's edge. I learned this the hard way when a stale DNS record pointed at my tunnel and I spent an hour debugging a mysterious 502.

The DNS side: CNAME everywhere, proxied=true

Every hostname gets a CNAME record pointing to <tunnel-id>.cfargotunnel.com with the orange cloud enabled. CNAME at apex works because Cloudflare flattens it for ANAME-like resolution. Subdomains use plain CNAMEs to the same target. Same shape across the fleet.

The key detail is proxied=true. Without it, Cloudflare returns the cfargotunnel target as a raw DNS response, which does not route through Cloudflare's edge to the tunnel. With proxied=true, the edge handles the connection. TLS termination happens there, not on my homelab. No Let's Encrypt renewal, no certificate management on the Coolify host.
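
The record itself is one API call per hostname. A rough sketch, assuming the zone already lives on Cloudflare and the token has DNS edit rights for it (zone ID and hostname are whatever you are onboarding):

// Sketch: create the proxied CNAME pointing a hostname at the tunnel.
async function addTunnelCname(zoneId: string, hostname: string, tunnelId: string) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/dns_records`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "CNAME",
        name: hostname,                          // apex or subdomain, same shape
        content: `${tunnelId}.cfargotunnel.com`, // the tunnel target
        proxied: true,                           // orange cloud: route via the edge
      }),
    },
  );
  if (!res.ok) throw new Error(`DNS record create failed for ${hostname}: ${res.status}`);
}

The same call covers the apex and subdomains alike; the flattening happens on Cloudflare's side.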

The Vercel-to-Coolify cutover I described in the migration post was just deleting the old A record (Vercel anycast IP) and adding the apex CNAME to the tunnel target. The tunnel part was the easiest step.

Why not a public-facing Caddy or Nginx instead

If you have a static IP, you can expose services directly with Caddy or Nginx and skip the tunnel entirely. I do not have that luxury. My residential ISP assigns a dynamic IP and blocks ports 80 and 443 on consumer plans. I could use dynamic DNS and a reverse proxy on a VPS, but that adds another server to manage.

Cloudflare Tunnel solves both problems in one. The daemon makes an outbound connection to Cloudflare's edge, so no open ports on my home network. TLS termination at Cloudflare's edge means no certificate renewal on my end. The only requirement is that cloudflared can reach the internet, which it can through any NAT or CGNAT setup.

The trade-off: I am dependent on Cloudflare's edge being up. In practice, Cloudflare's uptime is better than my homelab's uptime. I have had more outages from my ISP's modem rebooting than from Cloudflare's edge.

A note on naming

I originally named the tunnel 'homelab' because it started as a way to expose a few internal tools. As I added production sites, the name became inaccurate. The tunnel routes the Drafted By suite, not the whole homelab. I renamed it to 'drafted-by' once I noticed.

Tunnel name is pure metadata. The rename was a single PATCH /accounts/{account}/cfd_tunnel/{id} with { "name": "drafted-by" }. No new credentials file, no new tunnel ID, no DNS updates required. The CNAME records still resolve through the same <tunnel-id>.cfargotunnel.com target, because the tunnel ID does not change with a rename. If you are starting fresh, just pick a name that matches the application set, not the hardware.
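
As a sketch, the same rename through fetch (same placeholder env vars as before):

// Sketch: rename the tunnel. Only metadata changes; the tunnel ID stays put.
await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${process.env.CF_ACCOUNT_ID}/cfd_tunnel/${process.env.CF_TUNNEL_ID}`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ name: "drafted-by" }),
  },
);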

Adding a new app: the two-minute checklist

When I launch a new site, the process is mechanical. First, deploy the app in Coolify; it auto-assigns a port and an FQDN. Second, add an ingress rule to the tunnel config: port 80 if the app uses Traefik routing, the direct port if it is a preprod or experimental app. Third, add a CNAME record in the domain's Cloudflare zone pointing to <tunnel-id>.cfargotunnel.com with proxied=true.
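
Steps two and three are the scriptable half. A rough sketch, reusing addTunnelCname from the DNS section and assuming the configurations endpoint returns the current config under result.config; the hostname and zone here are made up:

// Sketch: onboard a new hostname. Read the current ingress, insert the new
// rule ahead of the catch-all, write the whole array back, then add DNS.
const newRule = { hostname: "newapp.draftedby.com", service: "http://127.0.0.1:80" }; // hypothetical

const configUrl = `https://api.cloudflare.com/client/v4/accounts/${process.env.CF_ACCOUNT_ID}/cfd_tunnel/${process.env.CF_TUNNEL_ID}/configurations`;
const headers = {
  Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
  "Content-Type": "application/json",
};

const current = await (await fetch(configUrl, { headers })).json();
const ingress: any[] = current.result.config.ingress;
ingress.splice(ingress.length - 1, 0, newRule); // keep http_status:404 last

await fetch(configUrl, {
  method: "PUT",
  headers,
  body: JSON.stringify({ config: { ingress } }),
});

// Step three: the proxied CNAME for the new hostname (see the DNS sketch above).
await addTunnelCname(process.env.CF_ZONE_ID!, newRule.hostname, process.env.CF_TUNNEL_ID!);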

That is it. The tunnel daemon picks up the config change within a few seconds. Cloudflare's edge starts routing traffic to the new hostname within a minute. No new binaries, no new systemd units, no new connection pools.

The honest trade-off

One tunnel means one point of failure. If cloudflared has a bug that causes a crash loop, every site is down until I fix it. The mitigation is systemd auto-restart and Cloudflare's connection pool, but it is not zero-downtime. If you need absolute isolation between apps, run separate tunnels.

For my scale, the simplicity wins. I can reason about the entire ingress layer in one config. I can restart the tunnel daemon and know exactly what will break and what will not. I have not needed to debug a tunnel issue in the six months since I consolidated. That is the kind of boring infrastructure I want for a one-person operation.