
Self-hosting MinIO for free S3-compatible storage on your homelab

I replaced AWS S3 with a self-hosted MinIO instance on Coolify. Zero storage bill, zero egress costs, same SDK. Here is how I set it up and the trade-offs.

minio · self-hosted · storage · homelab · solo-founder

I run six production sites alone. Every one of them needs somewhere to store images, PDFs, and lesson archives. For two years I paid AWS S3 for that. Then I looked at the bill for egress alone and decided there had to be a better way.

There is. It is called MinIO. It speaks the S3 API natively, runs on a single LXC container, and costs me exactly zero dollars per month.

This post is the exact setup I use at /opt/drafted-minio/ on a Coolify-managed LXC container. It serves as the S3-compatible warm-standby for the Drafted By suite: PrepareMesCours, DraftMyLesson, PrzygotujLekcje, Carriva, and the CC monorepo. Apps talk to it via the standard AWS S3 SDK. They do not know they are talking to a homelab box instead of us-east-1.

Why MinIO instead of AWS S3

The reasons are boring and practical.

First, cost. I have zero recurring storage bill for image uploads, PDF exports, and lesson archives. Those volumes are not huge, maybe a few hundred gigabytes total, but the monthly S3 bill was creeping toward $20 just for storage. The egress was worse. Every time a teacher downloaded a PDF, AWS charged me per gigabyte. For a small EdTech site where teachers stream PDFs to classrooms, that adds up fast.

Second, latency. MinIO runs on the same homelab as the apps. Backend reads are localhost-fast. When a lesson archive is generated and written to disk, the next request reads it in microseconds. No network hop to Virginia or Frankfurt.

Third, the API is a drop-in replacement. The apps set four environment variables: S3_ENDPOINT, S3_BUCKET, S3_ACCESS_KEY, S3_SECRET_KEY. That is it. The same code that talked to AWS S3 now talks to MinIO. No path-style fallback gymnastics, no custom middleware. The SDK handles the endpoint change transparently.
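As a sketch, the entire "migration" for one app is its environment block plus a smoke test. The hostname and credentials below are placeholders, not my real values; the aws CLI's --endpoint-url flag is what points the tooling at MinIO, and the language SDKs take an equivalent endpoint option:

```shell
# App-side configuration: the endpoint is the only thing that changes vs AWS S3.
export S3_ENDPOINT="http://minio.internal:9000"   # placeholder hostname
export S3_BUCKET="pmc-lessons"
export S3_ACCESS_KEY="app-scoped-key"             # placeholder credentials
export S3_SECRET_KEY="app-scoped-secret"

# Smoke test with the stock AWS CLI: same tool, different endpoint.
AWS_ACCESS_KEY_ID="$S3_ACCESS_KEY" AWS_SECRET_ACCESS_KEY="$S3_SECRET_KEY" \
  aws s3 ls "s3://$S3_BUCKET" --endpoint-url "$S3_ENDPOINT"
```

If the listing comes back, the app will work too, because it goes through the same SDK code path.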

The setup

I run MinIO inside a Coolify-managed Docker container. Coolify handles the container lifecycle, health checks, and logs. I do not touch the server directly unless something breaks.

Here is the deployment outline. First, I created a new service in Coolify using the minio/minio:latest image. I mounted /opt/drafted-minio/data:/data as a persistent volume. That is where all bucket data lives.

Second, I set MINIO_ROOT_USER and MINIO_ROOT_PASSWORD as environment variables. These are the admin credentials. I rotate them every few months. Coolify stores them encrypted in its internal database.

Third, I exposed two ports: 9000 for the S3 API and 9001 for the web console. The console is locked behind Cloudflare Access so only my IP can reach it. The API port is internal to the homelab network, not exposed to the internet.
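In compose terms, those first three steps collapse into one small service definition. This is a minimal sketch, not my exact Coolify config; the healthcheck uses MinIO's standard liveness endpoint:

```yaml
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - /opt/drafted-minio/data:/data        # all bucket data lives here
    ports:
      - "9000:9000"   # S3 API (homelab network only)
      - "9001:9001"   # web console (behind Cloudflare Access)
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
```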

Fourth, I created buckets. You can do this via the web console at port 9001, or via the mc command-line tool. I prefer mc because it is scriptable. I set an alias and created buckets for each app:

mc alias set local http://<minio-host>:9000 <root-user> <root-password>
mc mb local/pmc-lessons
mc mb local/dml-exports
mc mb local/pl-archives
mc mb local/cc-assets

Fifth, I created scoped access keys per app. Each key has read-write access to exactly one bucket. No key can touch another app's data. This is done through the web console under the Identity menu. Create a new user, assign a policy that restricts access to one bucket, then generate the access key and secret key.
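The same scoping is scriptable with mc admin if you would rather not click through the console. A sketch with hypothetical user and policy names, mirroring the console steps for one bucket:

```shell
# Policy granting read-write on exactly one bucket (hypothetical names throughout).
cat > pmc-rw.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::pmc-lessons",
        "arn:aws:s3:::pmc-lessons/*"
      ]
    }
  ]
}
EOF

mc admin policy create local pmc-rw pmc-rw.json      # register the policy
mc admin user add local pmc-app <generated-secret>   # create the app's user
mc admin policy attach local pmc-rw --user pmc-app   # scope the user to one bucket
```

The access key is the username and the secret is whatever you passed to user add, which makes rotation a one-liner.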

Sixth, I set the environment variables in each app's Coolify configuration. The endpoint is http://<minio-host>:9000. The bucket name matches the one I created. The access key and secret key are the scoped credentials.

That is the entire setup. It took about 30 minutes, most of which was waiting for the Docker image to pull.

The trade-offs

Self-hosting MinIO is not free in the total-cost-of-ownership sense. You pay in time and attention instead of dollars. Here are the trade-offs I live with.

No multi-region replication. The community edition of MinIO does not support bucket replication across instances. I run a single instance on one machine. If that machine dies, storage is down until I restore from backup. I handle this with daily backups to a NAS, which I will describe below.

No glacier tier or object lifecycle transitions. MinIO supports lifecycle rules for deletion and tiering, but on a single homelab there is no cold storage hardware. I cannot move old objects to a cheaper tier automatically. I just delete old exports manually when they accumulate.
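For what it's worth, expiration (as opposed to tier transition) does run on a single node, so the manual cleanup could be automated with a lifecycle rule. A sketch using mc's ilm subcommand, with my bucket name but an arbitrary retention window:

```shell
# Expire objects in the exports bucket 90 days after creation.
# Expiration rules apply locally; only *transition* rules need a remote tier.
mc ilm rule add local/dml-exports --expire-days 90

# Inspect what is configured.
mc ilm rule ls local/dml-exports
```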

IAM is simpler than AWS's. There are no STS roles, no assume-role policies, no cross-account access. It is users and access keys with inline policies. That is fine for a solo operation. I do not need a complex permission model for one person.

Uptime is on me. My homelab has been more reliable than AWS in terms of billing surprises, but less reliable in terms of raw uptime. Power outages, kernel updates, and the occasional Docker daemon restart happen. I accept that.

The backup story

Every Sunday at 2am, a systemd timer runs a script that mirrors the MinIO data directory to my QNAP NAS over the local network. The command is simple:

mc mirror --overwrite /opt/drafted-minio/data/ /mnt/nas/minio-backup/

The --overwrite flag keeps the NAS copy in sync when objects change. I deliberately do not use --watch here: that flag puts mc into a continuous-watch mode that never exits, which would leave a timer-driven job hanging instead of completing. The NAS has its own RAID configuration and an offsite snapshot to a remote location. So I have three copies of every bucket: the live MinIO data, the NAS backup, and the offsite snapshot.

This is not S3's 11 nines of durability. It is good enough for lesson archives and PDF exports that can be regenerated from the database if necessary. The database itself gets its own backup pipeline.
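The timer itself is two small units. A sketch with hypothetical unit and script names; the schedule matches the Sunday-2am run above, and Persistent=true catches runs missed while the box was powered off:

```ini
# /etc/systemd/system/minio-backup.service  (hypothetical name)
[Unit]
Description=Mirror MinIO data to the NAS

[Service]
Type=oneshot
ExecStart=/usr/local/bin/minio-backup.sh

# /etc/systemd/system/minio-backup.timer
[Unit]
Description=Weekly MinIO backup

[Timer]
OnCalendar=Sun *-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```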

When you should not self-host MinIO

There are cases where MinIO is the wrong answer.

If you serve public images at scale, use Cloudflare R2 or AWS S3 with a CDN in front. MinIO on a homelab cannot handle the bandwidth of thousands of concurrent image requests. Your server will saturate its uplink and users will see timeouts.

If you need compliance certifications like HIPAA or SOC 2 Type II, buy AWS. MinIO has enterprise features for that, but the community edition does not. And even the enterprise edition requires you to manage the infrastructure yourself. For a solo founder, the audit cost alone is not worth it.

If your team does not already operate Linux storage, do not start with MinIO. You will need to understand filesystem permissions, Docker volumes, network configuration, and backup strategies. That is a skillset, not a checkbox.

The boring correct answer

For a solo founder running a few SaaS sites where storage is bounded in the gigabyte-to-terabyte range and egress matters, MinIO is the boring correct answer. It costs nothing, it speaks the same API as AWS, and it runs on hardware you already have.

I set it up once and forgot about it. The backups work. The apps work. The AWS bill went to zero.

That is the kind of win that keeps me building.