Running Supabase on Stackie

coderbants

We’re shipping our first built-in stack, and it’s the one most of you have been asking for: Supabase. The whole suite — Postgres, Auth, REST, Realtime, Storage, Studio, Mailpit, the imgproxy, the lot — comes up with one command:

stackie up -s supabase

That’s it. No docker-compose.yml, no image pulls, no warning lights from Docker Desktop about your fans. Postgres, GoTrue, PostgREST, Realtime, Storage, the Studio dashboard, Mailpit and imgproxy all start as native processes on your machine, wired together exactly the way npx supabase start would wire them — but without the Linux VM in the middle.

Why this is the first stack

We’ve been working with Supabase locally for a long time. It’s the canonical example of a managed-service shape of dependency: most of you don’t run your own auth server, your own object storage, or your own CDN-backed image proxy in production — those are someone else’s problem on Supabase Cloud. But locally, you still need the same set of services humming along, because debugging an auth flow against a stub doesn’t tell you anything useful.

Running npx supabase start on Docker Desktop has been the default, and it works. But Supabase was also our most-requested service, which made it the place Stackie could deliver the biggest win.

What you get

The supabase stack mirrors the surface of npx supabase start — same services, same default ports, same auth flow, same Studio dashboard. Routing into the stack happens through a single API gateway on localhost:54321 and the rest of the services live on the standard Supabase ports.

| Service | Default port | Behind the gateway |
| --- | --- | --- |
| API Gateway | 54321 | (entrypoint) |
| Postgres | 54322 | direct |
| Studio (dashboard) | 54323 | `/*` |
| GoTrue (auth) | 54332 | `/auth/v1/*` |
| PostgREST | 54333 | `/rest/v1/*` |
| Realtime | 54334 | `/realtime/v1/*` |
| Storage | 54335 | `/storage/v1/*` |
| Postgres Meta | 54336 | (Studio uses this) |
| Mailpit (web / SMTP) | 54324 / 54325 | direct |
| imgproxy | 54338 | (Storage uses this) |

If you’ve used Supabase locally before, this is the layout you’d expect. The shared JWT secret, anon key, service-role key and Postgres password all default to the documented Supabase values, and override cleanly via --var flags on stackie up when you want non-defaults.
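For example, to bring the stack up with non-default credentials (the variable names below are illustrative, not confirmed — consult the stack's documentation for the exact keys):

```shell
# Override the stack's default secrets at startup.
# NOTE: postgres_password and jwt_secret are illustrative key names.
stackie up -s supabase \
  --var postgres_password=local-only-secret \
  --var jwt_secret="$(openssl rand -hex 32)"
```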

Footprint

You can guess where this is going if you’ve read the introducing post, but the full Supabase numbers are worth printing here. Idle, with every Supabase feature enabled:

|  | Stackie | Docker Desktop |
| --- | --- | --- |
| Peak memory during startup | ~800 MB | 2–4 GB (VM-reserved) |
| Idle resident memory | ~200 MB | 2–4 GB (VM-reserved) |
| Cold start | seconds | VM boot |
| Filesystem (migrations, seed) | native | VirtioFS / 9P translation |
| Battery and fans | 🏖️ | 😕 |

Supabase starts at around 800 MB in our testing, quickly settling to a baseline of 200 MB across all the Supabase services running natively, end-to-end, with the dashboard open.

The raw numbers only tell part of the story. Because Stackie blocks are native processes, they’re managed directly by your host’s scheduler: memory is reclaimed immediately when freed, and filesystem and network access don’t transit a proxy layer. The headroom this opens up, even on a lower-spec laptop, is significant. For macOS, our reference machine is a base-model Macbook Neo with 8 GB RAM.

You can see the full Supabase stack running as a series of native processes in the example above, alongside a fairly context-heavy agent session in Claude Code — and the machine isn’t under pressure at all.

Why Kong became Traefik

If you’ve inspected npx supabase start you’ll know it routes its API surface through Kong. We’ve opted to use Traefik as the API gateway instead.

The reason is platform support. Traefik publishes first-class binaries for macOS (arm64 + x86_64), Linux (arm64 + x86_64) and Windows (x86_64), and runs predictably on each.

Kong’s local development story leans heavily on its container image — fine for Docker, problematic for native processes on three platforms across multiple architectures.

Functionally the swap is invisible. The same paths route to the same services:

/auth/v1/*     -> GoTrue
/rest/v1/*     -> PostgREST
/realtime/v1/* -> Realtime  (with Host header rewriting for tenant routing)
/storage/v1/*  -> Storage
/*             -> Studio    (catch-all)
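As a concrete sketch, two of those routes could be expressed as Traefik file-provider configuration along these lines. The router names and upstream ports follow the table above; treat this as illustrative, not the stack’s actual generated config:

```yaml
# Illustrative Traefik (v2+) dynamic configuration — not the stack's
# real config. When rules overlap, the higher priority wins.
http:
  routers:
    auth:
      rule: "PathPrefix(`/auth/v1`)"
      priority: 10
      service: gotrue
    studio:
      rule: "PathPrefix(`/`)"   # catch-all, lowest priority
      priority: 1
      service: studio
  services:
    gotrue:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:54332"
    studio:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:54323"
```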

Path stripping, host-header rewriting for Realtime tenant resolution, and route priorities all match what the Supabase Kong configuration produces, so client code that targets the stack URL keeps working unchanged.
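The core of that behaviour — the highest-priority matching prefix wins, and the matched prefix is stripped before proxying — can be sketched as a toy matcher. This is pure illustration; neither Traefik nor Stackie implements routing this way:

```python
# Toy sketch of priority-based PathPrefix routing with prefix stripping.
# Illustrative only — not Traefik's or Stackie's actual implementation.

ROUTES = [
    # (prefix, upstream service, priority) — higher priority wins.
    ("/auth/v1",     "gotrue",    10),
    ("/rest/v1",     "postgrest", 10),
    ("/realtime/v1", "realtime",  10),
    ("/storage/v1",  "storage",   10),
    ("/",            "studio",     1),   # catch-all
]

def route(path: str) -> tuple[str, str]:
    """Return (service, upstream_path) for an incoming request path."""
    matches = [r for r in ROUTES if path.startswith(r[0])]
    prefix, service, _ = max(matches, key=lambda r: r[2])
    if prefix == "/":
        return service, path          # catch-all strips nothing
    # Strip the matched prefix, as the gateway does before proxying.
    return service, path[len(prefix):] or "/"

print(route("/auth/v1/token"))   # ('gotrue', '/token')
print(route("/dashboard"))       # ('studio', '/dashboard')
```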

How the stack ships across platforms

The other piece that makes this approach possible is Supercache. Every block in the Supabase stack — Postgres 18, Erlang/OTP for Realtime, the Studio Next.js bundle, Storage’s Node.js bundle, imgproxy, Mailpit — is pre-compiled per platform and optionally served from our edge network.

When you run stackie up -s supabase for the first time, what comes down the wire is a set of platform-native artifacts, not source for your laptop to compile.

That’s the difference between waiting 30 seconds for a stack to come up and waiting 10 minutes for builds when versions change.

Realtime’s Elixir release alone would compile from source for 20+ minutes on a clean machine; from Supercache it lands as a pre-built tarball in seconds. The same goes for the rest of the components. We do that build work once from the original source, index the built archives for each platform, and co-ordinate with Stackie to install the packages from the edge.

Here be dragons!

Our support for Supabase is early and still developing. We’re keen to hear your feedback, and we hope you’ll file any bugs you find on our issue tracker.

What’s next?

Supabase is the first built-in stack, but not the last. More stacks are in the works, and we think the community will be excited about them. When we’re ready to announce, this blog will be the canonical place to hear about it.

stackie up -s supabase

That’s the shortest dev environment you’ll set up today. If you’ve not yet tried Stackie, this is a good moment to jump in.

Follow along on GitHub or subscribe to the RSS feed for what comes next.