Stackie is a development stack orchestrator that runs services natively on Windows, macOS, and Linux without a container runtime, a virtual machine, or a hypervisor. You describe your stack — Postgres, Supabase, Redis, whatever you need locally — in a small YAML file, check it in alongside your code, and Stackie installs the dependencies, starts the services in the right order, and gives you a tray icon and a dashboard to monitor them.
This post explains who Stackie is for, why we built it instead of reaching for Docker, and what it looks like in practice.
Key facts

- What it is: a native development stack orchestrator (think `docker compose`, but without Docker).
- Where it runs: Windows, macOS, and Linux. Same YAML stack file on all three.
- What it replaces: Docker Desktop and `docker compose` for local development of services you don’t operate yourself in production.
- Memory footprint: ~15 MB for an idle Postgres instance; ~200 MB for a fully featured Supabase stack.
- Install: `curl -fsSL https://stackie.dev/install.sh | sh` (see the install page for Windows).
Who Stackie is for
Stackie is built for developers and teams who lean on modern managed services but don’t operate those services themselves in the cloud.
Most application teams today need a Postgres database — but many don’t run their own Postgres in production. They reach for a managed service like Amazon Aurora, Supabase, or Neon. They need object storage, but they use S3 or R2. They need a queue, but they use SQS or Upstash. Unless infrastructure is your product, every hour you spend running it yourself is an hour taken from the thing your customers actually pay you for.
Local development, however, still has to look like the real thing. You need a Postgres on your laptop to run migrations against. You need Supabase running locally to debug an auth flow. The default answer for the last decade has been `docker compose up`, which spins up a Linux VM, a hypervisor, a daemon, and several gigabytes of overhead — all to host services you have no intention of operating yourself.
Stackie fits this gap. Here’s a complete stack file with a Postgres database and a Mailpit SMTP server for local email testing:
```yaml
my-app:
  blocks:
    postgres:
      vars:
        USER: "myapp"
        DB: "myapp"
        PASSWORD: "dev"
      serves:
        PORT: 5432
    mailpit:
      serves:
        SMTP: 1025
        HTTP: 8025
```
The top-level key is the stack name. Each block is referenced by its short name (Stackie automatically resolves `postgres` to `stackie.postgres`), and the named ports under `serves` are the ones the block exposes — `PORT` for Postgres, `SMTP` and `HTTP` for Mailpit’s mail receiver and web UI.
Check it in. Hand it to a teammate on Windows, Linux, or macOS. They run `stackie up` and both services come up natively, in seconds, with no container runtime in sight.
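Once the stack is up, the connection details fall straight out of the stack file. A minimal sketch, assuming Stackie binds each named port on localhost (check the dashboard for the actual endpoints on your machine):

```shell
# Derived from the stack file above; assumes Stackie exposes the named
# ports on localhost. Adjust if your setup differs.
export DATABASE_URL="postgres://myapp:dev@localhost:5432/myapp"

# Point your app's SMTP config at Mailpit and browse caught mail:
export SMTP_ADDR="localhost:1025"            # Mailpit SMTP listener
export MAILPIT_UI="http://localhost:8025"    # Mailpit web UI

echo "$DATABASE_URL"
```

From here, `psql "$DATABASE_URL"` (or your migration tool of choice) talks to the local Postgres exactly as it would to the managed one.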
Why wouldn’t we use Docker for this?
Docker is the hammer that makes everything look like a nail — and to be fair, it’s a very good hammer. As an infrastructure tool, it’s the proven Swiss Army knife for shipping the same workload to many machines without the “it works on my machine” problem. That portability is what containers were built for, and they are excellent at it.
The trouble is that local development isn’t quite the same problem.
On Linux, Docker is fairly cheap. It can run rootless, it doesn’t need a virtual machine, and the kernel features it relies on — namespaces, cgroups, overlayfs — are available natively.
On macOS and Windows the story is very different. Both platforms run containers inside a Linux VM managed by a hypervisor (Apple’s Virtualization framework on macOS; WSL2 / Hyper-V on Windows), and that VM has a long list of side effects.
1. Filesystem virtualisation
Containers expect a Linux filesystem. Your laptop has APFS or NTFS. Bridging that gap means shuttling every file read and write across a virtualised filesystem boundary, and the cost shows up in everything from `npm install` to test suites that touch a lot of files.
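One way to feel this cost is a micro-benchmark that touches many small files, the access pattern of `npm install` and file-heavy test suites. This sketch only times the native case; run the same loop inside a bind-mounted container directory and compare the two numbers:

```shell
# Create 2,000 tiny files and time it. Natively this is fast; across a
# virtualised filesystem boundary, every write pays the bridging cost.
dir=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt 2000 ]; do
  echo "x" > "$dir/f$i"
  i=$((i + 1))
done
end=$(date +%s)
echo "created $i files in $((end - start))s"
rm -rf "$dir"
```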
Docker has rewritten the macOS file-sharing layer multiple times — from osxfs to gRPC-FUSE to VirtioFS — precisely because each generation has been a known performance bottleneck. Docker’s own synchronised file shares documentation describes the feature as one that “improves bind mount performance and reduces CPU load on hosts and inside the VM by avoiding the overhead of caching shared files in memory and constantly polling for changes.”
That overhead is what you’re paying the rest of the time. Stackie doesn’t have it. A Stackie-managed service reads the filesystem your operating system already exposes, at native speed.
2. VM-reserved memory
A hypervisor needs memory of its own. When Docker Desktop or WSL2 starts, the host machine’s RAM is effectively split into two pools — one for your applications, browser, and editor, and one ring-fenced for the Linux VM that runs your containers. The VM is allocated a budget, and that budget sits outside the host’s reach.
Modern hypervisors do try to soften this with memory ballooning — a technique where the guest VM voluntarily gives memory back to the host when it isn’t using it, and asks for more when it is. The virtio-balloon specification describes this as cooperative reclamation: the host requests pages, the guest suggests which to release.
The catch on Linux guests is that the kernel is, by design, an aggressive memory consumer. It happily fills any unused RAM with page cache, dentry cache, and inode cache — because from the kernel’s point of view, that memory is doing useful work. The kernel has no idea the host wants the memory back. Balloon drivers can claw some of it free, but reclamation is slow, lossy, and frequently partial. WSL2 has shipped multiple iterations of an experimental memory reclaim mode precisely because the default behaviour leaves Windows starved for RAM long after the workload has finished.
The practical result is familiar: you start a single small service, your laptop fans spin up, and 4 GB of RAM has quietly disappeared into a VM that is mostly idle.
Stackie services are just OS processes. When they free memory, the operating system sees it instantly. There is no guest kernel in the middle deciding to keep the page cache warm. You can watch this happen live: open Activity Monitor on macOS or Task Manager on Windows, then start and stop a Stackie service — the memory is allocated and returned to the host in real time. Run the same experiment on Docker Desktop and the VM holds onto its allocation long after the workload has finished.
3. Battery drain
Hypervisors don’t sleep the way native processes do. The VM is always running a kernel, a scheduler, and a set of background services, even when nothing in your stack is doing real work. On macOS, Docker Desktop is one of the most consistent items in `top`’s energy column; on Windows, `Vmmem` shows up the same way. Apple’s own Activity Monitor documentation calls out the Energy Impact and Avg Energy Impact columns as the right way to see this, and a containerised dev stack is rarely a kind neighbour to either number.
Stackie services are processes. When they’re idle, the operating system idles them. When you close the lid, they sleep. There is no kernel underneath them keeping the lights on.
What this looks like in practice
The numbers are the easiest way to make the point.
- A single Postgres database started with Stackie comes up in about a second and sits at roughly 15 MB of resident memory while idle.
- A full Supabase stack — Postgres, GoTrue, PostgREST, Realtime, Storage, Studio, the works — peaks at around 800 MB during startup as each service initialises, then settles to about 200 MB with every feature enabled.
- The same Supabase stack in Docker Desktop typically reserves 2–4 GB for the Linux VM before any service has even started, and pays the filesystem-virtualisation tax on every migration, seed, and hot reload.
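These numbers are easy to check yourself. A rough sketch using `ps` to sum resident memory for a service’s processes (column output and process names vary by platform, so treat it as an approximation; it prints 0 MB when no matching process is running):

```shell
# Sum the resident set size (RSS, reported by ps in KB) of every process
# whose command name contains "postgres", and print the total in MB.
rss=$(ps axo rss,comm | awk '/postgres/ { sum += $1 } END { printf "%.0f", sum / 1024 }')
echo "postgres RSS: ${rss} MB"
```

The same one-liner, pointed at `Vmmem` on Windows or the Docker Desktop VM process on macOS, makes the comparison concrete on your own machine.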
We’ve been running this on the new MacBook Neo base model — 8 GB of RAM, no swap pressure, no fans, no problem — with the full Supabase stack at 200 MB and a Claude Code agent humming alongside at 600 MB. There’s still over 7 GB to absorb the OS, the editor, the browser, and everything else. It’s a world away from the “close everything else before you start the dev environment” dance that Docker Desktop tends to demand on the same hardware.
Screenshot placeholder — will be added prior to publication.
One job, done well
We’re working hard to solve one core problem well. Stackie is a simple, unobtrusive orchestrator that lives in your tray and manages your development stacks quickly and without fuss.
If your team’s services already run on someone else’s infrastructure in production, your laptop probably doesn’t need to pretend to be a tiny data centre to develop against them.
Follow along on GitHub or subscribe to the RSS feed for what comes next.