If you already have a docker-compose.yml and you’re curious what it looks like to move to Stackie, the good news is that the path is short. Stackie ships a one-shot converter that takes your existing compose file and emits a native Stackie stack file you can run with stackie up. Most teams have it producing a working stack in under a minute.
The slightly longer answer — the one that actually matters — is what to do with the parts of your compose file that aren’t off-the-shelf services. Maybe you have a Dockerfile that builds your own application. Maybe you have a sidecar that lives next to a managed service. Maybe you’ve got some glue that’s only meaningful in a containerised world. We’ve watched teams take a few different routes here and they all work, so this post covers both halves: the easy bit (running the converter), and the more interesting bit (what to do about your own code).
Along the way, we’ll show off some of the things you get for free once you’ve made the jump — native debugging without crossing a VM boundary, a memory profile your laptop’s fans will thank you for, and an MCP integration that hands an AI agent a complete view of your stack without any glue code on your part.
The easy part: stackie convert
If you’ve installed Stackie, you already have the converter. Point it at your compose file:
stackie convert -f docker-compose.yml -o stackie-stack.yml
That’s it. The output is a stackie-stack.yml you can run with stackie up -f stackie-stack.yml. Each service in your compose file becomes a block in the new stack:
| Docker Compose | Stackie Stack |
|---|---|
| image: postgres:16 | stackie.postgres block, version: "16" |
| environment: | environment: overrides |
| ports: ["5432:5432"] | serves: { PORT: 5432 } |
| service map key (db:) | name: override on the block |
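Concretely, here's what a minimal db service might look like on each side of the conversion. The field names follow the mapping table above; the top-level layout of the stack file is our illustrative sketch, so treat the exact shape as an assumption rather than a schema reference:

```yaml
# docker-compose.yml (input)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    ports: ["5432:5432"]

# stackie-stack.yml (output) -- layout is illustrative
blocks:
  - stackie.postgres:
      name: db            # from the compose service key
      version: "16"       # from the image tag
      environment:
        POSTGRES_PASSWORD: dev
      serves:
        PORT: 5432        # from the published port
```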
Multiple -f flags merge compose files left-to-right, the same way docker compose does it, so if you’ve got a compose.yml plus a compose.override.yml you can pass both.
A handful of compose features don’t have a perfect home in the Stackie schema yet — command:, volumes:, restart:, healthcheck:, working_dir: — so the converter prints a warning and skips them. Stackie blocks ship with their own health checks, restart behaviour, and lifecycle defaults that usually replace what compose users wired up by hand, so most of the time the warnings turn out to be moot. Read the warnings carefully on your first conversion, and if you spot something you actually rely on, file an issue — those are the cases that drive the schema forward.
The interesting part: what about your code?
The converter handles the third-party services in your compose file — Postgres, Redis, Mailpit, the rest. What it doesn’t handle is the first-party part: your own application image, the one you docker build from a Dockerfile that lives next to your code.
We don’t support docker build directly, and we’re not planning to — the previous post gets into why, but the short version is that it depends on a Linux kernel and filesystem semantics that don’t fit Stackie’s native-process model. So if your compose file currently builds your own service, you have a choice to make. Two paths, both of which we’ve watched teams take successfully:
Path 1: Lean into serverless for new app code
If your team is already shipping production on a managed cloud, the question worth asking is whether the application code needs to live in a container during local development. JavaScript runs as source on every major serverless platform — AWS Lambda, Cloudflare Workers, Azure Functions, Google Cloud Functions, Vercel, Netlify, Supabase Edge Functions — and Python is first-class on most of them. Go and Rust have native runtimes on Lambda, Google Cloud Functions and Vercel; on the rest, you typically reach for WASM (Workers) or a custom-handler shim (Azure).
Where a container image is the deploy artefact, the build belongs in your CI or release pipeline rather than your laptop. The local inner loop runs as a native process under Stackie, and the image gets packaged once at the testing-and-release phase from the same source. You get breakpoints, profiling, hot reload — the works — without docker build ever entering your dev workflow, and the deploy path is a single platform-native command instead of a build-push-pull dance.
This is the path we’ve seen pay off the fastest for teams whose app is mostly request handlers and integrations. It also tends to play nicely with the managed-services framing the first post lays out: if your DB is already on Aurora and your storage is already on S3, your handlers want to live close to those services rather than inside your laptop’s container runtime.
Path 2: Push the build into your IaC layer
For teams where the application really does need to be a container in production — think a long-running daemon with native dependencies, or anything orchestrated by Kubernetes — the cleanest move is to drop the build: directive from docker-compose.yml and let Terraform (or Pulumi, or whatever you use) build the image as part of the deploy pipeline.
The pattern looks like:
- Local development runs your app as a native binary under Stackie. Iteration is fast, debugging is direct, and the rest of the stack is the same one your CI/CD will build against.
- The Dockerfile and image build live in your IaC repo (or a terraform/ directory in the same repo), close to the deployment that consumes them.
- CI builds and pushes the image. Production pulls it. Local development never sees a docker build again.
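The CI half of that pattern can be as small as this. A sketch assuming GitHub Actions — the registry, image name, and Dockerfile path are placeholders you'd swap for your own:

```yaml
# .github/workflows/release.yml -- illustrative only
name: release
on:
  push:
    tags: ["v*"]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The Dockerfile lives next to the IaC that consumes the image,
      # not next to the dev workflow.
      - run: docker build -t registry.example.com/myapp:${{ github.ref_name }} -f terraform/Dockerfile .
      - run: docker push registry.example.com/myapp:${{ github.ref_name }}
```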
This collapses two responsibilities — running this thing locally and packaging this thing for production — that always wanted to live in different places anyway. The first one becomes Stackie’s job, the second one becomes your IaC’s job, and the boundary between them is a lot less leaky than the one a hybrid docker-compose.yml was trying to maintain.
We’ve seen both of these paths land for different teams. There isn’t a “correct” answer — it depends on where you ship and what your code is shaped like. The point is that not supporting docker build in Stackie isn’t a missing feature; it’s a fork in the road, and there’s a clean way down either branch.
What you actually get out of the move
We covered the headline numbers in the introducing post — a single Postgres at ~15 MB, a full Supabase stack at ~200 MB, no VM, no hypervisor, no fans-up-the-instant-you-open-the-laptop. That story is the easy one to tell. Two more deserve a callout in the migration context, because they’re the ones you’ll feel as soon as you switch.
Native debugging, without crossing a VM
When your app runs as a native OS process, your debugger sees a real PID. No VM port forwarding, no remote-debug protocol over a virtualised network, no fragile sidecar that has to be wired up before breakpoints work. You attach lldb, gdb, your IDE’s debugger, your language’s profiler, and they connect to the actual process — same as any other binary on your machine.
In practical terms this means:
- VS Code’s Run and Debug “Attach to Process” picks up your service in the dropdown the second it starts.
- IntelliJ / RubyMine / PyCharm step debugging works without container-runtime plugins.
- Native profilers (Instruments on macOS, perf on Linux, ETW on Windows) can sample your service like any other process — including the kernel-side time, which is invisible from inside a container.
- File watchers and hot-reload that depended on filesystem events now actually receive events at the speed the OS produces them, instead of whatever VirtioFS managed to relay.
You stop reaching for docker exec to poke at things and you start treating the service the way you’d treat a native binary. Because that’s what it is.
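To make the "it's just a process" point concrete, here's a minimal sketch. We stand in a `sleep` for your service so the snippet is self-contained; in a real session you'd `pgrep` your service's binary instead, and the commented-out lines show the kind of native tooling that attaches directly:

```shell
# Stand-in for a running Stackie block, so there's a real PID to target.
sleep 60 &
pid=$!

# Because it's a plain OS process, host-native tooling attaches directly:
#   lldb -p "$pid"                        # interactive debugger attach
#   sudo perf record -p "$pid" -- sleep 5 # Linux: sample for 5 seconds
#   sudo perf report                      # then inspect the samples

# Even /proc works, where available -- e.g. counting open file descriptors:
ls /proc/"$pid"/fd 2>/dev/null | wc -l

kill "$pid"
echo "service ran as native pid $pid"
```

None of this needs a runtime plugin or a VM bridge; it's the same workflow you'd use on any other binary.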
MCP that already knows your whole stack
This is the one we were most excited to ship. The MCP server is built into the Stackie daemon, and the moment a stack starts, every block in it is exposed to your local AI agent — Claude, Codex, Gemini, anything that speaks MCP — with the full context of the codebase it lives next to.
There’s no plugin to install, no glue script to maintain, no separate tools server to keep in sync with your stack. Stackie auto-enrols itself with your local agent the first time you log in, and from then on the agent has live access to:
- Service logs for every running block, streaming in real time.
- Database schemas for every Postgres, MySQL, MongoDB, and so on in the stack.
- The block catalogue so the agent knows what’s available, what versions are supported, and what each block exposes.
- Daemon control — starting and stopping blocks, querying health, applying configuration changes — within the sandbox boundaries of each block.
What this looks like in a real session: you mention that the dashboard’s /users page has gotten noticeably slower over the last sprint. The agent attaches a native sampler to the running Node process, sees the wall time piling up in database I/O rather than CPU, tails the postgres log to confirm the handler is firing 200 small queries per request instead of one, opens the source to find the .map(async (id) => fetch(...)) that snuck in during a refactor, suggests collapsing it to a single WHERE id IN (...), applies the change in the running stack, re-samples to confirm the regression is gone, and offers you a PR. All in one session. All without you setting up a single thing beyond stackie up.
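The before/after of that fix is a classic N+1 collapse. A self-contained sketch — `db` here is a stub standing in for your Postgres client, with a call counter so the difference is visible:

```javascript
// Stub database client: counts round trips, returns one row per id.
const db = {
  calls: 0,
  async query(sql, params) {
    this.calls += 1;
    const ids = Array.isArray(params[0]) ? params[0] : [params[0]];
    return ids.map((id) => ({ id, name: `user-${id}` }));
  },
};

// Before: the refactor's .map(async (id) => ...) fires one query per id,
// so 200 ids means 200 round trips per request.
async function getUsersSlow(ids) {
  return Promise.all(
    ids.map(
      async (id) =>
        (await db.query("SELECT * FROM users WHERE id = $1", [id]))[0]
    )
  );
}

// After: a single query -- WHERE id = ANY($1) is the parameterised
// Postgres form of WHERE id IN (...).
async function getUsersFast(ids) {
  return db.query("SELECT * FROM users WHERE id = ANY($1)", [ids]);
}

module.exports = { db, getUsersSlow, getUsersFast };
```

The wall time the sampler saw wasn't CPU — it was 200 sequential-ish network round trips that one query replaces.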
The reason this works so much better than it would over a Docker socket is twofold. The agent isn’t reasoning about a container — it’s reasoning about your stack: the Postgres schema and your application code sit in the same view, and bridging them isn’t a tooling problem, it’s just a query. And because the agent runs on the same host kernel as your services, it can actually see them. On macOS and Windows, Docker Desktop runs containers inside a Linux VM that’s opaque to the host — every container process collapses into a single line in Activity Monitor or Task Manager, native profilers like Instruments and ETW can’t trace into the VM, and host debuggers need a runtime plugin to attach to anything inside it. An MCP agent pointed at a Docker socket only gets what the daemon chooses to surface; the OS underneath stays dark. With Stackie every block is a normal OS process, so the agent can attach a debugger, sample a stack, count open file descriptors, or watch a syscall — anything you’d do for any other binary on your machine.
We covered the headline of this on the splash page under Agent Ready; the docs go into detail on tool surface, sandboxing, and which agents enrol automatically. If you’ve been waiting for the AI-pair-programming experience to catch up to the marketing, this is a meaningful step forward, and it costs you exactly zero extra setup once you’re on Stackie.
When you should not migrate
We’d be undermining ourselves if we pretended this was always the right move. Skip Stackie (or stay hybrid via Docker compatibility) if any of these apply:
- Your local stack depends on container internals — privileged containers, custom kernel modules, raw cgroup tweaks. Stackie doesn’t run those.
- Your team’s production deployment is built on Kubernetes manifests that you also use locally for parity, and the parity matters more than the iteration speed.
- The service you most need locally has no Stackie block yet and the converter would skip it. File an issue and keep that piece on Docker compatibility until your block lands.
Beyond those, the answer for most teams is: try the converter, run the resulting stack, and see how it feels. The cost of finding out is stackie convert -f docker-compose.yml.
Follow along on GitHub or subscribe to the RSS feed for what comes next.