# Stop Using Docker Desktop: Faster Alternatives | Level Up Coding

Gombloh

- Docker Desktop uses 800MB disk and 2-4GB RAM at idle — native alternatives boot in under 2 seconds vs 8-12 seconds for docker-compose.
- Four lighter setups: virtualenv + Homebrew (simplest), pyenv + direnv (auto-activation), Nix (reproducible without VMs), systemd user services (Linux only).
- Docker's macOS filesystem is 4-6x slower than native for small file writes due to VirtioFS overhead.
- Keep Docker for production parity and multi-service integration tests, but use native tools for solo dev work to save memory and boost reload speed.

## The 800MB Download You Probably Don't Need

Docker Desktop takes 800MB to install and uses 2-4GB of RAM at idle. For local development, that's often massive overkill. I'm not saying Docker is bad — it's great for production deploys and orchestration at scale. But for running a Python web server or a Postgres database on your laptop? There are lighter, faster options that boot in milliseconds instead of waiting for the Docker daemon to wake up.

This post walks through four setups I've used to replace Docker locally: virtualenv + system packages, pyenv + direnv, Nix, and plain systemd services. Each one boots faster, uses less memory, and gives you tighter control over what's actually running. The tradeoff is you lose some reproducibility across machines, but for solo dev work or small teams on similar OSes, it's usually worth it.

## Why Docker Feels Heavy for Local Work

Docker runs a Linux VM on macOS and Windows.

That VM has overhead: disk I/O goes through a network filesystem (VirtioFS or gRPC-FUSE), volumes can be slow, and every container shares the same VM kernel but still allocates its own memory overhead. On my M1 MacBook, `docker-compose up` for a typical FastAPI + Postgres stack takes about 8-12 seconds from cold start. The same setup using Homebrew Postgres and a local virtualenv? 2 seconds, and that includes activating the venv and running `uvicorn`.

The real kicker is RAM. Docker Desktop reserves memory even when no containers are running.

I’ve seen it hover at 1.5-2GB idle. For a 16GB laptop shared with Chrome, Slack, and VS Code, that’s a noticeable chunk. But the biggest issue isn’t speed or memory — it’s the mental model. Docker abstracts away the filesystem, the network, and the process tree. When something breaks (port conflicts, volume permissions, networking quirks), you’re debugging two layers: your app and Docker’s virtualization.
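If you want to sanity-check the cold-start comparison on your own machine, a rough harness like this is enough. The command timed below is a stand-in (a bare Python interpreter launch); swap in whatever actually boots your stack, e.g. `["docker-compose", "up", "-d"]` or your native start script.

```python
import subprocess
import sys
import time

def time_command(cmd):
    # Run a startup command to completion and return elapsed wall time
    # in seconds. `cmd` is whatever boots your stack.
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Stand-in workload: launching a bare Python interpreter.
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"cold start: {elapsed:.2f}s")
```

Run it a few times and take the median; the first run after reboot is the number that matters for the daemon-wake comparison.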

## Alternative 1: virtualenv + System Packages (The Obvious One)

This is the simplest replacement: install dependencies with your system package manager, use Python virtualenvs for application code.

```bash
# Local dev without Docker
# Install Postgres via Homebrew (macOS) or apt (Linux)
# brew install postgresql@15
# brew services start postgresql@15

# Python dependencies in a venv
python3 -m venv venv
source venv/bin/activate
pip install fastapi uvicorn psycopg2-binary

# Run the app
uvicorn main:app --reload
```

This starts Postgres in ~100ms. Uvicorn starts in ~200ms. Total boot time: under half a second.

The downside? You're now managing system-level dependencies. If you need Postgres 14 for one project and Postgres 15 for another, you'll need to juggle multiple Homebrew services or manually switch pg_ctl data directories. It's doable, but friction builds up.

One trick: `brew services` makes this easier. You can start/stop services per-project:

```bash
brew services start postgresql@15
brew services stop postgresql@14
```

Not perfect, but it works for 2-3 concurrent projects. Beyond that, you're better off with the next approach.
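If you flip between versions often, the juggling above is easy to script. A hypothetical sketch (`switch_postgres` is my name, not a brew feature; the `run` parameter is injectable so the logic can be exercised without brew installed):

```python
import subprocess

# Hypothetical helper for juggling per-project Postgres versions with
# Homebrew services.
def switch_postgres(start="postgresql@15", stop="postgresql@14",
                    run=subprocess.run):
    # Stopping a service that isn't running is harmless, so ignore the
    # exit code there; starting the wanted version must succeed.
    run(["brew", "services", "stop", stop], check=False)
    run(["brew", "services", "start", start], check=True)
```

Drop a call to it in each project's bootstrap script and the right database version comes up when you start work.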

## Alternative 2: pyenv + direnv (Automatic Environment Switching)

pyenv manages Python versions. direnv auto-activates environments when you cd into a directory. Together, they replace Docker's "enter the container" workflow with something lighter.

Here's the setup:

```bash
# Install pyenv and direnv
brew install pyenv direnv

# Add to your shell config (~/.zshrc or ~/.bashrc)
eval "$(pyenv init --path)"
eval "$(direnv hook zsh)"

# In your project directory
pyenv install 3.11.5
pyenv local 3.11.5
echo 'layout python python3.11' > .envrc
direnv allow

# Now every time you cd into this directory:
# - direnv activates the venv automatically
# - pyenv ensures you're on Python 3.11.5
```

This gives you per-directory Python versions and auto-activated virtualenvs. No `source venv/bin/activate` needed.

For databases, you can extend .envrc to export connection strings:

```bash
# .envrc
layout python python3.11
export DATABASE_URL="postgresql://localhost/mydb"
export REDIS_URL="redis://localhost:6379"
```

Now your app reads the right config the moment you enter the directory. It's not as isolated as Docker (you're still using system Postgres), but it's fast and transparent.

One gotcha: direnv can be slow if your `.envrc` does too much work (e.g., installing packages). Keep it minimal — just env vars and layout commands.
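On the app side, reading that config is just environment-variable lookups. A minimal sketch (the `load_settings` helper and its fallback defaults are illustrative, not part of direnv):

```python
import os

# Read the connection strings that direnv exported via .envrc,
# falling back to the same local defaults when they are unset.
def load_settings(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgresql://localhost/mydb"),
        "redis_url": env.get("REDIS_URL", "redis://localhost:6379"),
    }

settings = load_settings()
```

Because the variables are plain process environment, the same code works unchanged whether the values came from direnv, a shell export, or a container runtime later on.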

## Alternative 3: Nix (Reproducible Without VMs)

Nix is a package manager that installs software into isolated, hashed directories. Each project gets its own dependency closure, but they all run natively (no VM overhead). Here's a minimal shell.nix for a FastAPI + Postgres project:

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [
    pkgs.python311
    pkgs.python311Packages.fastapi
    pkgs.python311Packages.uvicorn
    pkgs.postgresql_15
  ];
  shellHook = ''
    export PGDATA=$PWD/postgres_data
    if [ ! -d "$PGDATA" ]; then
      initdb --auth=trust
    fi
    pg_ctl -l $PWD/postgres.log start || true
    echo "Postgres running at localhost:5432"
  '';
}
```

Run `nix-shell`, and it:

1. Downloads Python 3.11, FastAPI, Uvicorn, and Postgres 15 (cached if you've used them before)
2. Initializes a local Postgres data directory
3. Starts Postgres in the background
4. Drops you into a shell where everything is in `$PATH`

Exit the shell, and Postgres stops. Re-enter, and it resumes from the same data directory.

It feels like a container, but it's just native processes with isolated `$PATH` and `$PGDATA`. The magic is Nix's hash-based store. If two projects need Python 3.11.5, they share the same /nix/store/abc123-python-3.11.5 directory. But if one project needs Postgres 14 and another needs Postgres 15, they get separate binaries — no conflicts.

Downside? Nix has a steep learning curve. The Nix language is functional and lazy-evaluated, and error messages can be cryptic. I spent two hours debugging a missing `pkgs.` prefix once.

But for teams where "it works on my machine" is a real problem, Nix is the closest thing to Docker's reproducibility without the VM overhead.

## Alternative 4: systemd User Services (Linux Only)

If you're on Linux, systemd can manage per-user services. This is my favorite option for long-running dev databases.

Here's a user service for Postgres:

```ini
# ~/.config/systemd/user/postgres-dev.service
[Unit]
Description=Postgres Dev Instance

[Service]
Type=forking
ExecStart=/usr/bin/pg_ctl start -D %h/postgres_data -l %h/postgres.log
ExecStop=/usr/bin/pg_ctl stop -D %h/postgres_data
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it:

```bash
systemctl --user enable postgres-dev
systemctl --user start postgres-dev
```

Now Postgres starts when you log in (or at boot, if you enable lingering with `loginctl enable-linger`) and restarts if it crashes. No Docker daemon, no manual pg_ctl commands.

You can do this for Redis, Elasticsearch, or any other service:

```ini
# ~/.config/systemd/user/redis-dev.service
[Unit]
Description=Redis Dev Instance

[Service]
ExecStart=/usr/bin/redis-server --port 6380 --dir %h/redis_data
Restart=on-failure

[Install]
WantedBy=default.target
```

This approach only works on Linux, but if you're already there, it's the most "native" option. Services integrate with `journalctl`, logs go to the systemd journal, and you get automatic restarts for free.

## When You Still Need Docker

I'm not advocating for deleting Docker entirely.

There are cases where it's still the right tool:

- Production parity: If your app runs in Kubernetes or ECS, testing locally in Docker catches container-specific bugs early (missing ENV vars, wrong base image, etc.).
- Multi-service integration tests: Running 5+ services (app, db, cache, queue, mock S3) is easier with docker-compose than juggling systemd/Nix.
- Team onboarding: `docker-compose up` is simpler to explain than "install Nix, run `nix-shell`, then configure direnv".

- macOS/Windows development for Linux targets: If you're building for a Linux server but coding on a Mac, Docker gives you a Linux environment without dual-booting.

But for solo dev work, or if everyone on your team uses similar OSes (all macOS, or all Ubuntu), the alternatives above are faster and lighter.

## Memory Comparison (Actual Numbers)

I ran the same FastAPI + Postgres + Redis stack on my M1 MacBook (16GB RAM) using four approaches. Docker's idle RAM includes the VM overhead even when no containers are running.

The native approaches only count Postgres, Redis, and Python processes. Cold start measures `docker-compose up` vs. starting services + activating venv + running uvicorn. Hot reload is the time from saving a Python file to seeing the change in the browser. The Nix cold start is slower because it checks hashes and symlinks binaries into `$PATH`. But once everything's cached, subsequent runs are fast. One caveat: these numbers are on a 16GB M1. On a 32GB Intel Mac, Docker's overhead might be less noticeable.

But on an 8GB Linux laptop, saving 1.5GB of RAM is the difference between smooth multitasking and constant swapping.

## The Filesystem Speed Issue

Docker on macOS uses a network filesystem to share volumes between the host and the VM. This makes file I/O slower than native operations.

Here's a quick benchmark writing 10,000 small JSON files:

```python
import json
import time
from pathlib import Path

# Make sure the target directory exists before writing
Path("data").mkdir(exist_ok=True)

start = time.time()
for i in range(10000):
    Path(f"data/file_{i}.json").write_text(json.dumps({"id": i}))
print(f"Took {time.time() - start:.2f}s")
```

On macOS Big Sur (M1):

- Native filesystem: 2.1s
- Docker volume (VirtioFS): 9.8s
- Docker bind mount: 12.3s

That's 4-6x slower. For ML training with lots of small checkpoint writes, or web scraping saving thousands of HTML files, this adds up. Linux doesn't have this problem — Docker on Linux uses native filesystem drivers.

But on macOS and Windows, it's a real bottleneck.

## Configuration Management Without Dockerfiles

One thing Docker does well: the Dockerfile is a declarative recipe for "here's how to build this environment". Without Docker, you need an equivalent. For simple projects, a README.md is enough:

```markdown
## Setup

1. Install Python 3.11: `brew install python@3.11`
2. Install Postgres: `brew install postgresql@15`
3. Start Postgres: `brew services start postgresql@15`
4. Install dependencies: `pip install -r requirements.txt`
5. Run: `uvicorn main:app --reload`
```

For more complex setups, I use a Makefile:

```makefile
.PHONY: setup
setup:
	brew install python@3.11 postgresql@15 redis
	brew services start postgresql@15 redis
	python3.11 -m venv venv
	./venv/bin/pip install -r requirements.txt

.PHONY: run
run:
	./venv/bin/uvicorn main:app --reload

.PHONY: clean
clean:
	brew services stop postgresql@15 redis
	rm -rf venv
```

Now `make setup && make run` replaces `docker-compose up`. It's not as portable (assumes Homebrew on macOS), but for a team that's already on macOS, it's simpler. For Nix users, the shell.nix file is the declarative recipe.

Anyone with Nix installed can run `nix-shell` and get the exact same environment.

## What About Dependency Hell?

The classic argument for Docker: "it avoids dependency conflicts". If Project A needs Python 3.9 and Project B needs Python 3.11, containers keep them separate.

But pyenv + virtualenv does the same thing:

```bash
cd project-a
pyenv local 3.9.16
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

cd ../project-b
pyenv local 3.11.5
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Each project gets its own Python version and isolated package directory. No conflicts.

The one case where Docker still wins: system-level dependencies. If Project A needs libpq-dev version 12 and Project B needs version 14, you're stuck. Nix solves this (each package gets its own library version), but pyenv doesn't.

In practice, I rarely hit this. Most Python/Node/Go projects don’t have deep system-level dependency conflicts. And when they do, I either use Nix or just run one project in Docker while keeping the rest native.
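The per-project isolation that virtualenvs give you is visible from inside Python itself: each venv resolves third-party packages into its own site-packages directory under its own prefix, so two projects never see each other's installs. A quick way to confirm which directory the current interpreter uses:

```python
import sys
import sysconfig

# Inside an activated venv, `purelib` is that venv's own site-packages,
# which lives under the interpreter's prefix. A different venv reports
# a different path, which is why installs never collide.
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```

Run this from two different projects' venvs and you'll see two distinct paths.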

## Port Conflicts Are Easier to Debug Natively

With Docker, port conflicts look like this:

```
ERROR: for postgres  Cannot start service postgres: driver failed programming external connectivity on endpoint postgres_1: Bind for 0.0.0.0:5432 failed: port is already allocated
```

You check `docker ps`, see nothing using port 5432, and then realize there's a system Postgres running outside Docker. Or another container from a different project. Debugging this requires checking both Docker's network namespace and the host.
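Either way, the first question is "is anything listening on that port?", and you can answer it from Python without caring whether the listener is a container or a native process. A minimal sketch (`port_in_use` is an illustrative helper, not a library function):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    # Attempt a TCP connect; a successful connect means something is
    # already listening on that port, Docker-published or native.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

Calling `port_in_use(5432)` before starting your dev database turns a cryptic startup failure into an obvious yes/no answer.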

With native processes, it's just:

```bash
lsof -i :5432
# COMMAND  PID  USER  FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
# postgres 823  user  5u  IPv6  0x1234  0t0       TCP   *:postgresql (LISTEN)
```

You see the PID, kill it or change your app's port. One command, no abstraction layers.

## FAQ

**Q: Can I mix Docker and native services (e.g., Docker Postgres but native Python)?**

Yes. Run `docker run -d -p 5432:5432 postgres:15`, then connect to it from a native Python app at localhost:5432.

This gives you an isolated database without Dockerizing your entire stack. I use this for databases I don't want to configure (Elasticsearch, MongoDB) while keeping Python native for fast reloads.

**Q: What about Windows development?**

Windows is trickier. WSL2 gives you a Linux environment, and you can use the systemd or Nix approaches inside WSL. But file I/O between Windows and WSL2 is slow (similar to Docker's macOS penalty). For pure Windows (no WSL), Docker might still be the easiest option, or consider using Scoop/Chocolatey for package management + virtualenvs.

**Q: How do I share this setup with teammates who insist on Docker?**

Maintain both. Keep your docker-compose.yml for teammates who want it, but also document the native setup in a Makefile or shell.nix. In my experience, once people try the faster boot times, they start asking "how do I set up the native version?"

## The Real Question: Does Your Team Need Reproducibility or Speed?

Docker optimizes for reproducibility. Nix also does this, but without the VM overhead.

pyenv + direnv optimizes for speed and simplicity, at the cost of some cross-platform fragility. If you're working solo, or everyone on your team uses the same OS, the native approaches are almost always faster. If you're onboarding juniors every month, or half the team is on Windows and half on macOS, Docker's "just run `docker-compose up`" is still valuable.

For my own projects, I default to Homebrew + virtualenv for simple apps, Nix for anything with complex system dependencies, and Docker only when I need to test the actual production container image. The 2GB of RAM I save by not running Docker Desktop? That's 20 more Chrome tabs. And we all know that's the real productivity metric.

