You Don’t Need `venv` in Docker

Let’s be blunt. If your Dockerfile starts with `python -m venv`, you’ve already taken a wrong turn.
This may feel controversial, but it shouldn’t be: in 2025, we have better tools and practices for building Python apps in containers than layering virtual environments inside already-isolated environments.
Still, the pattern persists: teams set up a `venv`, activate it, install dependencies, maybe forget to `deactivate`, and somehow think this adds robustness or clarity.
It doesn’t. It adds complexity, redundancy, and weight.
It’s time to retire this habit—and instead embrace a build pattern that’s faster, leaner, and better for both development and production.
Let’s walk through why `venv` in Docker is a problem, what to use instead, and how to implement the new pattern in your own pipelines using `uv`, wheel builds, and multi-stage Dockerfiles with BuildKit.
Why You Don’t Need `venv` Inside Containers
Virtual environments in Python exist to isolate project dependencies from system packages. On a shared dev machine or CI runner, this makes perfect sense. But a Docker container already provides:
- Filesystem isolation
- Process isolation
- Clean, repeatable environments
- No preinstalled system Python packages (in most slim base images)
That means the whole rationale for `venv`—dependency isolation—is already addressed by the container itself.
Using `venv` inside a Docker image adds:
- Redundant complexity: You’re nesting isolation inside isolation.
- Opaque paths: Installed packages end up in `lib/python3.x/site-packages` inside the `venv`, making debugging harder.
- Extra size: You duplicate Python binaries, pip metadata, and `.pyc` caches.
- Activation scripts: You add shell indirection to activate something that doesn’t need to exist.
Worse, using `venv` inside multi-stage Docker builds can break layer caching or introduce inconsistencies if the interpreter or environment path changes between stages.
In short: when building Docker images for Python applications, the container is the environment. You don’t need another one inside it.
Introducing `uv`: Modern Python Tooling for Clean Builds
`uv` is a modern Python packaging tool that replaces several older components in the ecosystem:
- `pip` for installing dependencies
- `pip-tools` for locking them
- `virtualenv` for isolating them
- `build` for building wheels
It’s written in Rust, and it’s fast. Incredibly fast.
But what really makes `uv` interesting for Docker use is its clean, modern workflow. You don’t need to bootstrap `pip`, `setuptools`, or `wheel` manually. You don’t need to worry about activation or patching environments.
Just declare your dependencies in `pyproject.toml`, and `uv` takes care of resolution, locking, and packaging.
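For example, a minimal `pyproject.toml` might look like this (the project name, dependencies, entry point, and build backend are placeholders; adapt them to your app):

```toml
[project]
name = "my-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "fastapi",
    "uvicorn",
]

# Console script so the runtime image can start the app by name
[project.scripts]
my_app = "my_app.main:run"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```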
Here’s a typical workflow for preparing a Python app for container packaging:
```bash
uv pip install --system build
uv run -m build -w
```
This produces a wheel in `dist/` that can be installed in a clean container image without any build dependencies.
This separation—building in one stage, installing in another—is what gives us small, production-grade images.
Build Wheels, Not State
Python wheels are the standard built distribution format for Python packages. Unlike `pip install .`, which builds and installs in one opaque step (and can accidentally install dev files or dirty state), wheels are:
- Portable
- Reproducible
- Auditable
When you build a wheel using `uv run -m build -w`, you freeze the application state in a way that can be reused, tested, cached, and validated independently of the source tree.
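Because a wheel is just a zip archive, you can audit exactly what will ship before it ever reaches an image:

```bash
# A wheel is a plain zip archive; list its contents to see what will ship
unzip -l dist/*.whl
```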
This aligns perfectly with Docker’s layering and immutability model. You build once, install once, and never have to worry about mismatches in source files, `.pyc` conflicts, or leftover artifacts from previous builds.
Caveat: Keep Dev Dependencies Separate
A common pitfall when building wheels is accidentally including development tools in your runtime package.
Python supports optional dependencies in `pyproject.toml`. Use them to isolate dev tools from runtime dependencies:
```toml
[project.optional-dependencies]
dev = ["pytest", "mypy", "ruff", "coverage"]
```
Then make sure your wheel is built in a clean environment—not in the same one where you lint or test. This ensures that your production image stays minimal and secure.
If you include testing or dev dependencies in the final wheel, you’ll inflate your image and potentially expose tools you didn’t intend to ship.
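In CI or a local test environment, pull those extras in explicitly; the wheel build never needs to see them. A minimal sketch, assuming the `dev` extra defined above:

```bash
# Test/lint job only: install the project together with its dev extras
uv pip install --system ".[dev]"
pytest
```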
Use a multi-stage build. More on that shortly.
Caveat: Avoid `.env` Files in Production
Another common mistake: relying on `.env` files at runtime.
These files are useful in development, especially with tools like `direnv`, `dotenv`, or `docker-compose`, but they should not be present in your production image.
Why?
- They’re often not versioned.
- They contain secrets in plaintext.
- They add indirection and inconsistency.
Instead, use environment variables directly—set by your orchestrator (e.g., `docker run -e`, ECS task definitions, Kubernetes manifests, or CI secrets).
Your app should fail fast and loudly if a required variable isn’t set.
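A minimal fail-fast sketch (the variable name is a placeholder):

```python
import os

# Crash at startup if required configuration is missing, instead of
# failing later with a confusing None. DATABASE_URL is a placeholder name.
DATABASE_URL = os.environ["DATABASE_URL"]  # raises KeyError if unset
```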
If you must support `.env` locally, make it opt-in and dev-only.
Building It Right: Dockerfile with `uv` and BuildKit
Here’s how to do it properly. A production-ready, multi-stage Dockerfile that:
- Builds a wheel in an isolated stage
- Uses a BuildKit mount instead of copying files from the builder stage
- Installs only the final artifact into the runtime image
```dockerfile
# syntax=docker/dockerfile:1.4
FROM python:3.12-slim AS base

# Stage 1: Build wheel
FROM base AS builder
RUN pip install uv
WORKDIR /app
COPY . /app
RUN uv pip install --system build && uv run -m build -w

# Stage 2: Runtime image
FROM base AS final
WORKDIR /app

# Efficient read-only mount from the builder stage using BuildKit; the mounted
# wheels never become an image layer, so there is nothing to clean up afterwards
RUN --mount=from=builder,source=/app/dist,target=/wheels \
    pip install /wheels/*.whl

# Assumes the wheel defines a console-script entry point named "my_app"
CMD ["my_app"]
```
A few notes:
- This uses the newer `RUN --mount=...` syntax, which requires BuildKit. You can enable it with `DOCKER_BUILDKIT=1`.
- We never install dev tools in the final image.
- We never copy the full source tree into the runtime.
- The image is deterministic, minimal, and fast to build.
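Building it is a single command (the image tag is just an example; recent Docker versions enable BuildKit by default):

```bash
DOCKER_BUILDKIT=1 docker build -t my-app:latest .
```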
Conclusion: Simpler, Faster, More Secure
Old habits die hard. `venv` in Docker has been passed around in tutorials, blog posts, and Stack Overflow answers for over a decade.
But the world has moved on.
With tools like `uv`, modern packaging standards, and multi-stage Docker builds, we have all the ingredients for clean, minimal, production-ready Python containers.
Skip the `venv`.
Build a wheel.
Install it cleanly.
Ship lean containers.
Your future self—and your CI pipeline—will thank you.