Think of it as a Shipping Container for your software — pack once, run anywhere
The Shipping Container Analogy
Before shipping containers, loading cargo was chaos — different shapes, sizes, handling requirements. Then came standardized containers: pack anything inside, and it works on any ship, truck, or crane. Docker does the same for software — package your app with everything it needs, and it runs identically on any machine.
Your Machine
App + runtime + dependencies
Container
Everything packaged together
Anywhere
Runs identically everywhere
The Problem It Solves
# Classic developer nightmare
"Works on my machine!" 🤷
Developer A: Python 3.9, pip 21.0
Developer B: Python 3.11, pip 23.0
Production: Python 3.8, pip 19.0 😱
# Manual setup on every server
$ ssh prod-server
$ apt-get install python3 python3-pip
$ pip install -r requirements.txt
# Pray the versions match
# Repeat for 50 servers...
- "Works on my machine" syndrome
- Dependency version conflicts
- Hours of manual server setup
- Different behavior across environments
- Impossible to reproduce bugs
# Build once, run everywhere
$ docker build -t myapp:1.0 .
$ docker push myapp:1.0
# On ANY machine (dev, staging, prod)
$ docker run myapp:1.0
# ✓ Same Python version
# ✓ Same dependencies
# ✓ Same behavior
# Done. ☕
- Identical environment everywhere
- Dependencies locked in the image
- Single command deployment
- Bugs reproduce exactly
- Onboard new devs in minutes
Core Concepts
Images
Read-only templates containing your app + dependencies + OS. Like a snapshot or a class definition. You build them with a Dockerfile.
Containers
Running instances of images. Like objects created from a class. Isolated, lightweight, start in milliseconds. Disposable and replaceable.
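The image/container relationship is easiest to see on the command line. A minimal sketch — the `nginx:alpine` image and the container names are just illustrative choices, and the commands assume a running Docker daemon:

```shell
# Pull one image...
docker pull nginx:alpine

# ...and start several containers from it, each an isolated instance
docker run -d --name web1 nginx:alpine
docker run -d --name web2 nginx:alpine

# Two running containers, one shared read-only image
docker ps
docker images

# Containers are disposable; removing them leaves the image untouched
docker rm -f web1 web2
```

This is the class/object analogy made concrete: deleting an "object" (container) never affects the "class" (image).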
Volumes
Persistent storage that survives container restarts. Containers are ephemeral — volumes keep your data safe across container lifecycles.
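A short sketch of the volume lifecycle — the volume name `app-data` is an arbitrary example, and the commands assume a running Docker daemon:

```shell
# Create a named volume; its data lives outside any one container
docker volume create app-data

# Mount it at /data; files written there survive the container
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting.txt'

# A brand-new container mounting the same volume sees the same data
docker run --rm -v app-data:/data alpine cat /data/greeting.txt
```

Both containers above are removed on exit (`--rm`), yet the file persists — that is the volume doing its job.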
Networks
Isolated virtual networks for container communication. Containers in the same network can find each other by name. Isolated by default.
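Name-based discovery can be sketched like this — the network name `backend` and container name `db` are illustrative, and the commands assume a running Docker daemon:

```shell
# User-defined networks get automatic DNS by container name
docker network create backend

# Start a database attached to that network
docker run -d --name db --network backend \
  -e POSTGRES_PASSWORD=pass postgres:15-alpine

# Another container on the same network reaches it simply as "db"
docker run --rm --network backend alpine ping -c 1 db
```

Containers on different networks (or the default bridge, by name) cannot do this — isolation is the default.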
Dockerfile
The recipe for building an image. Step-by-step instructions: base image, install dependencies, copy code, configure startup command.
Registry
Storage for images — like npm for containers. Docker Hub is public; you can use private registries (ECR, GCR, ACR) for your own images.
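Publishing to a registry is just tagging plus pushing. A hedged sketch — `ghcr.io/acme/myapp` is a placeholder path, not a real repository:

```shell
# The registry and namespace are encoded in the tag itself
docker tag myapp:1.0 ghcr.io/acme/myapp:1.0

# Authenticate, then push
docker login ghcr.io
docker push ghcr.io/acme/myapp:1.0

# Any other machine can now pull the identical image
docker pull ghcr.io/acme/myapp:1.0
```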
Dockerfile Deep Dive
Choose a Base Image
Start with an existing image — usually an OS or runtime. Don't reinvent the wheel.
# Official Python runtime
FROM python:3.11-slim
# Or Node.js
FROM node:20-alpine
# Or just Linux
FROM ubuntu:22.04
# Pro tip: use -slim or -alpine variants for smaller images
Set Up the Working Directory
Create a home for your application inside the container.
# All subsequent commands run from here
WORKDIR /app
# Creates the directory if it doesn't exist
# Sets it as the current directory
Install Dependencies First
Copy dependency files first, then install. Docker caches layers — if dependencies don't change, this layer is reused.
# Python example
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Node.js example
COPY package*.json ./
RUN npm ci --omit=dev  # --only=production is deprecated in modern npm
Copy Application Code
Copy your source code after dependencies — code changes often, dependencies don't.
# Copy everything from current directory to /app
COPY . .
# Or be specific
COPY src/ ./src/
COPY config/ ./config/
Define the Startup Command
Tell Docker how to run your app. CMD is the default; ENTRYPOINT sets a fixed executable.
# Expose the port your app listens on
EXPOSE 8000
# Run the application
CMD ["python", "app.py"]
# Or for Node.js
CMD ["node", "server.js"]
# Or use npm
CMD ["npm", "start"]
Complete Examples
Application Code
# app.py
from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Docker! 🐳'

@app.route('/health')
def health():
    return {'status': 'healthy'}

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
# requirements.txt
flask==3.0.0
gunicorn==21.2.0
Dockerfile
# Dockerfile
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install dependencies first (cached if unchanged)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY app.py .
# Environment variables
ENV PORT=5000
# Document the port
EXPOSE 5000
# Run with gunicorn (production server)
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
Build & Run
# Build the image
$ docker build -t my-python-app:1.0 .
# Run the container
$ docker run -d -p 5000:5000 --name myapp my-python-app:1.0
# Test it
$ curl http://localhost:5000
Hello from Docker! 🐳
# View logs
$ docker logs myapp
# Stop and remove
$ docker stop myapp && docker rm myapp
Docker Compose — Multi-Container Apps
Real applications aren't just one container. You need a database, a cache, maybe a message queue. Docker Compose lets you define and run multi-container applications with a single YAML file. One command brings up your entire stack.
Web Service
Your application
Database
Persistent storage
Cache
Fast data access
docker-compose.yml
# docker-compose.yml
# (the top-level `version` key is obsolete and ignored by Compose v2)
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    restart: unless-stopped
volumes:
  db-data:
Compose Commands
# Start all services (detached)
$ docker compose up -d
# View running containers
$ docker compose ps
# View logs (all services)
$ docker compose logs -f
# View logs (specific service)
$ docker compose logs -f web
# Stop all services
$ docker compose down
# Stop and remove volumes (⚠️ destroys data)
$ docker compose down -v
# Rebuild and restart
$ docker compose up -d --build
Docker vs Docker Compose
🐳 Use docker run when:
- Running a single container
- Quick tests and experiments
- CI/CD pipeline steps
- One-off administrative tasks
- Learning Docker basics
🐙 Use docker compose when:
- App needs multiple services
- Sharing setup with team
- Local development environments
- Managing networks and volumes
- Reproducible stacks
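The difference is easiest to feel by building the same small stack both ways. A hedged sketch — the network name `appnet` and the `myapp:1.0` image are illustrative, and the commands assume a running Docker daemon:

```shell
# Without Compose: recreate the stack by hand, every time
docker network create appnet
docker volume create db-data
docker run -d --name db --network appnet \
  -e POSTGRES_PASSWORD=pass \
  -v db-data:/var/lib/postgresql/data \
  postgres:15-alpine
docker run -d --name web --network appnet -p 5000:5000 myapp:1.0

# With Compose: the same stack, captured once in docker-compose.yml
docker compose up -d
```

Everything the manual commands encode — networks, volumes, environment, ports — lives in the YAML file, versioned alongside your code.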
Common Commands
🐳 Docker Commands
# Images
docker build -t name:tag .
docker pull image:tag
docker images
docker rmi image:tag
# Containers
docker run -d -p 8080:80 image
docker ps # running
docker ps -a # all
docker stop container
docker rm container
docker logs container
docker exec -it container sh
# Cleanup
docker system prune # remove unused
docker system prune -a # remove ALL
🐙 Compose Commands
# Lifecycle
docker compose up -d
docker compose down
docker compose restart
docker compose stop
# Monitoring
docker compose ps
docker compose logs -f
docker compose logs -f service
# Building
docker compose build
docker compose up -d --build
# Shell access
docker compose exec service sh
docker compose run service cmd
When to Use Docker
✓ Use when:
- Dev/prod parity matters
- Onboarding new developers quickly
- Isolating dependencies between projects
- Deploying to cloud/Kubernetes
- Microservices architecture
- CI/CD pipelines
⚠️ Consider skipping if:
- Tiny scripts with no dependencies
- Native desktop apps (use native packaging)
- Hardware-level access needed
- Team has no container experience
- Overhead > benefit for simple projects
- Windows GUI applications
Trade-offs
Pros
- Consistent environments everywhere
- Lightweight vs VMs (shared kernel)
- Fast startup (milliseconds)
- Version control for environments
- Massive ecosystem (Docker Hub)
- Works with CI/CD and Kubernetes
Cons
- Learning curve for beginners
- Adds complexity for simple apps
- Security requires attention
- Debugging can be tricky
- Storage/image bloat over time
- Linux-based (runs inside a lightweight VM on macOS/Windows)
Key Takeaways
Images are blueprints, containers are instances
Build an image once, run it as many containers as you want. Images are immutable; containers are ephemeral.
Dockerfile = recipe for your environment
Start from a base, install dependencies, copy code, define the startup command. Layer order matters for caching.
Use volumes for persistent data
Containers are disposable. Databases, uploads, and anything important goes in volumes or bind mounts.
Docker Compose for multi-container apps
One docker-compose.yml to define all services, networks, and volumes. One command to run everything.
Build once, ship anywhere
The same image runs on your laptop, your colleague's Mac, AWS, GCP, or any Kubernetes cluster. No surprises.