Docker Container Fundamentals: Containers vs Virtual Machines | Linux Mastery Series
What Are Docker Container Fundamentals?
Docker containers are a lightweight virtualization technology that packages an application together with its dependencies into an isolated runtime environment. Unlike virtual machines, which virtualize entire hardware stacks, containers share the host operating system kernel while maintaining process isolation through Linux namespaces, cgroups, and layered filesystems. This architecture lets containers start in a fraction of a second, consume minimal resources (typically 10-100MB vs 1-10GB for VMs), and achieve near-native performance while providing consistent deployment across development, testing, and production environments.
Key advantages at a glance:
- Startup speed: Containers start in under a second to a few seconds vs 30-60 seconds for VMs
- Resource efficiency: Sharing the host kernel eliminates most per-instance OS overhead
- Portability: Run identically across any Linux system with Docker installed
- Isolation: Process-level separation without hypervisor overhead
Table of Contents
- What Are Docker Containers and How Do They Work?
- How Do Containers Differ From Virtual Machines?
- What Is Docker Architecture and Its Core Components?
- How to Install Docker on Linux Systems?
- What Are Docker Images and How to Build Them?
- How to Run and Manage Docker Containers?
- What Is Docker Networking and How Does It Work?
- How to Manage Docker Storage and Volumes?
- What Are Docker Compose and Multi-Container Applications?
- FAQ: Docker Container Fundamentals
- Troubleshooting Common Docker Issues
What Are Docker Containers and How Do They Work?
Docker containers represent a revolutionary approach to application deployment, fundamentally changing how we think about virtualization. Therefore, understanding the underlying technology becomes essential for modern system administration.
Understanding Container Technology
Containers are lightweight, standalone executable packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings. Moreover, they leverage several Linux kernel features to achieve isolation without the overhead of traditional virtualization.
Core Linux technologies enabling containers:
- Namespaces: Provide isolation for processes, network, filesystem, and users
- Control Groups (cgroups): Limit and monitor resource usage (CPU, memory, I/O)
- Union Filesystems: Enable layered file systems for efficient storage
- Security mechanisms: SELinux, AppArmor, and seccomp profiles
Consequently, these technologies work together to create isolated environments that share the host kernel while maintaining strong separation boundaries.
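You can observe these primitives without Docker at all. The following sketch uses the util-linux unshare tool (present on most modern distributions) to drop a shell into fresh PID, mount, UTS, and network namespaces:
# Create new PID, mount, UTS, and network namespaces for a shell
sudo unshare --fork --pid --mount-proc --uts --net /bin/bash
# Inside the new namespaces:
ps aux          # shows only this shell and ps itself (PID 1 is bash)
hostname demo   # changes the hostname only inside the UTS namespace
exit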
How Container Isolation Works
When you start a container, Docker creates multiple namespace types:
# View namespaces for a running container
sudo ls -la /proc/$(docker inspect -f '{{.State.Pid}}' container_name)/ns/
This command reveals namespace types:
- pid – Process isolation
- net – Network isolation
- mnt – Filesystem mount points
- uts – Hostname and domain name
- ipc – Inter-process communication
- user – User and group ID isolation
Furthermore, each namespace ensures that processes inside containers cannot interfere with processes outside or in other containers, creating robust isolation boundaries.
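To see cgroups at work alongside namespaces, the hedged example below starts a memory-capped container and reads the limit back from the cgroup filesystem. The path shown assumes cgroup v2 with the systemd cgroup driver; it varies by distribution and Docker configuration:
# Start a container with a 256MB memory cap
docker run -d --memory 256m --name capped nginx:latest
# Read the limit from the container's cgroup (path varies by setup)
cat /sys/fs/cgroup/system.slice/docker-$(docker inspect -f '{{.Id}}' capped).scope/memory.max
# Expected output: 268435456 (256MB in bytes)
docker rm -f capped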
How Do Containers Differ From Virtual Machines?
Understanding the architectural differences between containers and virtual machines is crucial for making informed infrastructure decisions. Specifically, these technologies solve similar problems through fundamentally different approaches.
Architectural Comparison
Virtual Machines (Traditional Approach):
- Run complete operating system with dedicated kernel
- Require hypervisor (KVM, VMware, VirtualBox) for hardware virtualization
- Each VM includes full OS overhead (1-10GB+ per instance)
- Boot times typically 30-60 seconds
- Strong isolation through hardware virtualization
Containers (Modern Approach):
- Share host operating system kernel
- Use OS-level virtualization (namespaces, cgroups)
- Minimal overhead (typically 10-100MB per instance)
- Start in milliseconds to seconds
- Process-level isolation without hypervisor
Performance and Resource Comparison
| Metric | Virtual Machines | Docker Containers |
|---|---|---|
| Startup Time | 30-60 seconds | 0.1-2 seconds |
| Memory Overhead | 1-10GB per VM | 10-100MB per container |
| Disk Space | 5-50GB per VM | 50-500MB per container |
| CPU Performance | 5-10% overhead | Near-native performance |
| Density | 10-50 per host | 100-1000+ per host |
| Portability | Moderate (hardware dependent) | Excellent (consistent across platforms) |
When to Use Containers vs Virtual Machines
Use Docker containers when:
- Deploying microservices architectures
- Running multiple instances of same application
- Continuous integration/continuous deployment (CI/CD) pipelines
- Development environment consistency is critical
- Rapid scaling requirements exist
- Cloud-native application development
Use virtual machines when:
- Running different operating systems on same hardware
- Legacy applications requiring full OS
- Strong security isolation is mandatory
- Testing kernel-level modifications
- Running untrusted code requiring hardware-level isolation
Nevertheless, many modern infrastructures use both technologies together: VMs for strong isolation between tenants, containers for application density within VMs.
What Is Docker Architecture and Its Core Components?
Docker architecture consists of several interconnected components that work together to provide containerization capabilities. Understanding these components helps administrators manage containerized applications effectively.
Docker Engine Components
Docker Engine comprises three primary components:
- Docker Daemon (dockerd)
- Background service managing containers, images, networks, and volumes
- Listens for Docker API requests
- Handles container lifecycle operations
- Docker CLI (docker)
- Command-line interface for interacting with Docker daemon
- Communicates via REST API
- Primary tool for container management
- Docker REST API
- Interface between CLI and daemon
- Enables programmatic container management
- Used by third-party tools and orchestration platforms
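Because the daemon exposes its REST API over a Unix socket, you can query it directly with curl. A minimal sketch (the unversioned path works; version-prefixed paths such as /v1.43/ also exist):
# List running containers via the Engine API (same data as `docker ps`)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# Ask the daemon for its version information
curl --unix-socket /var/run/docker.sock http://localhost/version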
Key Docker Components
# View Docker system information
docker system info
# Display Docker version details
docker version
Essential Docker elements:
- Images: Read-only templates for creating containers
- Containers: Runnable instances of images
- Registries: Repositories for storing and distributing images (Docker Hub, private registries)
- Networks: Virtual networks connecting containers
- Volumes: Persistent data storage mechanisms
Moreover, Docker uses a client-server architecture where the Docker client communicates with the Docker daemon, which performs the actual work of building, running, and distributing containers.
Docker Hub and Image Registries
Docker Hub serves as the default public registry containing thousands of pre-built images:
# Search for images on Docker Hub
docker search nginx
# Pull an image from Docker Hub
docker pull nginx:latest
Furthermore, organizations often deploy private registries for proprietary images, ensuring security and compliance requirements are met.
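As a minimal sketch of a private registry, you can run the official registry:2 image locally and push to it. Registries at localhost are trusted by Docker by default; real deployments add TLS and authentication:
# Start a local registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# Retag an image for the local registry and push it
docker tag nginx:latest localhost:5000/nginx:latest
docker push localhost:5000/nginx:latest
# Pull it back to verify
docker pull localhost:5000/nginx:latest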
How to Install Docker on Linux Systems?
Installing Docker on Linux varies slightly across distributions, but the process remains straightforward. Installing from Docker's official repositories ensures you receive current, supported packages with timely security updates.
Ubuntu/Debian Installation
# Update package index
sudo apt update
# Install prerequisite packages
sudo apt install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify installation
sudo docker run hello-world
RHEL/CentOS/Fedora Installation
# Remove old versions
sudo dnf remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# Set up the repository
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
# Install Docker Engine
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Verify installation
sudo docker run hello-world
Post-Installation Configuration
# Add your user to the docker group (avoid sudo for docker commands)
sudo usermod -aG docker $USER
# Apply new group membership (logout/login or use newgrp)
newgrp docker
# Test without sudo
docker run hello-world
# Configure Docker to start on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
Additionally, configure Docker daemon settings by editing /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"storage-driver": "overlay2"
}
Then restart Docker:
sudo systemctl restart docker
What Are Docker Images and How to Build Them?
Docker images serve as the foundation for containers, containing all necessary components to run applications. Therefore, understanding image creation and management becomes crucial for efficient container operations.
Understanding Docker Image Layers
Docker images consist of read-only layers stacked on top of each other. Each layer represents a filesystem change (adding, modifying, or deleting files):
# View image layers and history
docker history nginx:latest
# Inspect detailed image information
docker inspect nginx:latest
Consequently, this layered approach provides several benefits:
- Efficient storage: Shared layers reduce disk usage
- Fast builds: Only changed layers rebuild
- Easy distribution: Only new layers transfer during push/pull
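You can see layer sharing on disk with docker system df. In the sketch below, two tags that are typically built from the same Debian base report a large SHARED SIZE, meaning those layers are stored only once:
# Pull two tags that are usually built on the same Debian base
docker pull python:3.12-slim
docker pull python:3.11-slim
# The verbose view reports SHARED SIZE per image
docker system df -v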
Creating Docker Images with Dockerfile
A Dockerfile contains instructions for building images:
# Base image selection
FROM ubuntu:22.04
# Maintainer information
LABEL maintainer="admin@linuxtips.pro"
LABEL description="Custom web application container"
# Set environment variables
ENV APP_HOME=/opt/webapp \
APP_USER=appuser
# Install dependencies
RUN apt-get update && \
apt-get install -y python3 python3-pip nginx curl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Create application user
RUN useradd -m -d ${APP_HOME} ${APP_USER}
# Set working directory
WORKDIR ${APP_HOME}
# Copy application files
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY app/ .
# Change ownership
RUN chown -R ${APP_USER}:${APP_USER} ${APP_HOME}
# Switch to non-root user
USER ${APP_USER}
# Expose port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost:8080/health || exit 1
# Container startup command
CMD ["python3", "app.py"]
Building and Managing Images
# Build image from Dockerfile in current directory
docker build -t myapp:1.0 .
# Build with specific Dockerfile
docker build -f Dockerfile.prod -t myapp:prod .
# Build without cache (force complete rebuild)
docker build --no-cache -t myapp:1.0 .
# List local images
docker images
# Tag an image
docker tag myapp:1.0 myregistry.com/myapp:1.0
# Remove images
docker rmi myapp:1.0
# Remove unused images
docker image prune -a
Multi-Stage Builds for Optimization
Multi-stage builds reduce final image size by separating build and runtime environments:
# Build stage
FROM golang:1.21 AS builder
WORKDIR /build
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -o app .
# Runtime stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /build/app .
CMD ["./app"]
This approach can yield production images that are dramatically smaller than single-stage builds (often 90%+ smaller for compiled languages like Go), significantly improving deployment speed and shrinking the attack surface.
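A quick way to verify the saving: build the multi-stage Dockerfile above and compare the result against the builder image (the tag names here are illustrative):
# Build the multi-stage image
docker build -t myapp:multistage .
# golang:1.21 weighs in at several hundred MB; the Alpine-based result is tens of MB
docker images | grep -E 'golang|myapp'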
How to Run and Manage Docker Containers?
Running and managing containers effectively requires understanding Docker’s runtime capabilities and lifecycle management. Therefore, mastering these commands becomes essential for day-to-day operations.
Basic Container Operations
# Run container in foreground (interactive)
docker run -it ubuntu:22.04 /bin/bash
# Run container in background (detached)
docker run -d --name webserver nginx:latest
# Run with port mapping (host:container)
docker run -d -p 8080:80 --name web nginx:latest
# Run with environment variables
docker run -d \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=myapp \
--name mysql mysql:8.0
# Run with resource limits
docker run -d \
--memory="512m" \
--cpus="1.5" \
--name limited nginx:latest
# Run with automatic restart policy
docker run -d \
--restart unless-stopped \
--name persistent nginx:latest
Container Lifecycle Management
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Start stopped container
docker start webserver
# Stop running container (graceful shutdown)
docker stop webserver
# Stop with timeout
docker stop -t 30 webserver
# Restart container
docker restart webserver
# Pause container processes
docker pause webserver
# Unpause container
docker unpause webserver
# Kill container (force stop)
docker kill webserver
# Remove stopped container
docker rm webserver
# Remove running container (force)
docker rm -f webserver
# Remove all stopped containers
docker container prune
Interacting with Running Containers
# Execute command in running container
docker exec -it webserver /bin/bash
# Execute single command
docker exec webserver ls -la /var/www/html
# View container logs
docker logs webserver
# Follow logs in real-time
docker logs -f webserver
# View last 100 lines
docker logs --tail 100 webserver
# View logs with timestamps
docker logs -t webserver
# Copy files to/from container
docker cp localfile.txt webserver:/path/to/destination
docker cp webserver:/path/to/file.txt ./localpath/
# View container resource usage
docker stats webserver
# View all containers stats
docker stats
# Inspect container details
docker inspect webserver
# View container processes
docker top webserver
Advanced Container Features
# Run container with custom DNS
docker run -d \
--dns 8.8.8.8 \
--dns-search example.com \
--name custom-dns nginx
# Run with custom hostname
docker run -d \
--hostname mywebserver \
--name web nginx
# Run with additional capabilities
docker run -d \
--cap-add=NET_ADMIN \
--name nettools nginx
# Run with security options
docker run -d \
--security-opt seccomp=unconfined \
--security-opt apparmor=docker-default \
--name secure nginx
# Run with tmpfs mount (in-memory filesystem)
docker run -d \
--tmpfs /tmp:rw,size=100m,mode=1777 \
--name tmpfs-test nginx
What Is Docker Networking and How Does It Work?
Docker networking provides connectivity between containers and external networks through flexible networking models. Consequently, understanding these models enables architects to design secure, scalable container infrastructures.
Docker Network Drivers
Docker supports several network drivers, each suited for different use cases:
- Bridge: Default network for standalone containers
- Host: Remove network isolation (use host network directly)
- Overlay: Enable swarm service communication across hosts
- Macvlan: Assign MAC addresses to containers for legacy app compatibility
- None: Disable networking completely
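A quick, disposable way to contrast two of these drivers (this assumes the alpine image, whose BusyBox ip applet is sufficient here):
# 'none': only a loopback interface exists inside the container
docker run --rm --network none alpine ip addr
# 'host': the container sees the host's interfaces directly
docker run --rm --network host alpine ip addr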
Working with Docker Networks
# List available networks
docker network ls
# Inspect network details
docker network inspect bridge
# Create custom bridge network
docker network create \
--driver bridge \
--subnet 172.20.0.0/16 \
--gateway 172.20.0.1 \
mynetwork
# Create network with custom options
docker network create \
--driver bridge \
--subnet 192.168.10.0/24 \
--ip-range 192.168.10.128/25 \
--gateway 192.168.10.1 \
--opt "com.docker.network.bridge.name"="custom_bridge" \
advanced-network
# Run container on specific network
docker run -d \
--network mynetwork \
--name web1 nginx
# Connect existing container to network
docker network connect mynetwork web2
# Disconnect container from network
docker network disconnect mynetwork web2
# Remove unused networks
docker network prune
Container Communication Examples
# Create network and containers
docker network create app-network
# Start database container
docker run -d \
--network app-network \
--name postgres-db \
-e POSTGRES_PASSWORD=secret \
postgres:15
# Start application container (connects to db by name)
docker run -d \
--network app-network \
--name webapp \
-e DATABASE_HOST=postgres-db \
-e DATABASE_PORT=5432 \
-p 8080:8080 \
myapp:latest
# Test connectivity between containers
docker exec webapp ping -c 3 postgres-db
docker exec webapp nc -zv postgres-db 5432
Port Publishing and Mapping
# Publish single port
docker run -d -p 8080:80 nginx
# Publish to specific host interface
docker run -d -p 127.0.0.1:8080:80 nginx
# Publish multiple ports
docker run -d \
-p 80:80 \
-p 443:443 \
nginx
# Publish all exposed ports to random host ports
docker run -d -P nginx
# View port mappings
docker port nginx-container
Host Network Mode
# Run container using host network stack
docker run -d \
--network host \
--name host-network-app \
nginx
# No port mapping needed - uses host ports directly
# Container listens on port 80 of host machine
curl http://localhost:80
How to Manage Docker Storage and Volumes?
Docker storage management ensures data persistence and efficient disk utilization. Specifically, volumes provide the recommended mechanism for persisting data generated and used by containers.
Understanding Docker Storage Drivers
Docker uses storage drivers to manage image layers and container filesystems:
- overlay2: Default and recommended driver (modern Linux kernels)
- btrfs: For Btrfs filesystems
- zfs: For ZFS filesystems
- devicemapper: Legacy driver for older systems
# Check current storage driver
docker info | grep "Storage Driver"
# View storage driver details
docker info
Working with Docker Volumes
# Create named volume
docker volume create mydata
# Create volume with specific driver and options
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw \
--opt device=:/path/to/share \
nfs-volume
# List volumes
docker volume ls
# Inspect volume
docker volume inspect mydata
# Remove volume
docker volume rm mydata
# Remove unused volumes
docker volume prune
Using Volumes with Containers
# Run container with named volume
docker run -d \
--name mysql \
-v mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
mysql:8.0
# Run with multiple volumes
docker run -d \
--name webapp \
-v app-data:/var/www/html \
-v app-logs:/var/log/app \
myapp:latest
# Run with read-only volume
docker run -d \
--name readonly-app \
-v config-data:/etc/config:ro \
myapp:latest
# Mount host directory (bind mount)
docker run -d \
--name dev-env \
-v /home/user/project:/workspace \
-v /home/user/.ssh:/root/.ssh:ro \
development:latest
Bind Mounts vs Volumes
Bind Mounts:
- Direct mapping to host filesystem path
- Suitable for development environments
- Host filesystem structure exposed to container
Volumes:
- Managed by Docker under /var/lib/docker/volumes/
- Better performance than bind mounts on Docker Desktop
- Easier backup and migration
- Recommended for production
# Bind mount example (development)
docker run -d \
--name dev-server \
--mount type=bind,source=/home/user/code,target=/app \
node:18
# Volume mount example (production)
docker run -d \
--name prod-server \
--mount type=volume,source=app-data,target=/app/data \
node:18
Backup and Restore Volumes
# Backup volume to tar archive
docker run --rm \
-v mysql-data:/data \
-v $(pwd):/backup \
ubuntu tar czf /backup/mysql-backup.tar.gz -C /data .
# Restore volume from backup
docker run --rm \
-v mysql-data:/data \
-v $(pwd):/backup \
ubuntu bash -c "cd /data && tar xzf /backup/mysql-backup.tar.gz"
# Copy volume data between volumes
docker run --rm \
-v source-volume:/source:ro \
-v target-volume:/target \
ubuntu cp -av /source/. /target/
What Are Docker Compose and Multi-Container Applications?
Docker Compose simplifies managing multi-container applications through declarative YAML configuration files. Therefore, complex application stacks become easy to define, version, and deploy.
Installing Docker Compose
# Docker Compose v2 (plugin, recommended)
sudo apt install docker-compose-plugin
# Verify installation
docker compose version
# Legacy standalone version (if needed)
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Basic Docker Compose File
Create docker-compose.yml:
version: '3.8'
services:
# Web application service
web:
image: nginx:latest
container_name: web-frontend
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./html:/usr/share/nginx/html:ro
- web-logs:/var/log/nginx
networks:
- frontend
- backend
depends_on:
- api
restart: unless-stopped
# API service
api:
build:
context: ./api
dockerfile: Dockerfile
container_name: api-server
environment:
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_NAME=myapp
- DATABASE_USER=apiuser
- DATABASE_PASSWORD=secret
- REDIS_HOST=redis
ports:
- "8080:8080"
volumes:
- ./api:/app
- api-data:/data
networks:
- backend
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
restart: unless-stopped
# Database service
postgres:
image: postgres:15-alpine
container_name: postgres-db
environment:
- POSTGRES_DB=myapp
- POSTGRES_USER=apiuser
- POSTGRES_PASSWORD=secret
volumes:
- postgres-data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
networks:
- backend
healthcheck:
test: ["CMD-SHELL", "pg_isready -U apiuser"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# Redis cache
redis:
image: redis:7-alpine
container_name: redis-cache
command: redis-server --appendonly yes
volumes:
- redis-data:/data
networks:
- backend
restart: unless-stopped
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
volumes:
postgres-data:
redis-data:
api-data:
web-logs:
Docker Compose Commands
# Start all services in background
docker compose up -d
# Start and rebuild images
docker compose up -d --build
# View running services
docker compose ps
# View service logs
docker compose logs
# Follow logs for specific service
docker compose logs -f api
# Stop all services
docker compose stop
# Stop and remove containers, networks
docker compose down
# Stop and remove everything including volumes
docker compose down -v
# Restart specific service
docker compose restart api
# Scale a service to multiple instances (remove the fixed container_name from that service first)
docker compose up -d --scale api=3
# Execute command in service container
docker compose exec api /bin/bash
# View service configuration
docker compose config
# Validate compose file
docker compose config --quiet
Advanced Compose Features
version: '3.8'
services:
app:
image: myapp:latest
# Resource limits
deploy:
resources:
limits:
cpus: '2'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
# Health check configuration
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Logging configuration
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Security options
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
# Environment from file
env_file:
- .env
- .env.production
# Dependency conditions
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
FAQ: Docker Container Fundamentals
What is the main difference between Docker containers and VMs?
Containers share the host OS kernel and virtualize at the OS level, making them lightweight (10-100MB) with sub-second startup times. Virtual machines include complete operating systems with dedicated kernels, requiring hypervisors and consuming 1-10GB+ per instance with 30-60 second boot times. Consequently, containers provide better resource efficiency and portability while VMs offer stronger isolation through hardware virtualization.
How do Docker images relate to containers?
Docker images are read-only templates containing application code, dependencies, and configuration. Containers are runnable instances of images with a writable layer on top. Think of images as classes and containers as objects in programming—you can create multiple containers from a single image, each with its own runtime state and data.
Can containers run different Linux distributions than the host?
Yes, containers can run different Linux distributions (Ubuntu, Alpine, CentOS) on the same host because they only need distribution-specific userspace tools and libraries. However, all containers share the host kernel, so kernel-specific features must match. For example, you can run Ubuntu containers on a CentOS host, but kernel modules and versions must be compatible.
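You can confirm the shared kernel directly; in this sketch, every container reports the host's kernel version regardless of its distribution:
# Kernel version on the host
uname -r
# Alpine and Ubuntu containers report the same kernel version
docker run --rm alpine:latest uname -r
docker run --rm ubuntu:22.04 uname -r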
What happens to container data when a container is removed?
Data in the container’s writable layer is lost when the container is removed. To persist data, use Docker volumes or bind mounts that store data outside the container filesystem. Volumes survive container lifecycle operations and enable data sharing between containers. Always use volumes for databases, application uploads, and configuration that must persist.
How many containers can run on a single host?
Container density depends on host resources and application requirements. Modern hosts can run hundreds to thousands of containers—typical enterprise servers support 100-500 containers, while cloud instances with optimized kernels can exceed 1,000. The actual limit depends on CPU, memory, network bandwidth, and disk I/O capacity. Furthermore, orchestration platforms like Kubernetes manage these limits automatically.
Are Docker containers secure for production use?
Docker containers provide process-level isolation using Linux security features (namespaces, cgroups, SELinux, AppArmor). However, they share the kernel with the host, making them less isolated than VMs. Production security requires: running containers as non-root users, using minimal base images (Alpine), scanning images for vulnerabilities, implementing network policies, and keeping Docker Engine updated. Additionally, use security profiles (seccomp, AppArmor) and consider rootless Docker for enhanced security.
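As a hedged sketch of those recommendations combined (myapp:latest is a placeholder for an image whose process runs fine as an unprivileged user):
# Hardened run: non-root user, read-only root filesystem, no capabilities
docker run -d \
  --user 1000:1000 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges:true \
  --name hardened \
  myapp:latest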
Can Docker containers communicate across different hosts?
Yes, Docker supports multi-host networking through overlay networks in Swarm mode or third-party solutions like Kubernetes networking, Weave, or Calico. These create virtual networks spanning multiple hosts, enabling containers on different machines to communicate as if on the same network. Moreover, service discovery and load balancing become automatic in orchestrated environments.
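A minimal sketch of the built-in option, Swarm overlay networking (run on the manager node; workers join using the token that docker swarm init prints):
# Initialize Swarm mode on the manager
docker swarm init
# Create an overlay network that spans all Swarm nodes
docker network create -d overlay --attachable appnet
# Services on this network reach each other across hosts by name
docker service create --name web --network appnet nginx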
What is the difference between CMD and ENTRYPOINT in Dockerfile?
CMD provides default arguments for the container but can be overridden when running the container. ENTRYPOINT defines the main executable that always runs, with CMD providing default arguments. Use ENTRYPOINT for the main process (e.g., ENTRYPOINT ["nginx"]) and CMD for default flags (e.g., CMD ["-g", "daemon off;"]). Combining both enables flexible yet predictable container behavior.
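A tiny illustrative Dockerfile makes the interaction concrete:
# ENTRYPOINT fixes the executable; CMD supplies overridable default arguments
FROM alpine:latest
ENTRYPOINT ["ping"]
CMD ["-c", "3", "localhost"]
# Build: docker build -t pinger .
# docker run pinger              -> runs: ping -c 3 localhost
# docker run pinger -c 1 8.8.8.8 -> runs: ping -c 1 8.8.8.8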
Troubleshooting Common Docker Issues
Container Fails to Start or Exits Immediately
Symptoms: Container status shows “Exited” seconds after starting
Diagnostic commands:
# View container logs
docker logs container-name
# Check container exit code
docker ps -a --filter name=container-name
# Inspect container details
docker inspect container-name | grep -A 10 "State"
# Start container in interactive mode for debugging
docker run -it --entrypoint /bin/bash image-name
Common causes and solutions:
- Application crashes on startup
# Check application logs
docker logs container-name
# Verify environment variables
docker inspect container-name | grep -A 20 "Env"
- Missing dependencies
# Test image manually
docker run -it image-name /bin/bash
# Then try running the application command
- Permission issues
# Check file ownership
docker exec container-name ls -la /app
# Fix in Dockerfile
RUN chown -R appuser:appuser /app
USER appuser
Network Connectivity Issues Between Containers
Symptoms: Containers cannot communicate despite being on the same network
Diagnostic commands:
# Verify network connectivity
docker network inspect bridge
# Check container network settings
docker inspect -f '{{.NetworkSettings.Networks}}' container-name
# Test connectivity
docker exec container1 ping container2
docker exec container1 nc -zv container2 port
# Check DNS resolution
docker exec container1 nslookup container2
docker exec container1 cat /etc/resolv.conf
Solutions:
- Containers on different networks
# Create common network
docker network create app-network
# Connect containers
docker network connect app-network container1
docker network connect app-network container2
- Firewall blocking traffic
# Check iptables rules
sudo iptables -L -n -v | grep DOCKER
# Reload Docker to recreate rules
sudo systemctl restart docker
- Port not exposed
# Verify exposed ports
docker inspect image-name | grep ExposedPorts
# Add EXPOSE to Dockerfile
EXPOSE 8080
Storage Space Issues
Symptoms: “No space left on device” errors
Diagnostic commands:
# Check Docker disk usage
docker system df
# Detailed usage breakdown
docker system df -v
# Check specific volumes
docker volume ls
du -sh /var/lib/docker/volumes/*
Solutions:
- Clean up unused resources
# Remove stopped containers
docker container prune
# Remove unused images
docker image prune -a
# Remove unused volumes
docker volume prune
# Remove everything unused
docker system prune -a --volumes
- Limit container log sizes (in /etc/docker/daemon.json)
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
sudo systemctl restart docker
- Move Docker data directory
# Stop Docker
sudo systemctl stop docker
# Move data
sudo mv /var/lib/docker /new/location/docker
# Create symlink
sudo ln -s /new/location/docker /var/lib/docker
# Start Docker
sudo systemctl start docker
Permission Denied Errors
Symptoms: “permission denied” when running Docker commands
Diagnostic commands:
# Check Docker socket permissions
ls -l /var/run/docker.sock
# Verify user group membership
groups $USER
# Check Docker service status
sudo systemctl status docker
Solutions:
- Add user to docker group
sudo usermod -aG docker $USER
newgrp docker  # Or log out and back in
- Fix socket permissions
# Warning: a world-writable socket is insecure; prefer the group fix above
sudo chmod 666 /var/run/docker.sock
# Or restart Docker
sudo systemctl restart docker
- Run with sudo (temporary)
sudo docker run hello-world
Container Performance Issues
Symptoms: Slow container performance, high CPU/memory usage
Diagnostic commands:
# Monitor real-time resource usage
docker stats
# Check specific container stats
docker stats container-name
# View container processes
docker top container-name
# Inspect resource limits
docker inspect container-name | grep -A 20 "HostConfig"
Solutions:
- Set resource limits
docker run -d \
  --memory="1g" \
  --memory-swap="2g" \
  --cpus="2" \
  --name limited \
  nginx
- Optimize image layers
# Combine RUN commands
RUN apt-get update && \
  apt-get install -y package1 package2 && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
- Use appropriate storage driver
# Check current driver
docker info | grep "Storage Driver"
# Set overlay2 in /etc/docker/daemon.json
{ "storage-driver": "overlay2" }
Docker Daemon Won’t Start
Symptoms: Docker service fails to start, API not responding
Diagnostic commands:
# Check service status
sudo systemctl status docker
# View daemon logs
sudo journalctl -u docker.service -n 50 --no-pager
# Check daemon configuration
docker --version
cat /etc/docker/daemon.json
Solutions:
- Fix daemon.json syntax errors
# Validate JSON syntax
cat /etc/docker/daemon.json | python3 -m json.tool
# Backup and fix
sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
sudo nano /etc/docker/daemon.json
- Clear corrupted Docker state
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/network
sudo systemctl start docker
- Reinstall Docker
# Ubuntu/Debian
sudo apt remove docker-ce docker-ce-cli containerd.io
sudo apt autoremove
sudo apt install docker-ce docker-ce-cli containerd.io
Additional Resources
Official Documentation
- Docker Official Documentation – Comprehensive guides and API reference
- Docker Hub – Public container registry with millions of images
- Docker Compose Documentation – Multi-container orchestration guide
- Docker Engine API – Programmatic container management
Linux and Container Standards
- Open Container Initiative (OCI) – Container format and runtime specifications
- CNCF Container Runtimes – Cloud Native Computing Foundation projects
- Linux Containers (LXC) Project – Alternative container implementation
- Kernel.org – Namespaces – Linux namespace documentation
Security Resources
- Docker Security Best Practices – Official security guidelines
- CIS Docker Benchmark – Security configuration standards
- Snyk Container Security – Vulnerability scanning and best practices
- Docker Content Trust – Image signing and verification
Community and Learning
- Docker Community Forums – Community support and discussions
- Stack Overflow – Docker Tag – Technical Q&A
- Docker GitHub Repository – Source code and issue tracking
- Play with Docker – Free online Docker playground
Related LinuxTips.pro Articles
- Post #50: Custom Monitoring Scripts and Alerts – Monitor containerized applications
- Post #52: Nginx High-Performance Web Server Setup – Use Nginx in containers
- Post #62: Docker Image Management and Optimization – Advanced image techniques
- Post #63: Docker Networking and Volumes – Deep dive into networking
- Post #64: Kubernetes Basics Container Orchestration – Scale beyond single-host Docker
Published: October 2025 | Category: Virtualization and Containers | Linux Mastery Series: Post 61/100
Master Linux system administration from foundation to expert level with our comprehensive 100-article series. Previous: MongoDB on Linux | Next: Docker Image Management