Master your Docker interview with this comprehensive guide covering everything from basics to advanced topics. Whether you’re a junior developer or a senior architect, these Docker containerization interview questions will help you succeed in DevOps interviews in 2026.
- 1. Docker Fundamentals & Architecture
- 2. Docker Images & Containers
- 3. Docker Compose
- 4. Docker Networking
- 5. Docker Storage & Volumes
- 6. Docker Security Best Practices
- 7. Docker Performance Optimization
- 8. CI/CD with Docker, GitHub & GitHub Actions
- 9. Docker Swarm vs Kubernetes
- 10. Advanced Docker Concepts
- Q37: What is Docker Content Trust and how do you implement it?
- Q38: Explain Docker BuildKit and its advanced features
- Q39: How do you implement container health checks?
- Q40: What are Docker init processes and why are they important?
- Q41: Explain Docker rootless mode and its benefits
- Q42: How do you implement Docker multi-stage builds for different languages?
- Q43: How do you debug containers that won't start?
- Q44: What are Docker contexts and how do you use them?
- Q45: How do you implement container restart policies and handle crashes?
- Conclusion and Interview Tips
1. Docker Fundamentals & Architecture
Q1: What is Docker, and why is it important in 2026?
Answer: Docker is a containerization platform that packages applications and their dependencies into isolated containers. In 2026, Docker remains crucial because:
- Consistency: Ensures applications run identically across development, testing, and production environments
- Microservices Architecture: Essential for building and deploying microservices
- Resource Efficiency: Containers share the host OS kernel, using fewer resources than VMs
- DevOps Integration: Core component of modern CI/CD pipelines
- Cloud-Native Development: Foundation for Kubernetes and container orchestration
Interview Tip: Mention real-world benefits like faster deployment times and improved scalability.
Q2: Explain Docker architecture and its key components
Answer: Docker uses a client-server architecture with these components:
1. Docker Client: CLI tool that sends commands to Docker daemon
2. Docker Daemon (dockerd): Background service managing containers, images, networks, and volumes
3. Docker Registry: Stores Docker images (Docker Hub, private registries)
4. Docker Objects:
- Images: Read-only templates for creating containers
- Containers: Runnable instances of images
- Networks: Enable container communication
- Volumes: Persist data outside containers
Architecture Flow:
Client → Docker CLI → REST API → Docker Daemon → containerd → runc → Container
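You can observe this client/daemon split directly on any machine with Docker installed — the client and server report versions separately, and the daemon exposes its default low-level runtime:

```shell
# "Client" and "Server" (daemon) sections are listed separately
docker version

# The default low-level runtime is typically runc
docker info --format 'default runtime: {{.DefaultRuntime}}'
```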
Follow-up Questions:
- What is containerd and runc?
- How does Docker differ from virtual machines?
Q3: What’s the difference between Docker and Virtual Machines?
Answer:
| Aspect | Docker Containers | Virtual Machines |
|---|---|---|
| OS | Share host kernel | Full OS per VM |
| Size | MBs | GBs |
| Startup | Seconds | Minutes |
| Performance | Near-native | Overhead from hypervisor |
| Isolation | Process-level | Hardware-level |
| Resource Usage | Lightweight | Heavy |
Practical Example: Running 10 microservices:
- Docker: ~2GB RAM, starts in 10 seconds
- VMs: ~20GB RAM, starts in 10+ minutes
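The exact numbers vary by workload and host, but the startup gap is easy to demonstrate yourself (a rough illustration, not a benchmark):

```shell
# A minimal container typically cold-starts in well under a second
time docker run --rm alpine true
```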
Interview Tip: Emphasize that containers are ideal for microservices, while VMs are better for complete OS isolation.
Q4: What are Docker namespaces and cgroups?
Answer:
Namespaces provide isolation for containers:
- PID namespace: Process isolation
- NET namespace: Network isolation
- MNT namespace: Filesystem mount points
- UTS namespace: Hostname and domain name
- IPC namespace: Inter-process communication
- USER namespace: User and group ID isolation
Cgroups (Control Groups) limit resource usage:
- CPU allocation
- Memory limits
- Disk I/O
- Network bandwidth
Example:
# Limit container to 512MB RAM and 50% CPU
docker run -m 512m --cpus="0.5" nginx
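If you want to confirm the limits actually landed, the cgroup files are visible from inside the container (paths shown are for cgroup v2 hosts; cgroup v1 uses different filenames):

```shell
docker run -d --name limited -m 512m --cpus="0.5" nginx
# Memory limit in bytes (536870912 = 512MB)
docker exec limited cat /sys/fs/cgroup/memory.max
# CPU quota/period: "50000 100000" corresponds to 0.5 CPUs
docker exec limited cat /sys/fs/cgroup/cpu.max
docker rm -f limited
```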
Q5: Explain the Docker image layering system
Answer: Docker images use a Union File System with layers:
How it works:
- Each Dockerfile instruction creates a new layer
- Layers are read-only except the top container layer
- Layers are cached and reused across images
- Only changed layers are rebuilt
Example Dockerfile:
FROM node:18-alpine # Layer 1: Base image
WORKDIR /app # Layer 2: Set working directory
COPY package*.json ./ # Layer 3: Copy dependency files
RUN npm install # Layer 4: Install dependencies
COPY . . # Layer 5: Copy application code
CMD ["node", "server.js"] # Layer 6: Set command
Benefits:
- Efficient storage (shared layers)
- Faster builds (cached layers)
- Quick distribution (only changed layers transferred)
Interview Tip: Mention optimization strategies like ordering instructions from least to most frequently changed.
2. Docker Images & Containers
Q6: What’s the difference between an image and a container?
Answer:
Image:
- Read-only template
- Blueprint for containers
- Stored in registries
- Immutable
Container:
- Running instance of an image
- Has a writable layer
- Can be started, stopped, and deleted
- Ephemeral by default
Analogy: Image is like a class in OOP, container is like an object instance.
# Pull image (download template)
docker pull nginx:latest
# Run container (create instance)
docker run -d --name web nginx:latest
# Multiple containers from same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx
Q7: How do you optimize Docker images for production?
Answer: Key optimization strategies:
1. Use Multi-Stage Builds:
# Build stage
FROM node:22 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
2. Choose Minimal Base Images (sizes are approximate):
# Bad: generic OS base, Node must be installed manually on top
FROM ubuntu:latest
# Good: official image, but Debian-based and large (~1.1GB)
FROM node:22
# Better: slim variant (~250MB)
FROM node:22-slim
# Best: alpine variant (~160MB)
FROM node:22-alpine
3. Optimize Layer Caching:
# Copy dependencies first (changes less often)
COPY package*.json ./
RUN npm install
# Copy code last (changes frequently)
COPY . .
4. Use .dockerignore:
node_modules
.git
.env
*.log
coverage/
.DS_Store
5. Combine RUN Commands:
# Bad: 3 layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# Good: 1 layer
RUN apt-get update && \
apt-get install -y curl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
Result: Image size reduced from 1.2GB to 150MB, build time from 10min to 2min.
Q8: Explain Docker build cache and how to leverage it?
Answer: Docker caches each layer during builds to speed up subsequent builds.
Cache Invalidation Rules:
- If a layer changes, all subsequent layers are rebuilt
- Cache is based on instruction and file contents
- COPY and ADD use file checksums
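A quick experiment makes these rules concrete (any project whose Dockerfile ends with COPY . . behaves this way; src/app.js here stands in for any file your build copies):

```shell
docker build -t demo .   # first build: every step executes
docker build -t demo .   # rebuild: every step reports CACHED
touch src/app.js         # change a file picked up by COPY . .
docker build -t demo .   # COPY . . and all later layers rebuild
```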
Best Practices:
# Order matters for cache efficiency
# 1. Base image (rarely changes)
FROM python:3.11-slim
# 2. System dependencies (rarely changes)
RUN apt-get update && apt-get install -y gcc
# 3. Application dependencies (changes occasionally)
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Application code (changes frequently)
COPY . .
# 5. Runtime configuration
CMD ["python", "app.py"]
Force rebuild without cache:
docker build --no-cache -t myapp:latest .
View layer sizes:
docker history myapp:latest
Q9: How do you manage Docker images in a CI/CD pipeline?
Answer: Image management strategy:
1. Tagging Strategy:
# Semantic versioning
docker tag myapp:latest myapp:1.2.3
# Git commit SHA
docker tag myapp:latest myapp:${GIT_COMMIT_SHA}
# Environment-specific
docker tag myapp:latest myapp:production
docker tag myapp:latest myapp:staging
# Date-based
docker tag myapp:latest myapp:2026-01-06
2. Registry Management:
# Login to registry
docker login registry.example.com
# Push with multiple tags
docker push myapp:1.2.3
docker push myapp:latest
# Pull specific version
docker pull myapp:1.2.3
3. Image Scanning:
# docker scan is deprecated; its successor is Docker Scout
docker scout cves myapp:latest
# Use Trivy
trivy image myapp:latest
4. Cleanup Strategy:
# Remove dangling images
docker image prune
# Remove unused images
docker image prune -a
# Remove images older than 7 days
docker image prune -a --filter "until=168h"
Q10: What are the different ways to create a Docker image?
Answer:
1. Dockerfile (Most Common):
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker build -t mynginx:1.0 .
2. Docker Commit (Not Recommended for Production):
# Run container and make changes
docker run -it ubuntu bash
apt-get update && apt-get install curl
# Commit changes to new image
docker commit container_id myubuntu:curl
3. Import from Tarball:
docker import backup.tar myimage:latest
4. Using BuildKit (Modern, Faster):
DOCKER_BUILDKIT=1 docker build -t myapp .
Interview Tip: Always recommend Dockerfile for reproducibility and version control.
3. Docker Compose
Q11: What is Docker Compose and when should you use it?
Answer: Docker Compose is a tool for defining and running multi-container applications using a YAML file. (Compose v2 ships as the `docker compose` plugin; the standalone `docker-compose` binary used in the examples below accepts the same commands.)
Use Cases:
- Development environments with multiple services
- Microservices architecture
- Testing with dependencies (database, cache, queue)
- Local service orchestration
Example docker-compose.yml:
services:
  web:
    build: ./web
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
    depends_on:
      - db
      - redis
    volumes:
      - ./web:/app
    networks:
      - app-network
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network
  redis:
    image: redis:7-alpine
    networks:
      - app-network
volumes:
  postgres-data:
networks:
  app-network:
    driver: bridge
Commands:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f web
# Scale service
docker-compose up -d --scale web=3
# Stop all services
docker-compose down
# Remove volumes too
docker-compose down -v
Q12: Explain Docker Compose networking and service discovery
Answer: Docker Compose automatically creates a network for your application.
Service Discovery:
services:
  backend:
    image: myapi:latest
  frontend:
    image: myweb:latest
    environment:
      # Services can reach each other by name
      - API_URL=http://backend:8080
How it works:
- Compose creates a default network
- Services join this network automatically
- DNS resolution allows service-to-service communication by name
- Each service gets its own hostname
Custom Networks:
services:
  web:
    networks:
      - frontend
      - backend
  api:
    networks:
      - backend
  db:
    networks:
      - backend
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true # No external access
Testing connectivity:
# Execute command in container
docker-compose exec web ping api
# Check network
docker network inspect myapp_backend
Q13: How do you handle environment-specific configurations in Docker Compose?
Answer: Multiple approaches:
1. Environment Files:
# docker-compose.yml
services:
  web:
    env_file:
      - .env.common
      - .env.production
# .env.production
NODE_ENV=production
DATABASE_URL=postgres://prod-db:5432/app
REDIS_URL=redis://prod-redis:6379
LOG_LEVEL=error
2. Multiple Compose Files:
# Base configuration
docker-compose.yml
# Development overrides
docker-compose.dev.yml
# Production overrides
docker-compose.prod.yml
# docker-compose.prod.yml (the top-level version key is obsolete in Compose v2)
services:
  web:
    image: myapp:${VERSION}
    restart: always
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
Usage:
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
3. Variable Substitution:
services:
  web:
    image: myapp:${VERSION:-latest}
    ports:
      - "${WEB_PORT:-3000}:3000"
Q14: How do you debug issues in Docker Compose?
Answer: Debugging strategies:
1. View Logs:
# All services
docker-compose logs
# Specific service with follow
docker-compose logs -f web
# Last 100 lines
docker-compose logs --tail=100 web
# Timestamps
docker-compose logs -t web
2. Check Service Status:
# List running services
docker-compose ps
# Check resource usage
docker stats $(docker-compose ps -q)
3. Execute Commands:
# Interactive shell
docker-compose exec web sh
# Run command
docker-compose exec db psql -U user -d myapp
# Run as root
docker-compose exec --user root web bash
4. Inspect Configuration:
# View resolved compose file
docker-compose config
# Validate compose file
docker-compose config --quiet
5. Network Debugging:
# Test connectivity
docker-compose exec web ping db
# Check DNS resolution
docker-compose exec web nslookup db
# Inspect network
docker network inspect myapp_default
6. Volume Issues:
# List volumes
docker volume ls
# Inspect volume
docker volume inspect myapp_postgres-data
# Check permissions
docker-compose exec web ls -la /app
4. Docker Networking
Q15: Explain different Docker network drivers
Answer:
1. Bridge (Default):
- Default network for standalone containers
- Containers can communicate via IP or container name
- Provides network isolation
docker network create my-bridge-network
docker run -d --network my-bridge-network --name web nginx
docker run -d --network my-bridge-network --name api node:18
2. Host:
- Container uses host’s network stack
- No network isolation
- Better performance for high-throughput applications
docker run -d --network host nginx
# Container accessible at host's IP directly
3. None:
- No networking
- Complete isolation
- Useful for batch jobs
docker run -d --network none alpine sleep 3600
4. Overlay:
- Multi-host networking
- Used in Docker Swarm/Kubernetes
- Enables container communication across hosts
docker network create -d overlay my-overlay-network
5. Macvlan:
- Assigns MAC address to container
- Container appears as physical device
- Used for legacy applications
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 macvlan-net
Q16: How do you expose container ports and what’s the difference between EXPOSE and publish?
Answer:
EXPOSE (Documentation only):
# Dockerfile
EXPOSE 8080
# Only documents intended port, doesn't actually publish
Publish Ports (Actual port mapping):
# Map container port 8080 to host port 3000
docker run -p 3000:8080 myapp
# Map to random host port
docker run -P myapp
# Bind to specific interface
docker run -p 127.0.0.1:3000:8080 myapp
# Multiple ports
docker run -p 3000:8080 -p 3001:8081 myapp
# UDP port
docker run -p 3000:8080/udp myapp
Port Mapping Scenarios:
# docker-compose.yml
services:
  web:
    ports:
      # HOST:CONTAINER
      - "3000:8080" # Public access
      - "127.0.0.1:3001:8081" # Localhost only
      - "3002" # Random host port
    expose:
      - "9090" # Only to other services
Check port mappings:
docker port container_name
Q17: How do containers communicate with each other?
Answer:
1. Same Bridge Network (by name):
docker network create mynetwork
docker run -d --name db --network mynetwork postgres
docker run -d --name app --network mynetwork myapp
# In the app container, the db service resolves by name
nc -zv db 5432
2. Default Bridge (by IP only):
docker run -d --name db postgres
DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db)
docker run -d --name app -e DATABASE_URL=$DB_IP myapp
3. Links (Legacy, not recommended):
docker run -d --name db postgres
docker run -d --name app --link db:database myapp
4. Docker Compose (Automatic DNS):
services:
  api:
    build: ./api
  worker:
    build: ./worker
    environment:
      - API_URL=http://api:3000
5. Host Network:
docker run --network host app1
docker run --network host app2
# Both can communicate via localhost
Best Practice Example:
services:
  frontend:
    networks:
      - public
  backend:
    networks:
      - public
      - private
  database:
    networks:
      - private
networks:
  public:
  private:
    internal: true
Q18: How do you troubleshoot Docker network issues?
Answer:
1. Network Inspection:
# List networks
docker network ls
# Inspect network
docker network inspect bridge
# See which containers are connected
docker network inspect mynetwork --format '{{range .Containers}}{{.Name}} {{end}}'
2. Container Network Details:
# Get container IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name
# All network settings
docker inspect container_name | jq '.[0].NetworkSettings'
3. Connectivity Tests:
# Ping between containers
docker exec web ping api
# Test port connectivity
docker exec web nc -zv api 8080
# DNS resolution
docker exec web nslookup api
# Trace route
docker exec web traceroute api
# Check listening ports
docker exec web netstat -tlnp
4. Common Issues & Solutions:
Issue: Container can’t reach another container by name
# Solution: Ensure on same custom network
docker network connect mynetwork container1
docker network connect mynetwork container2
Issue: Can’t access exposed port
# Check port mapping
docker port container_name
# Verify process listening
docker exec container_name netstat -tlnp | grep 8080
# Check firewall
sudo iptables -L -n | grep 8080
Issue: DNS not working
# Check DNS settings
docker exec container_name cat /etc/resolv.conf
# Test Docker DNS
docker exec container_name nslookup google.com
5. Network Performance:
# Test bandwidth between containers
docker exec -it web iperf3 -s
docker exec -it api iperf3 -c web
5. Docker Storage & Volumes
Q19: Explain Docker volumes vs bind mounts vs tmpfs
Answer:
1. Volumes (Recommended):
- Managed by Docker
- Stored in Docker’s directory
- Persist data outside container lifecycle
- Can be shared between containers
# Create named volume
docker volume create mydata
# Use volume
docker run -v mydata:/app/data myapp
# Docker Compose
services:
  db:
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
2. Bind Mounts:
- Mount host directory into container
- Full host path required
- Useful for development
# Bind mount
docker run -v /home/user/code:/app myapp
# Docker Compose
services:
  web:
    volumes:
      - ./src:/app/src:ro # Read-only
      - ./logs:/app/logs # Read-write
3. tmpfs Mounts:
- Stored in host memory only
- Never written to disk
- Useful for sensitive data
# tmpfs mount
docker run --tmpfs /app/temp:rw,size=100m,mode=1777 myapp
Comparison:
| Feature | Volume | Bind Mount | tmpfs |
|---|---|---|---|
| Location | Docker managed | Host filesystem | Host memory |
| Persistence | Yes | Yes | No |
| Performance | Good | Good | Fastest |
| Portability | High | Low | High |
| Use Case | Production data | Development | Secrets/temp |
Q20: How do you backup and restore Docker volumes?
Answer:
Backup Volume:
# Method 1: Using tar
docker run --rm \
-v mydata:/data \
-v $(pwd):/backup \
alpine tar czf /backup/mydata-backup.tar.gz -C /data .
# Method 2: Using docker cp
docker create --name temp -v mydata:/data alpine
docker cp temp:/data ./backup
docker rm temp
Restore Volume:
# Create new volume
docker volume create mydata-restored
# Restore data
docker run --rm \
-v mydata-restored:/data \
-v $(pwd):/backup \
alpine tar xzf /backup/mydata-backup.tar.gz -C /data
# Verify
docker run --rm -v mydata-restored:/data alpine ls -la /data
Automated Backup Script:
#!/bin/bash
# backup-volumes.sh
VOLUMES=$(docker volume ls -q)
BACKUP_DIR="./backups/$(date +%Y%m%d)"
mkdir -p $BACKUP_DIR
for volume in $VOLUMES; do
echo "Backing up $volume..."
docker run --rm \
-v $volume:/data \
-v $BACKUP_DIR:/backup \
alpine tar czf /backup/${volume}.tar.gz -C /data .
done
echo "Backup complete!"
Docker Compose Backup:
# Backup all volumes defined in compose
docker-compose down
docker run --rm \
-v myapp_db-data:/data \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/db-data.tar.gz -C /data .
docker-compose up -d
Q21: What are volume drivers and when would you use them?
Answer: Volume drivers enable storing volumes on remote hosts or cloud providers.
Built-in Driver (local):
docker volume create --driver local myvolume
Common Volume Drivers:
1. NFS (Network File System):
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw \
--opt device=:/path/to/share \
nfs-volume
2. CIFS/SMB:
docker volume create --driver local \
--opt type=cifs \
--opt o=username=user,password=pass \
--opt device=//server/share \
smb-volume
3. Cloud Storage (AWS EBS):
docker volume create --driver rexray/ebs \
--opt size=10 \
aws-volume
Docker Compose with NFS:
services:
  app:
    volumes:
      - nfs-data:/data
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw
      device: ":/mnt/data"
Use Cases:
- Multi-host deployments
- Shared storage across containers
- Cloud-native applications
- High availability setups
Q22: How do you manage storage permissions in Docker?
Answer:
Problem: Permission denied errors when containers access volumes
Solutions:
1. Match User IDs:
# Create user with specific UID
FROM node:18-alpine
RUN addgroup -g 1001 -S appgroup && \
adduser -u 1001 -S appuser -G appgroup
USER appuser
WORKDIR /app
# Run container as specific user
docker run --user 1001:1001 -v mydata:/app/data myapp
2. Change Volume Permissions:
# Using init container pattern
docker run --rm -v mydata:/data alpine chown -R 1001:1001 /data
3. Docker Compose:
services:
  app:
    user: "1001:1001"
    volumes:
      - app-data:/data
  init:
    image: alpine
    volumes:
      - app-data:/data
    command: chown -R 1001:1001 /data
volumes:
  app-data:
4. Bind Mount Permissions:
# Development: give everyone access (not for production)
chmod -R 777 ./data
# Better: match host user
docker run --user $(id -u):$(id -g) -v $(pwd):/app myapp
5. Named Volumes with Permissions:
FROM alpine
RUN mkdir -p /data && chown -R 1001:1001 /data
VOLUME /data
USER 1001
6. Docker Security Best Practices
Q23: What are the top Docker security best practices?
Answer:
1. Use Official and Minimal Base Images:
# Bad
FROM ubuntu:latest
# Good
FROM node:18-alpine
# Best - Distroless
FROM gcr.io/distroless/nodejs18-debian11
2. Run as Non-Root User:
FROM node:18-alpine
# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Set ownership
COPY --chown=appuser:appgroup . /app
USER appuser
WORKDIR /app
CMD ["node", "server.js"]
3. Scan Images for Vulnerabilities:
# Docker Scout (successor to the deprecated docker scan)
docker scout cves myapp:latest
# Trivy
trivy image myapp:latest
# Snyk
snyk container test myapp:latest
4. Use Read-Only Filesystem:
docker run --read-only --tmpfs /tmp myapp
Docker Compose:
services:
  web:
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
5. Limit Container Resources:
docker run \
--memory="512m" \
--cpus="0.5" \
--pids-limit=100 \
myapp
6. Drop Unnecessary Capabilities:
docker run \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
myapp
7. Use Secrets Management:
# Docker secrets (Swarm)
echo "db_password" | docker secret create db_pass -
# Or pass an env file at run time
docker run --env-file .env.secret myapp
Q24: How do you scan Docker images for security vulnerabilities?
Answer:
1. Docker Scout (replaces the deprecated docker scan):
# Quick vulnerability overview
docker scout quickview myapp:latest
# Detailed CVE list
docker scout cves myapp:latest
# Only critical and high
docker scout cves --only-severity critical,high myapp:latest
2. Trivy (Recommended):
# Install Trivy
brew install trivy # macOS
apt-get install trivy # Ubuntu (after adding Aqua Security's apt repository)
# Scan image
trivy image myapp:latest
# Only critical and high
trivy image --severity CRITICAL,HIGH myapp:latest
# Scan Dockerfile
trivy config Dockerfile
# CI/CD integration
trivy image --exit-code 1 --severity CRITICAL myapp:latest
3. Snyk:
# Install
npm install -g snyk
# Authenticate
snyk auth
# Scan image
snyk container test myapp:latest
# Monitor in Snyk dashboard
snyk container monitor myapp:latest
4. Clair:
# Run Clair server
docker run -d --name clair -p 6060:6060 quay.io/coreos/clair:latest
# Scan with clairctl
clairctl analyze myapp:latest
CI/CD Integration Example (GitHub Actions):
name: Security Scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run Trivy scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
5. Best Practices:
- Scan during CI/CD pipeline
- Block deployments on critical vulnerabilities
- Regular rescans of running containers
- Maintain inventory of base images
- Automate patching process
Q25: Explain Docker secrets and how to manage sensitive data
Answer:
Docker Swarm Secrets (Production):
# Create secret from file
docker secret create db_password ./password.txt
# Create secret from stdin
echo "mypassword" | docker secret create db_pass -
# List secrets
docker secret ls
# Use in service
docker service create \
--name webapp \
--secret db_pass \
myapp:latest
In Application:
// Secret available at /run/secrets/db_pass
const fs = require('fs');
const dbPassword = fs.readFileSync('/run/secrets/db_pass', 'utf8').trim();
Docker Compose Secrets:
services:
  db:
    image: postgres:15
    secrets:
      - db_password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt
Environment Variables (Development Only):
# NOT recommended for production
docker run -e DATABASE_PASSWORD=secret myapp
# Better: Use env file
docker run --env-file .env.secret myapp
HashiCorp Vault Integration:
# Provide Vault connection details; the application fetches its secrets at startup
docker run \
-e VAULT_ADDR=https://vault.example.com \
-e VAULT_TOKEN=$VAULT_TOKEN \
myapp
Best Practices:
- Never commit secrets to Git
- Use different secrets per environment
- Rotate secrets regularly
- Use secret management tools (Vault, AWS Secrets Manager)
- Avoid environment variables for production secrets
Q26: How do you implement the principle of least privilege in Docker?
Answer:
1. Non-Root User:
FROM python:3.11-slim
# Create user with minimal permissions
RUN useradd -m -u 1000 appuser && \
mkdir /app && \
chown appuser:appuser /app
USER appuser
WORKDIR /app
COPY --chown=appuser:appuser requirements.txt .
RUN pip install --user -r requirements.txt
COPY --chown=appuser:appuser . .
CMD ["python", "app.py"]
2. Drop All Capabilities, Add Only Required:
services:
  web:
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE # Only if binding to ports < 1024
      - CHOWN # Only if changing file ownership
3. Read-Only Root Filesystem:
services:
  app:
    read_only: true
    tmpfs:
      - /tmp:mode=1777,size=128m
      - /var/log:mode=0755,size=64m
4. Security Options:
docker run \
--security-opt=no-new-privileges:true \
--security-opt=seccomp=default \
myapp
5. AppArmor/SELinux Profiles:
# AppArmor
docker run --security-opt apparmor=docker-default myapp
# SELinux
docker run --security-opt label=type:container_runtime_t myapp
6. Network Isolation:
services:
  frontend:
    networks:
      - public
  backend:
    networks:
      - internal
  database:
    networks:
      - internal
networks:
  public:
  internal:
    internal: true # No external access
7. Docker Performance Optimization
Q27: How do you optimize Docker container performance?
Answer:
1. Resource Limits:
# Set memory and CPU limits
docker run \
--memory="1g" \
--memory-reservation="512m" \
--memory-swap="2g" \
--cpus="1.5" \
--cpu-shares=1024 \
myapp
Docker Compose:
services:
  web:
    deploy:
      resources:
        limits:
          cpus: '1.5'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
2. Use BuildKit:
# syntax=docker/dockerfile:1.4
FROM node:18-alpine
WORKDIR /app
# BuildKit cache mounts
RUN --mount=type=cache,target=/root/.npm \
npm install -g pnpm
COPY package*.json ./
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
pnpm install --frozen-lockfile
COPY . .
RUN pnpm build
CMD ["node", "dist/server.js"]
Build with BuildKit:
DOCKER_BUILDKIT=1 docker build -t myapp .
3. Optimize Image Layers:
# Bad: Many layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
RUN apt-get clean
# Good: Single layer, smaller image
RUN apt-get update && \
apt-get install -y --no-install-recommends curl git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
4. Use Multi-Stage Builds:
# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server
# Runtime stage (10x smaller)
FROM alpine:3.18
COPY --from=builder /app/server /server
CMD ["/server"]
5. Enable Docker Daemon Optimization:
// /etc/docker/daemon.json
{
"storage-driver": "overlay2",
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 64000,
"Soft": 64000
}
}
}
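After editing daemon.json, restart the daemon and confirm the settings took effect (systemd assumed):

```shell
sudo systemctl restart docker
docker info --format 'storage: {{.Driver}}, logging: {{.LoggingDriver}}'
```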
6. Use Volume Mounts for I/O Performance:
# Better I/O than bind mounts
docker volume create fast-storage
docker run -v fast-storage:/data myapp
Q28: How do you monitor Docker container performance?
Answer:
1. Docker Stats:
# Real-time stats for all containers
docker stats
# Specific container
docker stats container_name
# No stream, single snapshot
docker stats --no-stream
# Format output
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
2. Docker Events:
# Monitor container events
docker events
# Filter by container
docker events --filter container=myapp
# Filter by type
docker events --filter type=container
3. Prometheus + cAdvisor:
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
4. Custom Metrics with PromQL:
# prometheus.yml
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
Example Queries:
# CPU usage
rate(container_cpu_usage_seconds_total[5m])
# Memory usage
container_memory_usage_bytes
# Network I/O
rate(container_network_receive_bytes_total[5m])
5. Application Performance Monitoring:
// Node.js with prom-client
const promClient = require('prom-client');
const register = new promClient.Registry();
const httpRequestDuration = new promClient.Histogram({
  name: 'http_request_duration_ms',
  help: 'Duration of HTTP requests in ms',
  labelNames: ['method', 'route', 'status_code']
});
register.registerMetric(httpRequestDuration);
// Expose the registry so Prometheus can scrape it
const http = require('http');
http.createServer(async (req, res) => {
  res.setHeader('Content-Type', register.contentType);
  res.end(await register.metrics());
}).listen(9091);
Q29: What causes Docker containers to run slowly and how do you fix it?
Answer:
Common Performance Issues:
1. Insufficient Resources:
# Problem: Container hitting memory limit
docker stats myapp
# Shows high memory usage, possible OOM kills
# Solution: Increase limits
docker update --memory="2g" --cpus="2" myapp
# Or in Compose
services:
app:
deploy:
resources:
limits:
memory: 2G
cpus: '2'
2. Disk I/O Bottleneck:
# Problem: Slow disk operations
docker run -v /slow/hdd:/data myapp
# Solution: move data to faster storage (note: tmpfs is RAM-backed
# and non-persistent, so this only suits ephemeral data)
docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  fast-storage
docker run -v fast-storage:/data myapp
3. Logging Overhead:
# Problem: Large logs consuming resources
docker logs myapp | wc -l # Millions of lines
# Solution: Configure log rotation
docker run \
--log-opt max-size=10m \
--log-opt max-file=3 \
myapp
// /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
4. Network Latency:
# Problem: DNS resolution slow
docker exec myapp time nslookup google.com # Takes > 1s
# Solution: Use custom DNS
docker run --dns 8.8.8.8 --dns 1.1.1.1 myapp
5. Inefficient Image:
# Problem: Large base image
FROM ubuntu:latest # 78MB + dependencies
# Solution: Use alpine
FROM alpine:3.18 # 7MB base
# Or distroless
FROM gcr.io/distroless/base # Minimal
6. Too Many Processes:
# Problem: Multiple services in one container
CMD supervisord # Running nginx, php-fpm, redis
# Solution: One process per container
# Use docker-compose with separate services
Diagnosis Script:
#!/bin/bash
# diagnose.sh
CONTAINER=$1
echo "=== CPU Usage ==="
docker stats --no-stream --format '{{.CPUPerc}}' $CONTAINER
echo "=== Memory Usage ==="
docker stats --no-stream --format '{{.MemUsage}}' $CONTAINER
echo "=== Disk I/O ==="
docker exec $CONTAINER iostat -x 1 5
echo "=== Network Latency ==="
docker exec $CONTAINER ping -c 5 8.8.8.8
echo "=== Process Count ==="
docker exec $CONTAINER ps aux | wc -l
echo "=== Log Size ==="
docker inspect --format='{{.LogPath}}' $CONTAINER | xargs ls -lh
8. CI/CD with Docker, GitHub & GitHub Actions
Q30: How do you integrate Docker with GitHub Actions?
Answer:
Complete GitHub Actions Workflow:
# .github/workflows/docker-ci-cd.yml
name: Docker CI/CD Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
release:
types: [ published ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-test:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix={{branch}}-
type=raw,value=${{ github.sha }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Run tests in container
run: |
docker run --rm ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} npm test
- name: Security scan with Trivy
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
severity: 'CRITICAL,HIGH'
exit-code: '1'
deploy-staging:
needs: build-and-test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/develop'
steps:
- name: Deploy to staging
run: |
# SSH to staging server and deploy
ssh ${{ secrets.STAGING_HOST }} "
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:develop
docker-compose -f docker-compose.staging.yml up -d
"
deploy-production:
needs: build-and-test
runs-on: ubuntu-latest
if: github.event_name == 'release'
steps:
- name: Deploy to production
run: |
# Deploy to production with health checks
ssh ${{ secrets.PROD_HOST }} "
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker-compose -f docker-compose.prod.yml up -d
./health-check.sh
"
Q31: How do you implement Docker layer caching in CI/CD?
Answer:
GitHub Actions with BuildKit Cache:
- name: Build with cache
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: myapp:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
BUILDKIT_INLINE_CACHE=1
Docker Compose Cache:
# docker-compose.ci.yml
version: '3.8'
services:
app:
build:
context: .
cache_from:
- myapp:latest
- myapp:cache
image: myapp:${CI_COMMIT_SHA}
Optimized Dockerfile for Caching:
# syntax=docker/dockerfile:1.4
FROM node:18-alpine AS deps
WORKDIR /app
# Cache dependencies layer
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci --only=production
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
COPY . .
RUN --mount=type=cache,target=/app/.next/cache \
npm run build
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
CMD ["npm", "start"]
Multi-Registry Caching:
- name: Build with multi-registry cache
run: |
docker buildx build \
--cache-from type=registry,ref=myregistry/myapp:cache \
--cache-to type=registry,ref=myregistry/myapp:cache,mode=max \
--push \
-t myregistry/myapp:${{ github.sha }} \
.
Q32: How do you implement a complete Docker-based deployment pipeline?
Answer:
Multi-Environment Pipeline:
# .github/workflows/complete-pipeline.yml
name: Complete Docker Pipeline
on:
push:
branches: [ main, develop, feature/* ]
jobs:
lint-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Lint Dockerfile
uses: hadolint/hadolint-action@v3.1.0
with:
dockerfile: Dockerfile
- name: Run unit tests
run: |
docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
docker-compose -f docker-compose.test.yml down
build-and-scan:
needs: lint-and-test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build image
uses: docker/build-push-action@v5
with:
context: .
load: true
tags: myapp:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Scan for vulnerabilities
run: |
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy:latest image \
--severity HIGH,CRITICAL \
--exit-code 1 \
myapp:${{ github.sha }}
- name: Run integration tests
run: |
docker-compose -f docker-compose.integration.yml up -d
./wait-for-it.sh localhost:3000 -- npm run test:integration
docker-compose -f docker-compose.integration.yml down
push-to-registry:
needs: build-and-scan
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
steps:
- uses: actions/checkout@v4
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: |
myorg/myapp:${{ github.sha }}
myorg/myapp:latest
ghcr.io/${{ github.repository }}:${{ github.sha }}
ghcr.io/${{ github.repository }}:latest
deploy-staging:
needs: push-to-registry
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/develop'
environment:
name: staging
url: https://staging.example.com
steps:
- name: Deploy to staging
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.STAGING_HOST }}
username: ${{ secrets.STAGING_USER }}
key: ${{ secrets.STAGING_SSH_KEY }}
script: |
cd /opt/myapp
docker-compose pull
docker-compose up -d
docker system prune -f
deploy-production:
needs: push-to-registry
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
environment:
name: production
url: https://example.com
steps:
- name: Deploy to production
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.PROD_HOST }}
username: ${{ secrets.PROD_USER }}
key: ${{ secrets.PROD_SSH_KEY }}
script: |
cd /opt/myapp
docker-compose pull
docker-compose up -d --no-deps web
if ./health-check.sh; then
echo "Deployment healthy"
else
echo "Health check failed, rolling back"
./rollback.sh
exit 1
fi
Health Check Script:
#!/bin/bash
# health-check.sh
MAX_RETRIES=30
RETRY_INTERVAL=2
HEALTH_URL="http://localhost:3000/health"
for i in $(seq 1 $MAX_RETRIES); do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" $HEALTH_URL)
if [ $HTTP_CODE -eq 200 ]; then
echo "Health check passed!"
exit 0
fi
echo "Attempt $i/$MAX_RETRIES: Health check failed (HTTP $HTTP_CODE)"
sleep $RETRY_INTERVAL
done
echo "Health check failed after $MAX_RETRIES attempts"
exit 1
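The same retry pattern can be factored into a reusable shell function; a small sketch (the function name and the example commands are illustrative):

```shell
#!/bin/sh
# retry MAX DELAY CMD...: run CMD up to MAX times, sleeping DELAY seconds
# between attempts; returns 0 on first success, 1 if every attempt fails
retry() {
  max=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    echo "Attempt $i/$max failed" >&2
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example usage: 'true' succeeds immediately, 'false' exhausts all attempts
retry 3 0 true && echo "passed"
retry 2 0 false || echo "gave up"
```

Wrapping curl-based health probes in such a function keeps deployment scripts short and makes the retry policy easy to tune in one place.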
Q33: How do you implement blue-green deployment with Docker?
Answer:
Blue-Green Deployment Strategy:
# docker-compose.blue-green.yml
version: '3.8'
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- blue
- green
blue:
image: myapp:${BLUE_VERSION}
environment:
- COLOR=blue
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 10s
timeout: 5s
retries: 3
green:
image: myapp:${GREEN_VERSION}
environment:
- COLOR=green
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 10s
timeout: 5s
retries: 3
Nginx Configuration:
# nginx.conf
upstream backend {
server blue:3000 weight=100;
server green:3000 down; # nginx requires weight >= 1, so 'down' disables the idle colour
}
server {
listen 80;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location /health {
access_log off;
return 200 "healthy\n";
}
}
Deployment Script:
#!/bin/bash
# blue-green-deploy.sh
NEW_VERSION=$1
CURRENT_COLOR=$(docker-compose exec nginx wget -qO- http://backend/color)
if [ "$CURRENT_COLOR" == "blue" ]; then
DEPLOY_TO="green"
OLD_COLOR="blue"
else
DEPLOY_TO="blue"
OLD_COLOR="green"
fi
echo "Deploying to $DEPLOY_TO environment..."
# Update the target environment
export ${DEPLOY_TO^^}_VERSION=$NEW_VERSION
docker-compose up -d $DEPLOY_TO
# Wait for health check
echo "Waiting for $DEPLOY_TO to be healthy..."
for i in {1..30}; do
if docker-compose exec $DEPLOY_TO curl -f http://localhost:3000/health; then
echo "$DEPLOY_TO is healthy!"
break
fi
sleep 2
done
# Gradual traffic shift (nginx weights must stay >= 1)
# Note: bind-mount the conf directory rather than the single file,
# so the container sees the replacement file sed -i creates
echo "Shifting traffic to $DEPLOY_TO..."
for weight in 10 25 50 75; do
sed -i -E "s/server $DEPLOY_TO:3000[^;]*/server $DEPLOY_TO:3000 weight=$weight/" nginx.conf
sed -i -E "s/server $OLD_COLOR:3000[^;]*/server $OLD_COLOR:3000 weight=$((100-weight))/" nginx.conf
docker-compose exec nginx nginx -s reload
echo "Traffic: $weight% on $DEPLOY_TO"
sleep 30
done
# Final cutover: all traffic to the new colour, old colour out of rotation
sed -i -E "s/server $DEPLOY_TO:3000[^;]*/server $DEPLOY_TO:3000 weight=100/" nginx.conf
sed -i -E "s/server $OLD_COLOR:3000[^;]*/server $OLD_COLOR:3000 down/" nginx.conf
docker-compose exec nginx nginx -s reload
echo "Deployment complete! All traffic on $DEPLOY_TO"
GitHub Actions Integration:
deploy-blue-green:
runs-on: ubuntu-latest
steps:
- name: Deploy with blue-green
run: |
ssh ${{ secrets.PROD_HOST }} '
cd /opt/myapp
./blue-green-deploy.sh ${{ github.sha }}
'
- name: Verify deployment
run: |
sleep 60
RESPONSE=$(curl -s https://example.com/health)
if [ "$RESPONSE" != "healthy" ]; then
echo "Deployment failed! Rolling back..."
ssh ${{ secrets.PROD_HOST }} './rollback.sh'
exit 1
fi
9. Docker Swarm vs Kubernetes
Q34: What are the key differences between Docker Swarm and Kubernetes?
Answer:
| Feature | Docker Swarm | Kubernetes |
|---|---|---|
| Complexity | Simple, easy to learn | Complex, steep learning curve |
| Setup | Built into Docker | Separate installation |
| Scalability | Good for small-medium clusters | Excellent for large clusters |
| Ecosystem | Limited | Extensive (Helm, operators, etc.) |
| Auto-scaling | Manual scaling | Horizontal Pod Autoscaler |
| Self-healing | Basic restart policies | Advanced health checks & recovery |
| Load Balancing | Built-in | Requires configuration |
| Community | Smaller | Massive, industry standard |
| Use Case | Simple deployments | Complex, enterprise-grade |
When to use Docker Swarm:
- Small to medium deployments
- Team familiar with Docker
- Quick setup required
- Simple microservices architecture
- Limited DevOps resources
When to use Kubernetes:
- Large-scale deployments
- Multi-cloud strategy
- Advanced orchestration needs
- Large DevOps team
- Industry-standard required
Q35: How do you set up a Docker Swarm cluster?
Answer:
Initialize Swarm:
# On manager node
docker swarm init --advertise-addr 192.168.1.100
# Get join token for workers
docker swarm join-token worker
# Get join token for managers
docker swarm join-token manager
Join Worker Nodes:
# On worker nodes
docker swarm join \
--token SWMTKN-1-xxxxx \
192.168.1.100:2377
Deploy Service:
# Create service
docker service create \
--name web \
--replicas 3 \
--publish 80:80 \
nginx:alpine
Kubernetes Deployment:
kubectl apply -f deployment.yaml
kubectl scale deployment web-deployment --replicas=5
kubectl set image deployment/web-deployment nginx=nginx:latest
Key Differences:
- Kubernetes uses Pods (can contain multiple containers)
- More granular control in K8s
- Swarm simpler for Docker users
- K8s has namespaces for isolation
- Better monitoring/logging in K8s ecosystem
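To make the first bullet concrete, here is a minimal (illustrative) Pod manifest with two containers sharing the same network namespace — something a Swarm service cannot express:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  namespace: demo          # namespaces provide the isolation mentioned above
spec:
  containers:
    - name: app
      image: nginx:alpine
    - name: log-shipper    # sidecar sharing the Pod's network and lifecycle
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```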
Q36: Explain Kubernetes architecture and how it compares to Docker Swarm
Answer:
Kubernetes Architecture:
Control Plane Components:
- API Server: Central management point
- etcd: Distributed key-value store
- Scheduler: Assigns pods to nodes
- Controller Manager: Maintains desired state
- Cloud Controller Manager: Cloud provider integration
Node Components:
- kubelet: Node agent
- kube-proxy: Network proxy
- Container Runtime: Docker/containerd/CRI-O
Kubernetes Manifest:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
selector:
app: web
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
Comparison:
Docker Swarm Service:
docker service create \
--name web \
--replicas 3 \
--publish 80:80 \
nginx:alpine
10. Advanced Docker Concepts
Q37: What is Docker Content Trust and how do you implement it?
Answer: Docker Content Trust (DCT) provides cryptographic signing and verification of Docker images.
Enable Content Trust:
# Enable globally
export DOCKER_CONTENT_TRUST=1
# Pull only signed images
docker pull nginx:alpine
# Disable for specific command
DOCKER_CONTENT_TRUST=0 docker pull untrusted-image
Sign and Push Images:
# Generate keys (first time)
docker trust key generate mykey
# Add signer
docker trust signer add --key mykey.pub alice myrepo/myimage
# Push signed image
export DOCKER_CONTENT_TRUST=1
docker push myrepo/myimage:latest
# Will prompt for passphrase
Inspect Signatures:
# View trust data
docker trust inspect myrepo/myimage:latest
# Remove a signer
docker trust signer remove alice myrepo/myimage
GitHub Actions with Content Trust:
- name: Sign and push image
env:
DOCKER_CONTENT_TRUST: 1
DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DCT_PASSPHRASE }}
run: |
docker push myrepo/myimage:${{ github.sha }}
Use Cases:
- Ensure image integrity
- Prevent unauthorized image deployment
- Regulatory compliance
- Supply chain security
Q38: Explain Docker BuildKit and its advanced features
Answer: BuildKit is the next-generation Docker build system with improved performance and new features.
Enable BuildKit:
# One-time
DOCKER_BUILDKIT=1 docker build .
# Permanently
export DOCKER_BUILDKIT=1
# Or in daemon.json
{
"features": {
"buildkit": true
}
}
Advanced Features:
1. Cache Mounts:
# syntax=docker/dockerfile:1.4
FROM node:18-alpine
WORKDIR /app
# Cache npm dependencies
RUN --mount=type=cache,target=/root/.npm \
npm install -g pnpm
COPY package*.json ./
# Cache pnpm store
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
pnpm install --frozen-lockfile
COPY . .
RUN pnpm build
CMD ["node", "dist/server.js"]
2. Secret Mounts:
# syntax=docker/dockerfile:1.4
FROM alpine
# Mount secret at build time (not stored in image)
RUN --mount=type=secret,id=github_token \
apk add git && \
git config --global url."https://$(cat /run/secrets/github_token)@github.com/".insteadOf "https://github.com/"
# Build with secret
docker build --secret id=github_token,src=$HOME/.github-token -t myapp .
3. SSH Mounts:
# syntax=docker/dockerfile:1.4
FROM alpine
RUN apk add git openssh-client
# Use SSH keys for private repos
RUN --mount=type=ssh \
git clone git@github.com:private/repo.git
# Build with SSH
docker build --ssh default -t myapp .
4. Multi-Platform Builds:
# Create builder
docker buildx create --name multiplatform --use
# Build for multiple architectures
docker buildx build \
--platform linux/amd64,linux/arm64,linux/arm/v7 \
--push \
-t myrepo/myapp:latest \
.
5. Build-time Network Control:
# syntax=docker/dockerfile:1.4
# Disable network during build
FROM alpine
RUN --network=none echo "No network access"
# Or run a build step with the host network (requires the network.host entitlement)
RUN --network=host curl https://api.example.com
6. Parallel Builds:
# syntax=docker/dockerfile:1.4
FROM alpine AS build1
RUN echo "result 1" > /output1 # independent stages build in parallel under BuildKit
FROM alpine AS build2
RUN echo "result 2" > /output2
FROM alpine
COPY --from=build1 /output1 /
COPY --from=build2 /output2 /
BuildKit Benefits:
- Parallel build stage execution
- Incremental builds
- Build cache import/export
- Rootless builds
- Improved security
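The cache import/export benefit can be wired into CI without a registry; a sketch using a local cache directory persisted via actions/cache (the path and cache key are illustrative):

```yaml
- name: Restore BuildKit cache
  uses: actions/cache@v4
  with:
    path: /tmp/.buildx-cache
    key: buildx-${{ github.sha }}
    restore-keys: buildx-
- name: Build using the local cache
  uses: docker/build-push-action@v5
  with:
    context: .
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache,mode=max
```

In practice the cache is often written to a fresh directory and moved into place afterwards so it does not grow unbounded across builds.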
Q39: How do you implement container health checks?
Answer:
Dockerfile Health Check:
FROM node:18-alpine
WORKDIR /app
COPY . .
# Simple HTTP health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js
# Or using curl
HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost:3000/health || exit 1
# Or custom script
HEALTHCHECK CMD /app/healthcheck.sh || exit 1
CMD ["node", "server.js"]
Health Check Script:
// healthcheck.js
const http = require('http');
const options = {
host: 'localhost',
port: 3000,
path: '/health',
timeout: 2000
};
const healthCheck = http.request(options, (res) => {
console.log(`STATUS: ${res.statusCode}`);
if (res.statusCode === 200) {
process.exit(0);
} else {
process.exit(1);
}
});
healthCheck.on('error', (err) => {
console.error('ERROR:', err);
process.exit(1);
});
healthCheck.end();
Docker Compose Health Checks:
version: '3.8'
services:
web:
image: myapp:latest
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
depends_on:
db:
condition: service_healthy
db:
image: postgres:15
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
environment:
POSTGRES_PASSWORD: password
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
Advanced Health Check Endpoint:
// Express.js health check
app.get('/health', async (req, res) => {
const healthcheck = {
uptime: process.uptime(),
message: 'OK',
timestamp: Date.now(),
checks: {}
};
try {
// Check database
await db.ping();
healthcheck.checks.database = 'OK';
// Check Redis
await redis.ping();
healthcheck.checks.redis = 'OK';
// Check external API
const apiResponse = await fetch('https://api.example.com/health');
healthcheck.checks.externalAPI = apiResponse.ok ? 'OK' : 'DEGRADED';
res.status(200).json(healthcheck);
} catch (error) {
healthcheck.message = error.message;
res.status(503).json(healthcheck);
}
});
Kubernetes Liveness and Readiness:
apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- name: app
image: myapp:latest
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /ready
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 3
Monitor Health Status:
# Check health status
docker ps
# Shows (healthy), (unhealthy), or (health: starting)
# View health check logs
docker inspect --format='{{json .State.Health}}' container_name | jq
# Watch health status
watch -n 1 'docker inspect --format="{{.State.Health.Status}}" container_name'
Q40: What are Docker init processes and why are they important?
Answer:
The Problem:
# Bad: node runs as PID 1 with no init process
FROM node:18-alpine
CMD ["node", "server.js"]
When a process runs as PID 1:
- It doesn’t handle signals properly (SIGTERM, SIGINT)
- Zombie processes aren’t reaped
- Graceful shutdown doesn’t work
Solutions:
1. Use tini or dumb-init:
FROM node:18-alpine
# Install tini
RUN apk add --no-cache tini
# Use tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
2. Use Docker’s --init flag:
docker run --init myapp:latest
Docker Compose:
services:
web:
image: myapp:latest
init: true
3. Proper Signal Handling in Application:
// server.js
const express = require('express');
const app = express();
const server = app.listen(3000, () => {
console.log('Server started on port 3000');
});
// Graceful shutdown
process.on('SIGTERM', () => {
console.log('SIGTERM signal received: closing HTTP server');
server.close(() => {
console.log('HTTP server closed');
// Close database connections
db.close(() => {
console.log('Database connection closed');
process.exit(0);
});
});
});
process.on('SIGINT', () => {
console.log('SIGINT signal received');
process.exit(0);
});
4. Use exec form of CMD:
# Bad: Shell form (creates unnecessary shell process)
CMD node server.js
# Good: Exec form
CMD ["node", "server.js"]
# Best: With init process
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
Testing Signal Handling:
# Start container
docker run -d --name test myapp
# Send SIGTERM
docker stop test
# Should see graceful shutdown logs
# Check exit code
docker inspect test --format='{{.State.ExitCode}}'
# Should be 0 for graceful shutdown
Q41: Explain Docker rootless mode and its benefits
Answer:
Rootless Docker allows running Docker daemon and containers without root privileges.
Installation:
# Install rootless Docker
curl -fsSL https://get.docker.com/rootless | sh
# Add to PATH
export PATH=$HOME/bin:$PATH
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
# Start daemon
systemctl --user start docker
# Enable on boot
systemctl --user enable docker
Benefits:
- Enhanced security (no root access needed)
- Mitigates container breakout risks
- Better for multi-tenant environments
- Compliance with security policies
- Isolation between users
Limitations:
- Can’t use privileged ports (< 1024) by default
- Some storage drivers not supported
- Performance overhead
- Limited cgroup support
Port Mapping Workaround:
# Use port > 1024
docker run -p 8080:80 nginx
# Or use sysctl (requires sudo once)
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
Docker Compose with Rootless:
version: '3.8'
services:
web:
image: nginx:alpine
ports:
- "8080:80" # Use high port
user: "1000:1000"
Check if running rootless:
docker info | grep -i rootless
# Output: Rootless: true
Q42: How do you implement Docker multi-stage builds for different languages?
Answer:
Node.js Multi-Stage Build:
# syntax=docker/dockerfile:1.4
# Stage 1: Dependencies
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Stage 2: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 3: Production
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nodejs
COPY --from=deps --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
USER nodejs
EXPOSE 3000
CMD ["node", "dist/server.js"]
Python Multi-Stage Build:
# Stage 1: Builder
FROM python:3.11-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc
# Install Python dependencies
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
# Stage 2: Production
FROM python:3.11-slim
WORKDIR /app
# Copy only wheels
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
# Install dependencies
RUN pip install --no-cache /wheels/*
# Copy application
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && \
chown -R appuser:appuser /app
USER appuser
CMD ["python", "app.py"]
Go Multi-Stage Build:
# Stage 1: Build
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source
COPY . .
# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Stage 2: Production
FROM alpine:3.18
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy binary from builder
COPY --from=builder /app/main .
# Run as non-root
RUN adduser -D appuser
USER appuser
CMD ["./main"]
Java Multi-Stage Build:
# Stage 1: Build with Maven
FROM maven:3.9-eclipse-temurin-17 AS builder
WORKDIR /app
# Copy pom.xml and download dependencies
COPY pom.xml .
RUN mvn dependency:go-offline
# Copy source and build
COPY src ./src
RUN mvn package -DskipTests
# Stage 2: Runtime with JRE
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
# Copy jar from builder
COPY --from=builder /app/target/*.jar app.jar
# Create non-root user
RUN addgroup -S spring && adduser -S spring -G spring
USER spring
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
React/Next.js Multi-Stage Build:
# Stage 1: Dependencies
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Stage 2: Builder
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Stage 3: Production
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
# Copy built assets
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
Size Comparison:
# Single-stage Go build
FROM golang:1.21
# Final size: ~800MB
# Multi-stage Go build
FROM golang:1.21 AS builder
FROM alpine:3.18
# Final size: ~15MB
Q43: How do you debug containers that won’t start?
Answer:
Common Debugging Techniques:
1. View Container Logs:
# View logs of stopped container
docker logs container_name
# Follow logs in real-time
docker logs -f container_name
# Last 100 lines
docker logs --tail 100 container_name
# With timestamps
docker logs -t container_name
2. Inspect Container:
# Full container details
docker inspect container_name
# Specific field
docker inspect --format='{{.State.ExitCode}}' container_name
docker inspect --format='{{.State.Error}}' container_name
# Network settings
docker inspect --format='{{json .NetworkSettings}}' container_name | jq
3. Override Entrypoint:
# Start with shell instead of normal command
docker run -it --entrypoint /bin/sh myapp
# Or bash
docker run -it --entrypoint /bin/bash myapp
# Docker Compose
docker-compose run --entrypoint /bin/sh web
4. Check for Missing Dependencies:
# Run and check what's missing
docker run -it myapp /bin/sh
> ls -la
> which node
> echo $PATH
> env
5. File Permission Issues:
# Add debugging
FROM node:18-alpine
WORKDIR /app
COPY . .
# Check permissions
RUN ls -la /app
RUN whoami
RUN id
CMD ["node", "server.js"]
6. Use docker events:
# Monitor Docker events in real-time
docker events
# Filter by container
docker events --filter container=myapp
# Save to file
docker events > events.log &
docker run myapp
# Check events.log
7. Health Check Debugging:
# See health check results
docker inspect --format='{{json .State.Health}}' container_name | jq
# Run health check manually
docker exec container_name curl -f http://localhost:3000/health
8. Common Issues and Solutions:
Issue: Port already in use
# Check what's using the port
sudo lsof -i :3000
# Change host port
docker run -p 3001:3000 myapp
Issue: Volume mount permission denied
# Check volume permissions
docker run -v mydata:/data alpine ls -la /data
# Fix permissions
docker run -v mydata:/data alpine chown -R 1000:1000 /data
Issue: Network connectivity
# Test DNS
docker run alpine nslookup google.com
# Test network
docker run --network host alpine ping 8.8.8.8
9. Build-time Debugging:
# Add debugging steps
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Debug: Show files
RUN ls -la
# Debug: Check npm
RUN npm --version
RUN npm install
# Debug: Show installed packages
RUN ls -la node_modules
COPY . .
# Debug: Show all files
RUN find . -type f
CMD ["node", "server.js"]
10. Use docker-compose logs:
# All services
docker-compose logs
# Specific service
docker-compose logs web
# Follow and timestamps
docker-compose logs -f -t web
Q44: What are Docker contexts and how do you use them?
Answer:
Docker contexts allow switching between different Docker environments easily.
List Contexts:
# Show available contexts
docker context ls
# Current context
docker context show
Create Context:
# For remote Docker host
docker context create remote-server \
--docker "host=ssh://user@remote-server"
# For Kubernetes (legacy: the Kubernetes context endpoint and
# --default-stack-orchestrator were removed in newer Docker releases)
docker context create k8s-context \
--kubernetes config-file=~/.kube/config \
--default-stack-orchestrator=kubernetes
Switch Context:
# Use context
docker context use remote-server
# Run command
docker ps
# Now shows containers on remote-server
# Switch back to default
docker context use default
Practical Use Cases:
1. Multi-Environment Management:
# Create contexts for different environments
docker context create dev --docker "host=ssh://dev-server"
docker context create staging --docker "host=ssh://staging-server"
docker context create prod --docker "host=ssh://prod-server"
# Deploy to different environments
docker context use dev
docker-compose up -d
docker context use prod
docker-compose -f docker-compose.prod.yml up -d
2. Cloud Provider Integration:
# Note: Docker's ECS and ACI integrations were retired in late 2023;
# shown here for historical context
# AWS ECS
docker context create ecs-context --ecs
docker context use ecs-context
docker compose up
# Azure ACI
docker context create aci-context --aci
docker context use aci-context
docker compose up
3. Local and Remote Development:
# Local development
docker context use default
docker-compose up
# Test on remote server
docker context use remote-dev
docker-compose up -d
4. SSH Configuration:
# ~/.ssh/config
Host docker-remote
HostName 192.168.1.100
User dockeruser
IdentityFile ~/.ssh/docker-key
# Create context using SSH config
docker context create remote --docker "host=ssh://docker-remote"
5. Context with Environment Variables:
# Use with environment variable
export DOCKER_CONTEXT=staging
docker ps
# Or per-command
DOCKER_CONTEXT=prod docker ps
6. Inspect Context:
# View context details
docker context inspect remote-server
# Export context
docker context export remote-server
# Creates remote-server.dockercontext file
# Import context
docker context import remote-server remote-server.dockercontext
Q45: How do you implement container restart policies and handle crashes?
Answer:
Restart Policies:
1. Docker Run:
# No restart (default)
docker run --restart=no myapp
# Always restart
docker run --restart=always myapp
# Restart unless stopped manually
docker run --restart=unless-stopped myapp
# Restart on failure (max 5 times)
docker run --restart=on-failure:5 myapp
2. Docker Compose:
version: '3.8'
services:
web:
image: myapp:latest
restart: unless-stopped
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
worker:
image: worker:latest
restart: always
batch:
image: batch:latest
restart: "no"
3. Swarm Mode:
services:
web:
image: myapp:latest
deploy:
replicas: 3
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
update_config:
failure_action: rollback
monitor: 60s
Handle Application Crashes:
1. Implement Graceful Shutdown:
// Node.js example
const express = require('express');
const app = express();
const server = app.listen(3000);
let isShuttingDown = false;
// Health check endpoint
app.get('/health', (req, res) => {
if (isShuttingDown) {
res.status(503).send('Shutting down');
} else {
res.status(200).send('OK');
}
});
// Graceful shutdown
function gracefulShutdown(signal) {
console.log(`${signal} received, starting graceful shutdown`);
isShuttingDown = true;
// Stop accepting new requests
server.close(() => {
console.log('HTTP server closed');
// Close database connections
mongoose.connection.close(false, () => {
console.log('MongoDB connection closed');
process.exit(0);
});
});
// Force shutdown after 30s
setTimeout(() => {
console.error('Forced shutdown after timeout');
process.exit(1);
}, 30000);
}
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
2. Crash Recovery Script:
#!/bin/bash
# monitor.sh
CONTAINER_NAME="myapp"
MAX_RESTARTS=10
RESTART_COUNT=0
while true; do
if ! docker ps | grep -q $CONTAINER_NAME; then
echo "Container $CONTAINER_NAME is not running"
if [ $RESTART_COUNT -lt $MAX_RESTARTS ]; then
echo "Restarting container (attempt $((RESTART_COUNT+1)))"
docker start $CONTAINER_NAME
RESTART_COUNT=$((RESTART_COUNT+1))
# Send alert
curl -X POST https://alerts.example.com/webhook \
-d "Container $CONTAINER_NAME restarted"
else
echo "Max restart attempts reached, sending critical alert"
# Send critical alert
exit 1
fi
fi
sleep 10
done
3. Monitor and Alert:
# docker-compose.monitoring.yml
version: '3.8'
services:
  app:
    image: myapp:latest
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    labels:
      - "com.example.alert=true"
  monitor:
    image: gcr.io/cadvisor/cadvisor:latest  # google/cadvisor on Docker Hub is no longer maintained
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  alertmanager:
    image: prom/alertmanager:latest
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
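The health status produced by the `healthcheck` above can be read directly from the engine, which is handy when wiring up alerts. A sketch (the container name `app` assumes the Compose service above):

```shell
# Current health status: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' app

# Recent probe results, including exit codes and command output
docker inspect --format '{{json .State.Health.Log}}' app
```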
4. Use Process Managers:
FROM node:18-alpine
# Install PM2
RUN npm install -g pm2
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# PM2 with cluster mode
CMD ["pm2-runtime", "start", "ecosystem.config.js"]
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',
    script: './server.js',
    instances: 'max',
    exec_mode: 'cluster',
    max_restarts: 10,
    min_uptime: '10s',
    max_memory_restart: '1G',
    error_file: '/dev/null',
    out_file: '/dev/null',
    merge_logs: true,
    autorestart: true
  }]
};
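Here `pm2-runtime` runs as PID 1 and handles signal forwarding and child reaping itself. For apps without a process manager, Docker can inject a minimal init (tini) to do the same job, as discussed under Q40. A sketch (container/image names illustrative):

```shell
# Run tini as PID 1 to reap zombies and forward signals to the app
docker run -d --init --name myapp myapp:latest
```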
Conclusion and Interview Tips
Final Tips for Docker Interviews in 2026:
1. Demonstrate Practical Experience:
- Share real-world scenarios you’ve encountered
- Explain trade-offs you’ve made
- Discuss how you’ve optimized Docker in production
2. Security First:
- Always mention security best practices
- Discuss image scanning and vulnerability management
- Explain least privilege principles
3. Performance Awareness:
- Know image optimization techniques
- Understand resource management
- Explain monitoring and debugging approaches
4. Cloud-Native Mindset:
- Understand orchestration (Swarm vs Kubernetes)
- Know CI/CD integration patterns
- Discuss multi-cloud strategies
5. Stay Updated:
- Follow Docker blog and release notes
- Know the latest features (BuildKit, rootless, etc.)
- Understand industry trends
Common Follow-up Questions Interviewers Ask:
- “Have you used this in production?”
- “What challenges did you face?”
- “How would you scale this?”
- “What’s your monitoring strategy?”
- “How do you handle secrets?”
Resources for Further Learning:
- Docker Official Documentation
- Docker Mastery Course
- Kubernetes Documentation
- DevOps blogs and communities
- GitHub Actions marketplace
Key Takeaways:
- Fundamentals Matter: Know Docker architecture inside-out
- Security is Critical: Always implement best practices
- Optimization is Key: Small images, fast builds
- Monitoring is Essential: Know your containers’ health
- CI/CD Integration: Automate everything
- Cloud-Native: Understand orchestration
- Troubleshooting Skills: Know how to debug
- Best Practices: Follow industry standards
Good luck with your Docker interviews in 2026! Remember to demonstrate not just knowledge, but understanding of when and why to use specific Docker features and patterns.

