Kubernetes Container Orchestration on Linux
Knowledge Overview
Prerequisites
Linux Administration, Networking Basics, YAML syntax, Container Concepts, Bash Scripting, Log Analysis
Time Investment
22 minutes reading time
44-66 minutes hands-on practice
Guide Content
Kubernetes is the automated management system that deploys, scales, and operates containerized applications across clusters of machines. As the industry-standard platform for container orchestration, Kubernetes (K8s) transforms how organizations deploy and manage cloud-native applications, providing self-healing, automatic scaling, and declarative configuration that greatly reduce manual intervention in production environments.
Table of Contents
- What is Kubernetes Container Orchestration?
- How Does Kubernetes Architecture Work?
- What are Kubernetes Pods and Why Do They Matter?
- How to Deploy Applications with Kubernetes Services?
- What is a Kubernetes Deployment?
- How to Install and Configure kubectl?
- How to Create Your First Kubernetes Pod?
- How to Scale Applications with ReplicaSets?
- What are Kubernetes Namespaces?
- How to Manage Kubernetes Cluster Resources?
- FAQ: Kubernetes Container Orchestration
- Troubleshooting Common Kubernetes Issues
- Additional Resources
What is Kubernetes Container Orchestration?
Kubernetes container orchestration automates the deployment, scaling, and management of containerized applications across distributed systems. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes provides a robust framework for running distributed systems resiliently, handling application scaling, failover, and deployment patterns.
Why Choose Kubernetes for Container Management?
Kubernetes solves critical challenges in container management. When you deploy applications on Kubernetes, the platform automatically handles container scheduling, load balancing, and service discovery. It also provides built-in mechanisms for rolling updates, enabling zero-downtime deployments across your infrastructure.
Key benefits of container orchestration include:
- Automated scheduling that places containers on optimal nodes based on resource requirements
- Self-healing capabilities that automatically replace and reschedule failed containers
- Horizontal scaling that adjusts application replicas based on CPU utilization or custom metrics
- Service discovery and load balancing that distributes network traffic efficiently
- Rolling updates and rollbacks for zero-downtime deployments
- Secret and configuration management for secure application configuration
Kubernetes integrates with existing DevOps tools and cloud providers, making it the de facto standard for modern application deployment. Organizations adopting it gain portability across cloud environments while maintaining consistent operational practices.
How Does Kubernetes Architecture Work?
Kubernetes has a distributed architecture comprising a control plane and worker nodes: the control plane manages cluster state while worker nodes run your containerized applications.
Control Plane Components
The control plane makes global decisions about the cluster and responds to cluster events. It consists of several critical components:
API Server (kube-apiserver)
The Kubernetes API server exposes the Kubernetes API and serves as the front end of the control plane. It validates and configures data for API objects including pods, services, and deployments.
# Check API server health
kubectl get --raw='/readyz?verbose'
# View API resources
kubectl api-resources
etcd - Distributed Key-Value Store
etcd stores all cluster data, providing consistent, highly available storage for the cluster's desired-state configuration and current-state information.
# View etcd cluster health (on control plane node)
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
Scheduler (kube-scheduler)
The scheduler watches for newly created pods and assigns them to nodes based on resource requirements, constraints, and available capacity.
# View scheduler events
kubectl get events --sort-by='.lastTimestamp' | grep Scheduled
Controller Manager (kube-controller-manager)
The controller manager runs controller processes that regulate cluster state. For instance, the Deployment controller manages ReplicaSets, while the Node controller monitors node health.
Worker Node Components
Worker nodes run your containerized applications. Each node contains:
Kubelet
The kubelet agent ensures containers are running in pods. It receives pod specifications from the API server and keeps the described containers running and healthy.
# Check kubelet status
sudo systemctl status kubelet
# View kubelet logs
sudo journalctl -u kubelet -f
Kube-proxy
kube-proxy maintains network rules on each node, implementing the Kubernetes Service abstraction by routing network traffic to your pods.
# View kube-proxy configuration
kubectl -n kube-system get configmap kube-proxy -o yaml
Container Runtime
The container runtime (containerd, CRI-O, or Docker Engine via the cri-dockerd adapter) actually runs the containers. Kubernetes supports any runtime that implements the Container Runtime Interface (CRI).
# Check container runtime
kubectl get nodes -o wide
# View running containers (using containerd)
sudo crictl ps
What are Kubernetes Pods and Why Do They Matter?
A pod is the smallest deployable unit in Kubernetes. Pods encapsulate one or more containers that share storage, network resources, and a specification for running the containers.
Understanding Pod Lifecycle
Pods follow a defined lifecycle from creation to termination:
- Pending: Pod accepted but containers not yet created
- Running: Pod bound to node and all containers created
- Succeeded: All containers terminated successfully
- Failed: All containers terminated with at least one failure
- Unknown: Pod state cannot be determined
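The terminal phases Succeeded and Failed are normally only reachable when the pod's restartPolicy allows containers to exit without being restarted; with the default Always, an exited container is simply restarted. A minimal sketch (the pod name and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never   # Always (the default) would restart the container indefinitely
  containers:
  - name: task
    image: busybox:1.35
    command: ['sh', '-c', 'echo done']   # exits 0, so the pod ends in Succeeded
```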
How to Create a Basic Pod
Let's create a simple nginx pod to demonstrate the fundamentals:
# nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Deploy the pod using kubectl:
# Create the pod
kubectl apply -f nginx-pod.yaml
# Verify pod creation
kubectl get pods
# View detailed pod information
kubectl describe pod nginx-pod
# Check pod logs
kubectl logs nginx-pod
# Execute commands inside the pod
kubectl exec -it nginx-pod -- /bin/bash
Multi-Container Pod Pattern
Pods can run multiple containers that work together. For example, the sidecar pattern deploys a helper container alongside the main application:
# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app-pod
spec:
  containers:
  - name: web-app
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-aggregator
    image: busybox:1.35
    command: ['sh', '-c', 'tail -f /var/log/nginx/access.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  volumes:
  - name: shared-logs
    emptyDir: {}
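Outside the cluster, the mechanics of the shared emptyDir can be sketched with an ordinary directory written by one process and read by another (purely illustrative, no Kubernetes involved):

```shell
# Simulate the shared emptyDir volume: one directory, two cooperating processes
dir=$(mktemp -d)
echo "GET / 200" >> "$dir/access.log"   # stands in for nginx writing the access log
tail -n 1 "$dir/access.log"            # stands in for the sidecar tailing it
rm -rf "$dir"                          # emptyDir is likewise deleted with the pod
```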
How to Deploy Applications with Kubernetes Services?
Kubernetes Services provide stable networking endpoints, enabling communication between different parts of your application and with external clients.
Service Types Explained
Kubernetes offers four service types, each serving specific networking requirements:
ClusterIP Service (Default)
ClusterIP exposes the service on an internal cluster IP, so it is reachable only from within the cluster:
# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
# Create the service
kubectl apply -f clusterip-service.yaml
# View service details
kubectl get service nginx-service
kubectl describe service nginx-service
# Test service from within cluster
kubectl run test-pod --image=busybox:1.35 --rm -it --restart=Never -- \
wget -qO- http://nginx-service
NodePort Service
NodePort exposes the service on each node's IP at a static port (by default in the 30000-32767 range):
# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
# Apply NodePort service
kubectl apply -f nodeport-service.yaml
# Get node IP
kubectl get nodes -o wide
# Access service (replace NODE_IP)
curl http://NODE_IP:30080
LoadBalancer Service
LoadBalancer provisions an external load balancer on supported cloud providers:
# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
ExternalName Service
ExternalName maps a service to an external DNS name by returning a CNAME record:
# externalname-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: db.example.com
What is a Kubernetes Deployment?
A Deployment provides declarative updates for pods and ReplicaSets. Deployments manage the creation and scaling of pods while continuously reconciling toward the desired state.
How to Create a Production Deployment
Create a deployment with multiple replicas for high availability:
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
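With replicas: 3, maxSurge: 1, and maxUnavailable: 1, a rollout runs at most four pods and keeps at least two available at any moment. The arithmetic, as a quick sanity check:

```shell
# Rolling-update pod-count bounds for the deployment above
replicas=3; maxSurge=1; maxUnavailable=1
echo "max pods during rollout: $((replicas + maxSurge))"           # 4
echo "min available during rollout: $((replicas - maxUnavailable))" # 2
```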
Deploy and manage the application:
# Create deployment
kubectl apply -f nginx-deployment.yaml
# Check deployment status
kubectl get deployments
kubectl rollout status deployment/nginx-deployment
# View deployment details
kubectl describe deployment nginx-deployment
# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5
# View pods created by deployment
kubectl get pods -l app=nginx
How to Perform Rolling Updates
Kubernetes enables zero-downtime updates through rolling deployments:
# Update container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.26
# Monitor rollout progress
kubectl rollout status deployment/nginx-deployment
# View rollout history
kubectl rollout history deployment/nginx-deployment
# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment
# Rollback to specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Pause rollout
kubectl rollout pause deployment/nginx-deployment
# Resume rollout
kubectl rollout resume deployment/nginx-deployment
How to Install and Configure kubectl?
kubectl is the Kubernetes command-line tool, enabling you to run commands against Kubernetes clusters.
Installing kubectl on Linux
Install kubectl using one of the following methods:
# Method 1: Download latest stable version
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Verify the binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
# Install kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify installation
kubectl version --client
# Method 2: Using package manager (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
# Method 3: Install via snap
sudo snap install kubectl --classic
Configuring kubectl Context
Configure kubectl to connect to your Kubernetes cluster:
# View kubectl configuration
kubectl config view
# Get current context
kubectl config current-context
# List all contexts
kubectl config get-contexts
# Switch context
kubectl config use-context my-cluster-context
# Set default namespace
kubectl config set-context --current --namespace=my-namespace
# Create new context
kubectl config set-context dev-context --cluster=development --user=developer
Essential kubectl Commands
Master these fundamental kubectl commands:
# Cluster information
kubectl cluster-info
kubectl get nodes
kubectl top nodes # Requires metrics-server
# Resource management
kubectl get all --all-namespaces
kubectl get pods -o wide
kubectl get services
kubectl get deployments
# Detailed information
kubectl describe pod <pod-name>
kubectl describe node <node-name>
# Resource creation
kubectl create -f manifest.yaml
kubectl apply -f manifest.yaml
kubectl apply -f ./manifests/ # Apply directory
# Resource deletion
kubectl delete pod <pod-name>
kubectl delete -f manifest.yaml
kubectl delete all --all # Danger: deletes everything in namespace
# Logs and debugging
kubectl logs <pod-name>
kubectl logs <pod-name> -f # Follow logs
kubectl logs <pod-name> -c <container-name> # Multi-container pod
kubectl logs --previous <pod-name> # Previous instance logs
# Execute commands
kubectl exec -it <pod-name> -- /bin/bash
kubectl exec <pod-name> -- ls /app
# Port forwarding
kubectl port-forward pod/<pod-name> 8080:80
kubectl port-forward service/<service-name> 8080:80
# Copy files
kubectl cp <pod-name>:/path/to/file ./local-file
kubectl cp ./local-file <pod-name>:/path/to/file
How to Create Your First Kubernetes Pod?
Let's walk through creating a complete application deployment.
Step 1: Create Application Deployment
First, create a complete web application deployment:
# webapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: default
  labels:
    app: webapp
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80
          name: http
        env:
        - name: ENVIRONMENT
          value: "production"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        return 200 'Hello from Kubernetes Container Orchestration!';
        add_header Content-Type text/plain;
      }
      location /health {
        access_log off;
        return 200 'healthy';
      }
      location /ready {
        access_log off;
        return 200 'ready';
      }
    }
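As a rule of thumb, a container that keeps failing its liveness probe is restarted after roughly initialDelaySeconds + periodSeconds × failureThreshold (failureThreshold defaults to 3 when unset, as in the manifest above):

```shell
# Worst-case time before the kubelet restarts a container failing its liveness probe
initialDelay=10; period=10; failureThreshold=3   # failureThreshold is the K8s default
echo "worst-case restart delay: $((initialDelay + period * failureThreshold))s"   # 40s
```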
Step 2: Deploy Application
Deploy the application to your cluster:
# Apply the deployment
kubectl apply -f webapp-deployment.yaml
# Verify deployment
kubectl get deployments webapp
kubectl get pods -l app=webapp
# Watch pod creation
kubectl get pods -l app=webapp -w
# Check deployment events
kubectl describe deployment webapp
Step 3: Expose Application with Service
Then, create a service to expose the application:
# webapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
# Apply service
kubectl apply -f webapp-service.yaml
# Get service details
kubectl get service webapp-service
kubectl describe service webapp-service
# Test the service
kubectl run test-pod --image=busybox:1.35 --rm -it --restart=Never -- \
wget -qO- http://webapp-service
Step 4: Verify Application Health
Finally, verify the deployment:
# Check pod logs
kubectl logs -l app=webapp --tail=50
# Test endpoints
kubectl get endpoints webapp-service
# Port forward to local machine
kubectl port-forward service/webapp-service 8080:80
# Test locally (in another terminal)
curl http://localhost:8080
How to Scale Applications with ReplicaSets?
ReplicaSets ensure a specified number of pod replicas is running at all times, providing high availability and load distribution.
Understanding ReplicaSet Functionality
ReplicaSets maintain pod replicas through continuous reconciliation: the ReplicaSet controller monitors the current state and adjusts it to match the desired state.
# View ReplicaSets
kubectl get replicasets
kubectl get rs # Short form
# Describe ReplicaSet
kubectl describe rs nginx-deployment-<hash>
# Scale ReplicaSet directly (not recommended)
kubectl scale rs nginx-deployment-<hash> --replicas=5
Horizontal Pod Autoscaling
The Horizontal Pod Autoscaler automatically scales pod replicas based on observed metrics:
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 30
      - type: Pods
        value: 2
        periodSeconds: 30
      selectPolicy: Max
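The HPA's core scaling rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). For example, 4 replicas averaging 90% CPU against the 70% target above scale to 6 (the replica counts and measured utilization here are assumed inputs):

```shell
# Integer ceiling form of the HPA scaling formula
current=4; cpu=90; target=70
echo "desired replicas: $(( (current * cpu + target - 1) / target ))"   # ceil(5.14) = 6
```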
# Create HPA
kubectl apply -f hpa.yaml
# View HPA status
kubectl get hpa
kubectl describe hpa webapp-hpa
# Watch HPA in action
kubectl get hpa webapp-hpa --watch
# Generate load to test autoscaling
kubectl run -it load-generator --rm --image=busybox:1.35 --restart=Never -- \
/bin/sh -c "while true; do wget -q -O- http://webapp-service; done"
What are Kubernetes Namespaces?
Namespaces provide virtual clusters within a physical cluster, enabling resource isolation and multi-tenancy. They organize resources and let you apply policies per team or project.
How to Create and Manage Namespaces
Create namespaces for environment separation:
# Create namespace imperatively
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production
# Create namespace declaratively
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: testing
  labels:
    environment: test
EOF
# List namespaces
kubectl get namespaces
kubectl get ns # Short form
# View namespace details
kubectl describe namespace development
# Delete namespace (WARNING: deletes all resources)
kubectl delete namespace testing
Working with Namespaced Resources
Deploy resources to specific namespaces:
# Deploy to specific namespace
kubectl apply -f deployment.yaml -n development
# Get resources from namespace
kubectl get pods -n development
kubectl get all -n production
# Set default namespace for context
kubectl config set-context --current --namespace=development
# View resources across all namespaces
kubectl get pods --all-namespaces
kubectl get pods -A # Short form
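Cross-namespace access relies on Kubernetes' service DNS convention, <service>.<namespace>.svc.<cluster-domain> (cluster.local by default). Constructing the name for a hypothetical webapp-service in the development namespace:

```shell
# Build the in-cluster DNS name a client in another namespace would use
svc=webapp-service; ns=development
echo "http://${svc}.${ns}.svc.cluster.local"
# prints http://webapp-service.development.svc.cluster.local
```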
Implementing Resource Quotas
Enforce resource limits using ResourceQuota and LimitRange:
# resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
    services: "10"
    persistentvolumeclaims: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
  namespace: development
spec:
  limits:
  - max:
      cpu: "2"
      memory: "4Gi"
    min:
      cpu: "100m"
      memory: "128Mi"
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "200m"
      memory: "256Mi"
    type: Container
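Kubernetes memory quantities use binary suffixes: Ki, Mi, and Gi are powers of 1024, while plain k, M, and G are decimal. The 20Gi request ceiling above therefore works out to:

```shell
# Gi is 1024^3 bytes, not 10^9
echo "20Gi = $((20 * 1024 * 1024 * 1024)) bytes"   # 21474836480 bytes
```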
# Apply quotas
kubectl apply -f resource-quota.yaml
# View resource quotas
kubectl get resourcequota -n development
kubectl describe resourcequota compute-quota -n development
# View limit ranges
kubectl get limitrange -n development
kubectl describe limitrange limit-range -n development
How to Manage Kubernetes Cluster Resources?
Effective cluster operation requires understanding resource management: compute resources, storage, and networking.
Managing Compute Resources
Define resource requests and limits for predictable performance:
# resource-management.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
      limits:
        memory: "512Mi"
        cpu: "1000m"
Requests vs limits:
- Requests: resources the scheduler reserves for the container when placing the pod
- Limits: the maximum resources the container may consume before being throttled (CPU) or OOM-killed (memory)
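CPU quantities use the m suffix for millicores, thousandths of a core, so the 500m request above reserves half a CPU:

```shell
# Convert millicores to cores
awk 'BEGIN { printf "500m = %.2f cores\n", 500/1000 }'   # 500m = 0.50 cores
```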
# View node resource allocation
kubectl top nodes
kubectl describe nodes | grep -A 5 "Allocated resources"
# View pod resource usage
kubectl top pods
kubectl top pods -n kube-system
# View resource requests across namespace
kubectl get pods -n default -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.cpu}{"\t"}{.spec.containers[*].resources.requests.memory}{"\n"}{end}'
Persistent Storage Management
Implement persistent storage using PersistentVolumes and PersistentVolumeClaims:
# persistent-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-storage
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: pvc-data
# View storage resources
kubectl get pv
kubectl get pvc
kubectl describe pv pv-data
# Check storage class
kubectl get storageclass
ConfigMaps and Secrets
Manage application configuration using ConfigMaps and Secrets:
# config-and-secrets.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    server.port=8080
    database.host=db.example.com
    log.level=INFO
  feature.flags: |
    feature.new_ui=true
    feature.beta_api=false
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-password: cGFzc3dvcmQxMjM= # base64 encoded
  api-key: YXBpa2V5MTIzNDU2Nzg5MA==
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: database-password
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config
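Secret values under data: must be base64-encoded by hand. The values in the manifest above can be produced and checked locally (use echo -n so no trailing newline is encoded):

```shell
echo -n 'password123' | base64           # prints cGFzc3dvcmQxMjM=
echo -n 'cGFzc3dvcmQxMjM=' | base64 -d   # prints password123
```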
# Create secret from literal
kubectl create secret generic db-secret \
--from-literal=username=admin \
--from-literal=password=secretpass123
# Create secret from file
kubectl create secret generic tls-secret \
--from-file=tls.crt=./server.crt \
--from-file=tls.key=./server.key
# View secrets (data hidden)
kubectl get secrets
kubectl describe secret app-secrets
# Decode secret
kubectl get secret app-secrets -o jsonpath='{.data.database-password}' | base64 -d
# Create ConfigMap from file
kubectl create configmap app-config --from-file=./config/
# View ConfigMap
kubectl get configmap app-config -o yaml
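To avoid manual encoding altogether, the Secret API also accepts a stringData field with plaintext values, which the API server base64-encodes on write; a minimal sketch (the secret name is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets-plain
type: Opaque
stringData:                      # plaintext here; stored base64-encoded in etcd
  database-password: password123
```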
FAQ: Kubernetes Container Orchestration
What is the difference between Docker and Kubernetes?
Docker is a containerization platform that packages applications into containers, while Kubernetes orchestrates many containers across distributed systems. Essentially, Docker creates containers and Kubernetes manages them at scale, handling deployment, scaling, networking, and lifecycle management across clusters of machines.
How does Kubernetes achieve high availability?
Kubernetes achieves high availability through multiple mechanisms. The control plane can run across multiple nodes with leader election, while applications use ReplicaSets to maintain multiple pod replicas. Services provide stable endpoints with automatic failover, and the platform continuously monitors and replaces failed components.
What is the smallest unit in Kubernetes?
The pod is the smallest deployable unit in Kubernetes. A pod encapsulates one or more containers with shared storage and network resources, running as a single instance on a node. Containers within a pod share the same IP address and port space.
How do I monitor Kubernetes cluster health?
Monitor cluster health using multiple approaches. First, install metrics-server for resource metrics (kubectl top nodes/pods). Additionally, deploy Prometheus and Grafana for comprehensive monitoring, use kubectl commands to check component status, and implement liveness and readiness probes in your applications for automated health checking.
Can Kubernetes run on a single node?
Yes, Kubernetes supports single-node deployments using tools like Minikube, kind (Kubernetes in Docker), or K3s. However, single-node setups sacrifice high availability, so production environments typically require multi-node clusters for redundancy, while development and testing often use single-node configurations.
What are Kubernetes labels and selectors?
Labels are key-value pairs attached to Kubernetes objects for organization and selection, and selectors query objects based on label criteria. For instance, Services use label selectors to identify which pods receive traffic, while Deployments use them to manage pod templates. Labels thus enable flexible resource grouping and management.
How does Kubernetes handle secrets?
Kubernetes stores secrets as base64-encoded data in etcd and supports encryption at rest for enhanced security. Applications can consume secrets as environment variables or mounted files. Remember, though, that base64 encoding is not encryption; implement additional measures such as RBAC policies and encryption providers for sensitive data.
What is a Kubernetes Ingress?
Ingress provides HTTP and HTTPS routing to services within a Kubernetes cluster. Acting as a layer 7 load balancer, it manages external access with features including TLS termination, virtual hosting, and path-based routing, consolidating routing rules and reducing the number of external load balancers required.
How do I upgrade a Kubernetes cluster?
Upgrade clusters in stages: first, back up etcd data and critical resources. Then upgrade the control plane components (API server, controller manager, scheduler) one minor version at a time, followed by worker nodes, using cordon and drain to safely migrate pods. Verify cluster functionality after each stage.
What is the difference between StatefulSets and Deployments?
Deployments manage stateless applications where pods are interchangeable and can be replaced freely. Conversely, StatefulSets maintain stable pod identities with persistent storage and ordered deployment. Specifically, StatefulSets provide stable network identifiers and ordered scaling for stateful applications like databases requiring persistent state across pod restarts.
Troubleshooting Common Kubernetes Issues
Pod Stuck in Pending State
Symptoms: Pods remain in Pending status indefinitely
Diagnosis:
# Check pod events
kubectl describe pod <pod-name>
# Check node resources
kubectl describe nodes
# View scheduler logs
kubectl logs -n kube-system -l component=kube-scheduler
Common Causes:
- Insufficient cluster resources (CPU, memory)
- Unsatisfiable node selectors or affinity rules
- PersistentVolumeClaim not bound
- Image pull errors
Solutions:
# Scale down other deployments
kubectl scale deployment <other-deployment> --replicas=1
# Add nodes to cluster
# (cloud provider specific)
# Check and fix PVC
kubectl get pvc
kubectl describe pvc <pvc-name>
# Relax pod constraints
kubectl edit pod <pod-name>
# Remove or modify nodeSelector, affinity rules
CrashLoopBackOff Error
Symptoms: Pods continuously restart with increasing backoff delays
Diagnosis:
# View pod status
kubectl get pods
# Check pod logs
kubectl logs <pod-name>
kubectl logs <pod-name> --previous
# Describe pod for events
kubectl describe pod <pod-name>
# Execute into running container
kubectl exec -it <pod-name> -- /bin/sh
Common Causes:
- Application crashes on startup
- Missing environment variables
- Failed health check probes
- Insufficient resources
Solutions:
# Check application logs
kubectl logs <pod-name> --tail=100
# Verify environment configuration
kubectl get pod <pod-name> -o yaml | grep -A 10 env:
# Adjust resource limits
kubectl edit deployment <deployment-name>
# Increase memory/CPU limits
# Temporarily remove the liveness probe for debugging
# (kubectl has no "set probe" subcommand; patch the deployment instead)
kubectl patch deployment <deployment-name> --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
ImagePullBackOff Error
Symptoms: Pods cannot pull container images
Diagnosis:
# Check pod events
kubectl describe pod <pod-name>
# Verify image name
kubectl get pod <pod-name> -o yaml | grep image:
# Check image pull secrets
kubectl get secrets
kubectl describe secret <image-pull-secret>
Solutions:
# Verify image exists
docker pull <image-name>
# Create image pull secret
kubectl create secret docker-registry regcred \
--docker-server=<registry-url> \
--docker-username=<username> \
--docker-password=<password> \
--docker-email=<email>
# Add secret to service account
kubectl patch serviceaccount default -p \
'{"imagePullSecrets": [{"name": "regcred"}]}'
# Or specify in pod spec
kubectl edit deployment <deployment-name>
# Add: spec.template.spec.imagePullSecrets
Service Not Accessible
Symptoms: Cannot reach application through Service endpoint
Diagnosis:
# Check service configuration
kubectl get service <service-name>
kubectl describe service <service-name>
# Verify endpoints
kubectl get endpoints <service-name>
# Check pod labels
kubectl get pods --show-labels
# Test service DNS
kubectl run test-pod --image=busybox:1.35 --rm -it --restart=Never -- \
nslookup <service-name>
Solutions:
# Verify label selectors match
kubectl get service <service-name> -o yaml | grep selector -A 2
kubectl get pods -l <label-key>=<label-value>
# Check pod status
kubectl get pods -o wide
# Verify network policies
kubectl get networkpolicies
# Test connectivity
kubectl run test-pod --image=busybox:1.35 --rm -it --restart=Never -- \
wget -qO- http://<service-name>:<port>
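The most common cause of empty endpoints is a selector that does not match the pod labels; a Service only routes to pods whose labels satisfy every key in spec.selector. A second frequent culprit is a targetPort that differs from the port the container actually listens on. A hedged example with placeholder names:

```yaml
# The Service selector (app: web) must match the Deployment's pod labels exactly
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # must equal spec.template.metadata.labels in the Deployment
  ports:
  - port: 80          # port clients connect to on the Service
    targetPort: 8080  # containerPort the pods actually listen on
```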
Node NotReady Status
Symptoms: Worker nodes show NotReady status
Diagnosis:
# Check node status
kubectl get nodes
kubectl describe node <node-name>
# Check kubelet status on node
ssh <node>
sudo systemctl status kubelet
sudo journalctl -u kubelet -n 100
# Check node resources
kubectl top node <node-name>
Solutions:
# Restart kubelet
sudo systemctl restart kubelet
# Check disk space
df -h
# Reclaim space (nodes using the Docker runtime)
sudo docker system prune -a
# On containerd-based nodes, prune unused images instead
sudo crictl rmi --prune
# Verify network connectivity
ping <api-server-ip>
# Rejoin node to cluster (if needed)
# On the control plane, generate a fresh join command
kubeadm token create --print-join-command
# Then run the printed join command on the worker node
DNS Resolution Failures
Symptoms: Pods cannot resolve service names
Diagnosis:
# Check CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# View CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns
# Test DNS from pod
kubectl run test-dns --image=busybox:1.35 --rm -it --restart=Never -- \
nslookup kubernetes.default
Solutions:
# Restart CoreDNS
kubectl rollout restart -n kube-system deployment/coredns
# Check CoreDNS ConfigMap
kubectl get configmap coredns -n kube-system -o yaml
# Verify kube-dns service
kubectl get service kube-dns -n kube-system
# Check network policies
kubectl get networkpolicies --all-namespaces
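When inspecting the ConfigMap, compare it against the stock Corefile. A typical kubeadm default looks roughly like the following (exact plugins vary by Kubernetes and CoreDNS version); watch especially for a missing or altered kubernetes or forward plugin:

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```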
Persistent Volume Issues
Symptoms: PersistentVolumeClaims stuck in Pending
Diagnosis:
# Check PVC status
kubectl get pvc
kubectl describe pvc <pvc-name>
# View available PVs
kubectl get pv
# Check storage class
kubectl get storageclass
Solutions:
# Create matching PersistentVolume
kubectl apply -f pv-definition.yaml
# Verify access modes compatibility
kubectl get pv -o custom-columns=NAME:.metadata.name,ACCESS:.spec.accessModes
# Check storage class availability
kubectl get storageclass
# Force PVC deletion if stuck
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
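A PVC only binds when a PV (or a dynamic provisioner) satisfies its storageClassName, accessModes, and requested size. A minimal hostPath pair for single-node testing, with placeholder names, paths, and sizes:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual   # must match the claim below
  hostPath:
    path: /mnt/data          # suitable for single-node/test clusters only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce            # must be satisfiable by the PV
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi           # must not exceed the PV capacity
```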
Additional Resources
Official Kubernetes Documentation
- Kubernetes Official Documentation - Comprehensive Kubernetes container orchestration guide
- Kubernetes API Reference - Complete API documentation
- kubectl Reference - Command-line tool documentation
- Kubernetes GitHub Repository - Source code and issue tracking
Learning Resources
- Kubernetes Tutorials - Interactive hands-on tutorials
- Play with Kubernetes - Free online Kubernetes playground
- Kubernetes the Hard Way - Manual cluster setup guide
- CNCF Cloud Native Interactive Landscape - Kubernetes ecosystem overview
Container Orchestration Tools
- Minikube - Local Kubernetes development environment
- kind (Kubernetes in Docker) - Kubernetes clusters in Docker containers
- K3s - Lightweight Kubernetes distribution
- MicroK8s - Canonical's minimal Kubernetes
Monitoring and Observability
- Prometheus - Monitoring and alerting toolkit
- Grafana - Observability and visualization platform
- Kubernetes Dashboard - Web-based cluster management UI
- Lens - Kubernetes IDE and management platform
Security Resources
- CIS Kubernetes Benchmark - Security configuration guidelines
- Kubernetes Security Best Practices - Official security documentation
- Falco - Runtime security and threat detection
- Open Policy Agent (OPA) - Policy-based access control
Related LinuxTips.pro Articles
- Docker Fundamentals: Containers vs Virtual Machines - Understanding container technology basics
- Docker Image Management and Optimization - Efficient container image creation
- Docker Networking and Volumes - Container connectivity and storage
- Kubernetes Cluster Setup with kubeadm - Production cluster deployment guide
- Linux Security Essentials: Hardening Your System - System security fundamentals
- System Performance Monitoring with top and htop - Resource monitoring techniques
Kubernetes container orchestration revolutionizes how organizations deploy and manage containerized applications at scale. By understanding the architecture, mastering kubectl commands, and applying the best practices covered in this guide, you now have the foundation to deploy production-ready applications with Kubernetes on Linux systems. Continue your Kubernetes journey by exploring advanced topics such as custom resource definitions, operators, service mesh integration, and multi-cluster management to fully leverage container orchestration in modern cloud-native environments.