Introduction
Kubernetes has become the de facto standard for running containerized applications at scale. But the ecosystem can feel overwhelming: pods, deployments, services, ingress, operators, Helm, RBAC—the list goes on. This in-depth tutorial takes you from zero to hero with clear explanations, hands-on examples, and practical guidance. You’ll learn the mental model, set up a local cluster, deploy and expose applications, manage configuration and storage, scale reliably, secure your workloads, observe and debug issues, and ship to production using modern workflows.
What you’ll build and learn:
- A working local Kubernetes cluster (kind or minikube)
- A complete app stack: Deployment + Service + Ingress + HPA
- Configuration and secrets management
- Persistent storage with PVCs
- Best-practice security, scheduling, and reliability patterns
- Observability with metrics and logs
- Packaging and GitOps deployment options
- A curated list of high-quality resources to go further
Note: Examples target a recent Kubernetes release. Always check the version-specific docs if something looks different in your environment.
Table of Contents
- What Is Kubernetes (and Why)?
- Core Architecture and Mental Model
- Setting Up a Local Cluster
- Tooling Essentials
- Your First App: Deploy, Expose, Scale
- Config and Secrets
- Storage and Stateful Workloads
- Scaling, Reliability, and Updates
- Scheduling 101
- Security Fundamentals
- Networking Deep Dive
- Observability and Debugging
- Packaging and Release: Manifests, Kustomize, Helm, and GitOps
- Cost and Multi-Cluster Considerations
- Cleanup and Next Steps
- Resources
- Conclusion
What Is Kubernetes (and Why)?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It provides:
- A declarative API to define desired state
- Orchestration of containers into pods and services
- Automated placement, scaling, healing, and rollout strategies
- Extensibility via controllers, CRDs, and operators
The key value: you tell Kubernetes what you want; controllers continuously reconcile the actual cluster state to match.
Core Architecture and Mental Model
Control Plane Components
- kube-apiserver: Front door to the cluster; all requests go through the API server.
- etcd: Distributed key-value store holding cluster state.
- kube-scheduler: Assigns pods to nodes based on resource availability and constraints.
- kube-controller-manager: Runs core controllers (e.g., Deployment, StatefulSet, Job) that implement reconciliation loops.
- cloud-controller-manager: Integrates with cloud provider resources (when applicable).
Node Components
- kubelet: Agent on each node; manages pods, talks to the API server and container runtime.
- container runtime: Runs containers (containerd, CRI-O).
- kube-proxy or CNI datapath: Implements Service networking; some CNIs (e.g., Cilium, Calico) can replace kube-proxy with their own data plane.
Declarative Model and Reconciliation
- You apply YAML manifests describing desired state (spec).
- Controllers watch actual state and make changes to converge toward the desired state.
- Labels and selectors connect resources (e.g., Service selects pods via labels).
- Events and status sections reflect what the cluster is doing.
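To make reconciliation concrete, here is a quick sketch you can try once you have a cluster and the demo app from later sections (hello-api is that Deployment's name): declare a new desired state, then watch the controllers converge.
# Declare a new desired state (4 replicas)...
kubectl scale deployment/hello-api --replicas=4
# ...and watch the Deployment controller reconcile toward it
kubectl get pods -l app=hello-api -w
kubectl get deployment hello-api -o jsonpath='{.status.readyReplicas}'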
Setting Up a Local Cluster
Choose one:
Option A: kind (Kubernetes in Docker)
- Fast, reproducible, great for CI and local dev.
# Install kind: https://kind.sigs.k8s.io/
kind create cluster --name dev --image kindest/node:v1.31.0
kubectl cluster-info
kubectl get nodes -o wide
Option B: minikube
- Batteries-included addons (metrics-server, ingress).
# Install minikube: https://minikube.sigs.k8s.io/
minikube start --kubernetes-version=stable
minikube addons enable metrics-server
minikube addons enable ingress
Option C: Docker Desktop/Rancher Desktop
- Enable “Kubernetes” in settings.
Tip: For kind, you’ll install metrics-server and an ingress controller manually in later steps.
Tooling Essentials
- kubectl: CLI to interact with the cluster.
- kubeconfig and contexts:
  - List contexts: kubectl config get-contexts
  - Switch: kubectl config use-context <context-name>
- Namespaces: Logical isolation; use them early.
- Helpful tools:
  - krew (kubectl plugin manager): https://krew.sigs.k8s.io
  - kubectx/kubens for fast context/namespace switching: https://github.com/ahmetb/kubectx
  - Lens or OpenLens IDE: https://k8slens.dev
  - Stern for log tailing across pods: https://github.com/stern/stern
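A quick sketch of everyday context and namespace switching (the context name kind-dev assumes the kind cluster named dev created above; kind prefixes context names with kind-):
kubectl config get-contexts
kubectl config use-context kind-dev
kubectl create namespace sandbox
kubectl config set-context --current --namespace=sandbox
kubectl config view --minify -o 'jsonpath={..namespace}'   # confirm the active namespace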
Your First App: Deploy, Expose, Scale
We’ll deploy a simple HTTP API, expose it via Service and Ingress, and autoscale it.
Create a Namespace
kubectl create ns demo
kubectl config set-context --current --namespace=demo
Deployment and Service
The Deployment runs pods; the Service provides stable networking.
app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-api
  labels:
    app: hello-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-api
  template:
    metadata:
      labels:
        app: hello-api
    spec:
      containers:
        - name: hello-api
          image: ghcr.io/traefik/whoami:v1.10.2
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "50m"
              memory: "64Mi"
            limits:
              cpu: "250m"
              memory: "128Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-api
spec:
  type: ClusterIP
  selector:
    app: hello-api
  ports:
    - name: http
      port: 80
      targetPort: 80
Apply it:
kubectl apply -f app.yaml
kubectl get deploy,po,svc -l app=hello-api
kubectl describe deploy hello-api
Note: The “whoami” image is a simple HTTP server; in real projects you’ll use your own images.
Expose with Ingress
Ingress provides HTTP routing and TLS termination via an ingress controller.
- minikube users: enable ingress addon (done above).
- kind users: install NGINX ingress controller:
# NGINX Ingress Controller for kind (official YAML; consider pinning a release tag instead of main)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl -n ingress-nginx wait --for=condition=Available deployment/ingress-nginx-controller --timeout=120s
Create an Ingress:
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: hello.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svc
                port:
                  number: 80
Apply and test:
kubectl apply -f ingress.yaml
# For local resolution, map host to ingress IP
# minikube:
minikube ip
# kind: use host networking or map 127.0.0.1 (see docs)
# Add to /etc/hosts (macOS/Linux)
# 127.0.0.1 hello.local
# minikube users: use the IP from `minikube ip` instead of 127.0.0.1
curl -H "Host: hello.local" http://127.0.0.1/
Tip: On minikube, you can also run minikube tunnel to get LoadBalancer services locally.
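A hedged sketch of that flow (assumes the hello-svc Service created above):
# Terminal 1: start the tunnel (may prompt for sudo)
minikube tunnel
# Terminal 2: switch the Service to type LoadBalancer and watch for an external IP
kubectl patch svc hello-svc -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc hello-svc -w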
Autoscale with HPA
Horizontal Pod Autoscaler adjusts replicas based on metrics.
- Ensure metrics-server is installed:
  - minikube: the addon was enabled earlier.
  - kind:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# On kind, metrics-server may fail readiness until you add --kubelet-insecure-tls to its container args
kubectl -n kube-system get deploy metrics-server
- Create an HPA targeting 50% average CPU utilization:
hpa.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-api
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
kubectl apply -f hpa.yaml
kubectl get hpa hello-hpa -w
Generate load to see scaling:
# Simple load test
kubectl run -it --rm load --image=busybox -- /bin/sh -c "while true; do wget -qO- http://hello-svc.demo.svc.cluster.local/ > /dev/null; done"
Config and Secrets
Use a ConfigMap for non-sensitive config and a Secret for sensitive data (Secret values are only Base64-encoded, not encrypted; consider external secret stores for production).
config-secret.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  WELCOME_MSG: "Hello from ConfigMap"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  API_KEY: "super-secret-key"
Wire them into the Deployment as env vars and a mounted volume (this partial manifest merges into the existing hello-api when applied):
patch-config.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-api
spec:
  template:
    spec:
      containers:
        - name: hello-api
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: LOG_LEVEL
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: API_KEY
          volumeMounts:
            - name: config-vol
              mountPath: /etc/app
              readOnly: true
      volumes:
        - name: config-vol
          configMap:
            name: app-config
Apply:
kubectl apply -f config-secret.yaml
kubectl apply -f patch-config.yaml
Important: Encrypt secrets at rest and in transit; avoid committing them to Git. Use external secret operators (e.g., External Secrets) to sync from cloud secret managers.
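For illustration, a hedged sketch of what that looks like with the External Secrets Operator; it assumes the operator is installed and that a SecretStore named cloud-store already points at your secret manager (both names are placeholders, and the API version may differ by operator release):
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secret-sync
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: cloud-store          # placeholder: a SecretStore you configure separately
  target:
    name: app-secret           # the Kubernetes Secret to create and keep in sync
  data:
    - secretKey: API_KEY
      remoteRef:
        key: prod/app/api-key  # illustrative path in your secret manager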
Storage and Stateful Workloads
Kubernetes uses PersistentVolume (PV) and PersistentVolumeClaim (PVC) for storage. A StorageClass provisions volumes dynamically.
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  # storageClassName omitted: the cluster's default StorageClass is used
  resources:
    requests:
      storage: 1Gi
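Apply it and check provisioning (kind and minikube both ship a default StorageClass, so dynamic provisioning works out of the box):
kubectl apply -f pvc.yaml
kubectl get storageclass
kubectl get pvc data-pvc   # may show Pending until a pod mounts it (WaitForFirstConsumer binding)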
Mount the PVC (example):
deploy-with-volume.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo $(date) >> /data/log.txt; sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-pvc
If you need stable network identity and ordered startup/shutdown (e.g., databases), use StatefulSet:
statefulset-example.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: "db"
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: API_KEY   # demo shortcut: reuses the earlier secret; use a dedicated one in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
Caution: Running stateful databases on Kubernetes is possible but requires careful planning (storage performance, backup/restore, failover). Consider managed DBs in production unless you have strong reasons and expertise.
Scaling, Reliability, and Updates
- Resource requests/limits: Requests drive scheduling; limits cap usage.
- Probes: liveness/readiness/startup to ensure correct lifecycle management.
- Rolling updates/rollbacks (a strategy-tuning sketch follows this list):
  - kubectl rollout status deploy/hello-api
  - kubectl rollout undo deploy/hello-api
- PodDisruptionBudget (PDB): Maintain availability during maintenance.
pdb.yaml:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: hello-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: hello-api
- PriorityClasses: Influence scheduling and preemption under resource pressure.
- Cluster autoscaler (cloud): Adjusts node count based on pending pods.
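A hedged sketch of tuning a Deployment's rolling update (values are illustrative; pairing maxUnavailable: 0 with a solid readinessProbe gives zero-downtime rollouts):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow one extra pod during the roll
      maxUnavailable: 0    # never drop below the desired replica count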
Scheduling 101
- Taints and tolerations: Repel pods from nodes unless tolerations are present.
- Node affinity/anti-affinity: Hard/soft constraints on node selection.
- Pod topology spread constraints: Evenly distribute across zones/nodes.
example-scheduling.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduled-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: scheduled
  template:
    metadata:
      labels:
        app: scheduled
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: "kubernetes.io/hostname"
                labelSelector:
                  matchLabels:
                    app: scheduled
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: scheduled
      tolerations:
        - key: "workload"
          operator: "Equal"
          value: "batch"
          effect: "NoSchedule"
      containers:
        - name: app
          image: nginx:1.27
Security Fundamentals
- Namespaces and RBAC:
  - Use least privilege via Roles/ClusterRoles and RoleBindings.
- ServiceAccounts:
  - Associate pods with dedicated ServiceAccounts, not default.
- NetworkPolicies:
  - Deny-by-default and explicitly allow required traffic.
- Pod Security Admission (PSA):
  - Use baseline/restricted policies at Namespace level.
- securityContext:
  - Run as non-root, drop capabilities, read-only root FS, seccomp profiles.
rbac-network-security.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-config
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rb
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: demo   # required for ServiceAccount subjects
roleRef:
  kind: Role
  name: read-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
  name: restricted
  labels:
    pod-security.kubernetes.io/enforce: "restricted"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
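Default-deny alone will break the app, so you then allow the traffic you need. A sketch for the demo app (ports and selectors are assumptions; DNS details vary by cluster):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hello
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: hello-api
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - ports:
        - port: 80            # allow HTTP in from inside the cluster
  egress:
    - to:
        - namespaceSelector: {}   # any namespace...
      ports:
        - port: 53                # ...but only for DNS
          protocol: UDP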
Container securityContext example:
spec:
  containers:
    - name: app
      image: your/image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
        readOnlyRootFilesystem: true
Consider admission policy tools like Kyverno or Gatekeeper (OPA) to enforce org-wide security rules.
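As a taste, a hedged sketch of a Kyverno ClusterPolicy that rejects pods not declaring runAsNonRoot (assumes Kyverno is installed; the pattern is deliberately simplified):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-run-as-non-root
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Pods must set securityContext.runAsNonRoot: true."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true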
Networking Deep Dive
- Services:
  - ClusterIP: internal-only
  - NodePort: exposes a port on every node
  - LoadBalancer: external load balancer (cloud) or via addon locally
- Ingress:
  - L7 HTTP routing and TLS via controllers (NGINX, Traefik, HAProxy, Contour, Kong, Cilium)
- CNI:
  - Provides pod networking; advanced CNIs add NetworkPolicy enforcement, eBPF observability, and encryption.
Example TLS Ingress (cert-manager recommended for certs):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-tls
spec:
  tls:
    - hosts: ["hello.local"]
      secretName: hello-tls-secret
  rules:
    - host: hello.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svc
                port:
                  number: 80
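To have cert-manager populate hello-tls-secret automatically, pair the Ingress with an Issuer and an annotation. A hedged sketch (assumes cert-manager is installed; a self-signed issuer is fine for local testing):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: demo
spec:
  selfSigned: {}
Then annotate the Ingress with cert-manager.io/issuer: selfsigned, and cert-manager will create and renew the certificate in hello-tls-secret.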
Observability and Debugging
- Logs:
  - kubectl logs deploy/hello-api
  - Stern to tail multiple pods
- Events and describe:
  - kubectl describe pod <pod-name>
  - kubectl get events --sort-by=.lastTimestamp
- Exec and port-forward:
  - kubectl exec -it deploy/hello-api -- sh
  - kubectl port-forward svc/hello-svc 8080:80
- Ephemeral containers (debug):
  - kubectl debug -it pod/<pod-name> --image=busybox --target=<container-name>
- Metrics:
  - Install Prometheus and Grafana (kube-prometheus-stack via Helm).
- Tracing:
  - Use the OpenTelemetry SDK/collector for distributed tracing.
Helm install example (Prometheus + Grafana):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
Packaging and Release: Manifests, Kustomize, Helm, and GitOps
- Raw manifests:
  - Great for learning; can get repetitive.
- Kustomize:
  - Built into kubectl; supports overlays and patches.
  - kubectl apply -k ./overlays/dev
- Helm:
  - Package, template, and version your app.
  - helm create chart; helm upgrade --install
- GitOps:
  - Declarative desired state in Git, reconciled to the cluster.
  - Tools: Argo CD, Flux.
  - Benefits: auditability, rollbacks, drift detection.
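For a flavor of GitOps, a hedged sketch of an Argo CD Application (assumes Argo CD is installed in the argocd namespace; the repo URL is a placeholder, and the path matches the Kustomize layout below):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config   # placeholder repository
    targetRevision: main
    path: app/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift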
Simple Kustomize structure:
app/
  base/
    deployment.yaml
    service.yaml
    kustomization.yaml
  overlays/
    dev/
      kustomization.yaml   # references ../../base and patches
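A minimal sketch of the overlay's kustomization.yaml (field values are illustrative):
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo
resources:
  - ../../base
replicas:
  - name: hello-api   # scale down for dev
    count: 1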
Cost and Multi-Cluster Considerations
- Rightsizing:
  - Set accurate requests/limits; use metrics to adjust.
- Autoscaling:
  - HPA for pods; Cluster Autoscaler for nodes.
  - Bin packing and scheduling policies to improve utilization.
- Multi-tenancy:
  - Start with namespaces + RBAC + NetworkPolicy + PSA.
- Multi-cluster:
  - Reasons: isolation, geography, scale, compliance.
  - Tooling: Cluster API, KubeFed (federation; now archived), service mesh, or GitOps managing many clusters.
Cleanup and Next Steps
Cleanup local resources:
kubectl delete ns demo
kubectl delete ns monitoring restricted
# kind:
kind delete cluster --name dev
# minikube:
minikube delete
Next steps:
- Containerize your app and swap the example image.
- Add CI to build/push images.
- Package with Helm or Kustomize.
- Try GitOps with Argo CD or Flux.
- Add observability dashboards and alerts.
Resources
Official Documentation
- Kubernetes docs and tasks: https://kubernetes.io/docs/
- kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
- API Reference: https://kubernetes.io/docs/reference/kubernetes-api/
- Concepts overview: https://kubernetes.io/docs/concepts/
Local Clusters and Tooling
- kind: https://kind.sigs.k8s.io/
- minikube: https://minikube.sigs.k8s.io/
- k3d (K3s in Docker): https://k3d.io/
- Docker Desktop Kubernetes: https://docs.docker.com/desktop/kubernetes/
- Rancher Desktop: https://rancherdesktop.io/
- krew plugins: https://krew.sigs.k8s.io/
- kubectx/kubens: https://github.com/ahmetb/kubectx
- Lens IDE: https://k8slens.dev
Ingress, Networking, and Service Mesh
- NGINX Ingress Controller: https://kubernetes.github.io/ingress-nginx/
- Traefik: https://doc.traefik.io/traefik/
- Cilium CNI: https://cilium.io/
- Calico CNI: https://www.tigera.io/project-calico/
- Istio: https://istio.io/
- Linkerd: https://linkerd.io/
Security
- RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
- Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/
- Pod Security Admission: https://kubernetes.io/docs/concepts/security/pod-security-admission/
- Kyverno: https://kyverno.io/
- Gatekeeper (OPA): https://github.com/open-policy-agent/gatekeeper
Observability
- metrics-server: https://github.com/kubernetes-sigs/metrics-server
- Prometheus: https://prometheus.io/
- kube-prometheus-stack chart: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
- Grafana: https://grafana.com/
- OpenTelemetry: https://opentelemetry.io/
Packaging and GitOps
- Kustomize: https://kubectl.docs.kubernetes.io/installation/kustomize/
- Helm: https://helm.sh/
- Argo CD: https://argo-cd.readthedocs.io/
- Flux: https://fluxcd.io/
Learning and Certification
- Kubernetes Tutorials: https://kubernetes.io/docs/tutorials/
- CKA/CKAD Curriculum: https://training.linuxfoundation.org/certification/
- killer.sh exam simulators: https://killer.sh/
Cloud Provider Managed Kubernetes
- GKE: https://cloud.google.com/kubernetes-engine
- EKS: https://aws.amazon.com/eks/
- AKS: https://azure.microsoft.com/services/kubernetes-service/
Conclusion
Kubernetes is powerful because it turns infrastructure into a programmable platform with a consistent, declarative API. You learned the core architecture and mental model, assembled a local cluster, deployed and exposed an application, scaled it with HPA, managed config and secrets, attached storage, and applied essential security and networking patterns. You also saw how to observe and debug workloads, and how to package and ship changes with Helm, Kustomize, and GitOps.
From here, deepen your skills by containerizing your own services, adding CI/CD, integrating observability, and enforcing policies. The links above will guide you to production-grade patterns. With a solid grasp of the fundamentals and a good toolbox, you’re well on your way from zero to Kubernetes hero.