# Kubernetes Patterns

Best practices for Kubernetes deployments and cluster management.

## Deployment Patterns

### Production-Ready Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      serviceAccountName: myapp-sa
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: app
          image: myapp:1.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: config
              mountPath: /etc/config
              readOnly: true
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config
          configMap:
            name: myapp-config
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: myapp
                topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: myapp
```
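The Deployment mounts a `myapp-config` ConfigMap at `/etc/config`. A minimal sketch of that ConfigMap — the keys and values here are illustrative, not part of the pattern:

```yaml
# Hypothetical contents for the myapp-config ConfigMap mounted above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  labels:
    app: myapp
data:
  # Each key becomes a file under /etc/config in the container.
  app.properties: |
    log.level=info
    server.port=8080
```

Because the Deployment mounts the volume `readOnly: true`, the app can read but never mutate its configuration at runtime.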
### Service Configuration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: myapp
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx  # preferred over the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```
## Helm Chart Structure
```
mychart/
├── Chart.yaml
├── values.yaml
├── values-dev.yaml
├── values-prod.yaml
├── templates/
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── serviceaccount.yaml
│   ├── hpa.yaml
│   └── pdb.yaml
└── charts/
```
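`_helpers.tpl` holds named templates that the other manifests share. A minimal sketch — the template names and label keys below follow a common convention, not anything Helm requires:

```yaml
{{/* templates/_helpers.tpl — shared named templates (illustrative) */}}
{{- define "mychart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version }}
{{- end -}}
```

Templates then pull these in with `{{ include "mychart.labels" . | nindent 4 }}`, so label changes happen in one place.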
### values.yaml Best Practices

```yaml
# values.yaml
replicaCount: 3

image:
  repository: myapp
  tag: ""  # Set by CI/CD
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

podDisruptionBudget:
  enabled: true
  minAvailable: 2

serviceAccount:
  create: true
  annotations: {}

ingress:
  enabled: true
  className: nginx
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: Prefix
```
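`templates/deployment.yaml` then wires these values into the manifest. A trimmed sketch, showing only the value plumbing (selector, probes, and the rest omitted):

```yaml
# Trimmed sketch of templates/deployment.yaml consuming the values above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}  # the HPA owns replicas when autoscaling is on
  {{- end }}
  template:
    spec:
      containers:
        - name: app
          # Fall back to the chart's appVersion when CI/CD leaves tag empty.
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```

Guarding `replicas` behind `autoscaling.enabled` matters: if both the template and the HPA set replica counts, every `helm upgrade` briefly fights the autoscaler.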
## Resource Management

### Horizontal Pod Autoscaler

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
```
### Pod Disruption Budget

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
spec:
  minAvailable: 2  # or use maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp
```
## Security Patterns

### Network Policy

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:  # Allow DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```
### RBAC Configuration
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["myapp-secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
subjects:
  - kind: ServiceAccount
    name: myapp-sa
    namespace: default  # ServiceAccount subjects require a namespace; set to the app's
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io
```
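To confirm the binding grants exactly what you expect, `kubectl auth can-i` can evaluate permissions as the service account (namespace `default` assumed here, matching the binding above):

```bash
# Should answer "yes": the Role allows get on configmaps
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:default:myapp-sa -n default

# Should answer "no": the Role only allows get on myapp-secrets, never delete
kubectl auth can-i delete secrets \
  --as=system:serviceaccount:default:myapp-sa -n default
```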
## Troubleshooting Commands

```bash
# Check pod status and events
kubectl get pods -l app=myapp -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous

# Debug networking
kubectl run debug --rm -it --image=nicolaka/netshoot -- /bin/bash
kubectl exec -it <pod> -- curl localhost:8080/health

# Check resource usage
kubectl top pods -l app=myapp
kubectl top nodes

# View HPA status
kubectl get hpa myapp -o yaml

# Check events
kubectl get events --sort-by='.lastTimestamp' | tail -20
```
## References

- Kubernetes Documentation
- Helm Best Practices
- Kubernetes Security Best Practices