# Namespace Governance
Audit and enforce organizational standards across Kubernetes namespaces. This skill checks for missing ResourceQuotas, missing LimitRanges (which leave pod resource requests unbounded), absent NetworkPolicies, overly permissive RBAC bindings, label compliance, and abandoned namespaces. It produces actionable remediation steps and recommended policy manifests.
Use when: "audit namespaces", "namespace governance", "check resource quotas", "review RBAC", "namespace cleanup", "enforce labels", "namespace policy compliance"
## Commands
### 1. audit --- Check all namespaces for policy compliance

Run a comprehensive governance audit across every namespace and flag violations.

**Step 1 -- Enumerate namespaces and collect metadata**

```bash
# List all namespaces with label count, age, and ownership labels
kubectl get namespaces -o json | python3 -c "
import json, sys
from datetime import datetime, timezone
ns_list = json.load(sys.stdin)
print(f\"{'NAMESPACE':<30} {'AGE_DAYS':>8} {'LABELS':>6} {'HAS_OWNER':>9}\")
print('-' * 60)
for ns in ns_list['items']:
    name = ns['metadata']['name']
    labels = ns['metadata'].get('labels', {})
    created = datetime.fromisoformat(ns['metadata']['creationTimestamp'].replace('Z', '+00:00'))
    age = (datetime.now(timezone.utc) - created).days
    has_owner = 'team' in labels or 'owner' in labels
    print(f\"{name:<30} {age:>8} {len(labels):>6} {str(has_owner):>9}\")
"
```
**Step 2 -- Check ResourceQuotas**

```bash
# Find namespaces without a ResourceQuota
echo "=== Namespaces WITHOUT ResourceQuota ==="
ALL_NS=$(kubectl get ns -o jsonpath='{.items[*].metadata.name}')
for ns in $ALL_NS; do
  # Skip system namespaces before querying
  case "$ns" in kube-system|kube-public|kube-node-lease) continue;; esac
  count=$(kubectl get resourcequota -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "  MISSING: $ns"
  fi
done

# Show existing quotas and utilization
echo ""
echo "=== Existing Quota Utilization ==="
kubectl get resourcequota --all-namespaces -o custom-columns=\
NAMESPACE:.metadata.namespace,\
NAME:.metadata.name,\
CPU_USED:.status.used.requests\\.cpu,\
CPU_HARD:.status.hard.requests\\.cpu,\
MEM_USED:.status.used.requests\\.memory,\
MEM_HARD:.status.hard.requests\\.memory
```
**Step 3 -- Check LimitRanges**

```bash
# Find namespaces without a LimitRange
echo "=== Namespaces WITHOUT LimitRange ==="
for ns in $ALL_NS; do
  case "$ns" in kube-system|kube-public|kube-node-lease) continue;; esac
  count=$(kubectl get limitrange -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "  MISSING: $ns (pods can request unbounded resources)"
  fi
done
```
**Step 4 -- Check NetworkPolicies**

```bash
# Namespaces with no NetworkPolicy (default-allow-all)
echo "=== Namespaces WITHOUT any NetworkPolicy ==="
for ns in $ALL_NS; do
  case "$ns" in kube-system|kube-public|kube-node-lease) continue;; esac
  count=$(kubectl get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    pod_count=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null | wc -l)
    echo "  OPEN: $ns ($pod_count pods, all traffic allowed)"
  fi
done
```
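Namespaces flagged OPEN can be locked down with a baseline default-deny policy. A minimal sketch (the policy name is illustrative; allow-rules for DNS and required traffic must be layered on top per namespace):

```yaml
# Deny all ingress and egress for every pod in the namespace:
# an empty podSelector ({}) matches all pods, and listing both
# policyTypes with no allow rules permits no traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: REPLACE_WITH_NAMESPACE
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```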
**Step 5 -- RBAC review**

```bash
# Flag cluster-admin bindings and roles bound to default service accounts
echo "=== Risky RBAC Bindings ==="
kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | python3 -c "
import json, sys
data = json.load(sys.stdin)
risky = []
for item in data['items']:
    ns = item['metadata'].get('namespace', 'cluster-wide')
    name = item['metadata']['name']
    role_ref = item.get('roleRef', {})
    subjects = item.get('subjects') or []  # may be null in the API response
    role_name = role_ref.get('name', '')
    # Flag cluster-admin bindings
    if role_name == 'cluster-admin':
        for s in subjects:
            risky.append(f\"  CRITICAL: {ns}/{name} grants cluster-admin to {s.get('kind','?')}/{s.get('name','?')}\")
    # Flag bindings to default service accounts
    for s in subjects:
        if s.get('name') == 'default' and s.get('kind') == 'ServiceAccount':
            risky.append(f\"  WARNING: {ns}/{name} binds a role to the default SA (use a dedicated SA)\")
for r in risky[:50]:
    print(r)
if not risky:
    print('  No obviously risky bindings found.')
"
```
**Step 6 -- Label compliance**

```bash
# Check for required labels (team, environment, cost-center) on namespaces
echo "=== Label Compliance ==="
kubectl get ns -o json | python3 -c "
import json, sys
required = ['team', 'environment', 'cost-center']
ns_list = json.load(sys.stdin)
skip = {'kube-system', 'kube-public', 'kube-node-lease', 'default'}
for ns in ns_list['items']:
    name = ns['metadata']['name']
    if name in skip:
        continue
    labels = set(ns['metadata'].get('labels', {}).keys())
    missing = [r for r in required if r not in labels]
    if missing:
        print(f\"  {name}: missing labels: {', '.join(missing)}\")
"
```
Report template:
## Namespace Governance Audit
### Overall Score: {X}/100
### Policy Violations
| Namespace | Issue | Severity | Remediation |
|-----------|-------|----------|-------------|
### Summary
- Total namespaces: {N} (excluding system)
- Without ResourceQuota: {N}
- Without LimitRange: {N}
- Without NetworkPolicy: {N}
- Missing required labels: {N}
- Risky RBAC bindings: {N}
### Recommended Actions (priority order)
1. {action}
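The {X}/100 score is not defined above; one hedged way to compute it is to deduct weighted points per violation category, scaled by the fraction of namespaces affected (the weights here are illustrative, not an organizational standard):

```python
def governance_score(counts: dict, total_ns: int) -> int:
    """Score 100 down to 0. counts maps category -> number of violating namespaces."""
    if total_ns == 0:
        return 100
    weights = {  # illustrative weights; tune per organization
        'no_quota': 15, 'no_limitrange': 10, 'no_netpol': 20,
        'missing_labels': 5, 'risky_rbac': 25,
    }
    score = 100.0
    for cat, w in weights.items():
        frac = min(counts.get(cat, 0) / total_ns, 1.0)  # cap at 100% affected
        score -= w * frac
    return max(0, round(score))

# 8 namespaces, 2 without quotas, 4 without network policies:
print(governance_score({'no_quota': 2, 'no_netpol': 4}, 8))
# -> 86
```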
### 2. quota --- Recommend resource quotas for namespaces

Analyze actual resource usage and recommend appropriate quotas.
**Step 1 -- Collect current usage per namespace**

```bash
# Get actual resource consumption per namespace (CPU in millicores, memory in Mi)
printf "%-30s %10s %10s %6s\n" "NAMESPACE" "CPU(m)" "MEM(Mi)" "PODS"
kubectl top pods --all-namespaces --no-headers 2>/dev/null | awk '{
  ns=$1; cpu=$3; mem=$4
  # Accumulate per namespace: "500m" -> 500, "2" cores -> 2000
  cpu_total[ns] += (cpu ~ /m$/) ? substr(cpu, 1, length(cpu)-1) : cpu * 1000
  mem_total[ns] += (mem ~ /Mi$/) ? substr(mem, 1, length(mem)-2) : \
                   (mem ~ /Gi$/) ? substr(mem, 1, length(mem)-2) * 1024 : mem
  count[ns]++
}
END {
  for (ns in cpu_total)
    printf "%-30s %10d %10d %6d\n", ns, cpu_total[ns], mem_total[ns], count[ns]
}' | sort -k2 -rn
```

The header is printed before the pipeline so `sort` does not shuffle it into the data rows.
**Step 2 -- Collect resource requests vs actual**

```bash
# Compare requested resources to actual usage
kubectl get pods --all-namespaces -o json | python3 -c "
import json, sys
data = json.load(sys.stdin)
ns_requests = {}
for pod in data['items']:
    ns = pod['metadata']['namespace']
    if ns not in ns_requests:
        ns_requests[ns] = {'cpu_req': 0, 'mem_req': 0, 'pods': 0}
    ns_requests[ns]['pods'] += 1
    for c in pod['spec'].get('containers', []):
        req = c.get('resources', {}).get('requests', {})
        # Parse CPU (simplified: handles 'm' suffix and plain cores)
        cpu_r = req.get('cpu', '0')
        if cpu_r.endswith('m'):
            ns_requests[ns]['cpu_req'] += int(cpu_r[:-1])
        else:
            ns_requests[ns]['cpu_req'] += int(float(cpu_r) * 1000)
        # Parse memory (simplified: handles Mi and Gi only)
        mem_r = req.get('memory', '0')
        if mem_r.endswith('Mi'):
            ns_requests[ns]['mem_req'] += int(mem_r[:-2])
        elif mem_r.endswith('Gi'):
            ns_requests[ns]['mem_req'] += int(float(mem_r[:-2]) * 1024)
print(f\"{'NAMESPACE':<30} {'CPU_REQ(m)':>10} {'MEM_REQ(Mi)':>12} {'PODS':>6}\")
for ns, v in sorted(ns_requests.items(), key=lambda x: x[1]['cpu_req'], reverse=True):
    print(f\"{ns:<30} {v['cpu_req']:>10} {v['mem_req']:>12} {v['pods']:>6}\")
"
```
**Step 3 -- Generate quota recommendations**

Apply the policy: quota = max(current_usage * 1.5, total_requests * 1.2), subject to a hard cap.
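The policy can be sketched directly; the optional `cap` parameter is an assumed interpretation of the unspecified hard cap:

```python
def recommend_quota(usage, requests, cap=None):
    """quota = max(usage * 1.5, requests * 1.2), optionally clamped to a hard cap.
    Units are whatever you pass in (e.g. millicores or Mi)."""
    rec = max(usage * 1.5, requests * 1.2)
    return min(rec, cap) if cap is not None else rec

# 800m actually used, 600m requested -> max(1200, 720) = 1200m
print(recommend_quota(800, 600))
# -> 1200.0
```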
```bash
# Generate ResourceQuota + LimitRange YAML for a namespace
NS="${1:-default}"
cat << QUOTAEOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: standard-quota
  namespace: ${NS}
spec:
  hard:
    requests.cpu: "{recommended_cpu}"
    requests.memory: "{recommended_mem}"
    limits.cpu: "{recommended_cpu_limit}"
    limits.memory: "{recommended_mem_limit}"
    pods: "{recommended_pod_count}"
    persistentvolumeclaims: "10"
    services.loadbalancers: "2"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: standard-limits
  namespace: ${NS}
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    type: Container
QUOTAEOF
```
Report template:
## Quota Recommendations
| Namespace | Current CPU | Recommended CPU Quota | Current Mem | Recommended Mem Quota | Pod Limit |
|-----------|-------------|----------------------|-------------|----------------------|-----------|
### Methodology
- CPU quota = max(actual_usage * 1.5, total_requests * 1.2)
- Memory quota = max(actual_usage * 1.5, total_requests * 1.2)
- Pod limit = current_count * 2 (headroom for scaling)
- Adjust based on team growth plans and SLAs
### 3. cleanup --- Find abandoned namespaces
Identify namespaces that appear unused and are candidates for removal.
**Step 1 -- Detect inactive namespaces**

```bash
# Table of non-system namespaces by age
kubectl get ns -o json | python3 -c "
import json, sys
from datetime import datetime, timezone
data = json.load(sys.stdin)
skip = {'kube-system', 'kube-public', 'kube-node-lease', 'default'}
print(f\"{'NAMESPACE':<30} {'AGE_DAYS':>8}\")
print('-' * 40)
for ns in data['items']:
    name = ns['metadata']['name']
    if name in skip:
        continue
    created = datetime.fromisoformat(ns['metadata']['creationTimestamp'].replace('Z', '+00:00'))
    age = (datetime.now(timezone.utc) - created).days
    print(f\"{name:<30} {age:>8}\")
"

# Enrich with workload counts
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  case "$ns" in kube-system|kube-public|kube-node-lease|default) continue;; esac
  pods=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null | grep -c Running)
  deploys=$(kubectl get deployments -n "$ns" --no-headers 2>/dev/null | wc -l)
  svcs=$(kubectl get services -n "$ns" --no-headers 2>/dev/null | wc -l)
  total=$((pods + deploys + svcs))
  if [ "$total" -eq 0 ]; then
    echo "  EMPTY: $ns (created: $(kubectl get ns "$ns" -o jsonpath='{.metadata.creationTimestamp}') -- candidate for deletion)"
  elif [ "$pods" -eq 0 ] && [ "$deploys" -gt 0 ]; then
    echo "  IDLE: $ns (has $deploys deploy(s) but 0 running pods)"
  fi
done
```
**Step 2 -- Check for recent activity**

```bash
# Look at events to see whether anything happened recently
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  case "$ns" in kube-system|kube-public|kube-node-lease|default) continue;; esac
  event_count=$(kubectl get events -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$event_count" -eq 0 ]; then
    echo "  NO EVENTS: $ns (no activity in event retention window)"
  fi
done
```
**Step 3 -- Generate cleanup plan**

```bash
# For each empty/idle namespace, produce a safe deletion sequence
echo "### Cleanup Commands (review before executing)"
echo ""
echo "# Step 1: Verify nothing critical exists"
echo "kubectl get all,pvc,configmap,secret -n {NAMESPACE}"
echo ""
echo "# Step 2: Delete (non-blocking)"
echo "kubectl delete ns {NAMESPACE} --wait=false"
echo ""
echo "# Step 3: If the namespace sticks in Terminating, clear its finalizers"
echo "kubectl get ns {NAMESPACE} -o json | jq '.spec.finalizers=[]' | kubectl replace --raw /api/v1/namespaces/{NAMESPACE}/finalize -f -"
```
Report template:
## Namespace Cleanup Candidates
### Empty Namespaces (0 workloads)
| Namespace | Age (days) | Owner Label | Action |
|-----------|-----------|-------------|--------|
### Idle Namespaces (deployments but no running pods)
| Namespace | Deployments | Running Pods | Last Event |
|-----------|-------------|-------------|------------|
### Recommended Actions
1. Contact owners of idle namespaces (list above)
2. Set deletion date for ownerless empty namespaces older than 30 days
3. Apply namespace lifecycle labels: `lifecycle: active|deprecated|pending-deletion`