kubernetes-operator-scaffolder

Scaffold production-ready Kubernetes operators — generate CRDs, controllers, RBAC, webhooks, and Dockerfiles with best practices for Go or Python operators.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "kubernetes-operator-scaffolder" with this command: npx skills add charlie-morrison/kubernetes-operator-scaffolder

Kubernetes Operator Scaffolder

Generate a complete, production-ready Kubernetes operator project from a high-level resource description. Produces Custom Resource Definitions (CRDs), reconciliation controllers, RBAC manifests, admission webhooks, Dockerfiles, and CI scaffolding — following the Operator Framework and controller-runtime best practices so you skip weeks of boilerplate.

Use when: "scaffold a kubernetes operator", "create a CRD and controller", "generate operator boilerplate", "build a k8s operator for X", or when you need to extend the Kubernetes API with custom resources.

Prerequisites

Before scaffolding, the agent checks for:

# Go operator (kubebuilder path)
go version            # Go 1.22+
kubebuilder version   # kubebuilder 4.x
controller-gen --version
kustomize version

# Python operator (kopf path)
python3 --version     # 3.11+
pip show kopf         # kopf framework
pip show kubernetes    # k8s client

If tools are missing, the agent provides install commands before proceeding.
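
Install commands like the following may be suggested (URLs and versions current as of kubebuilder 4.x; verify against each project's own docs):

```shell
# Go toolchain path
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && sudo mv kubebuilder /usr/local/bin/
go install sigs.k8s.io/controller-tools/cmd/controller-gen@latest
go install sigs.k8s.io/kustomize/kustomize/v5@latest

# Python path
pip install kopf kubernetes
```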

Usage

Provide the following inputs:

  • Resource name — the noun your operator manages (e.g., Database, CacheCluster, MLPipeline)
  • API group — the Kubernetes API group (e.g., infra.example.com)
  • API version — typically v1alpha1 for new operators
  • Language — go (kubebuilder) or python (kopf)
  • Spec fields — the fields users will set in the custom resource (name, type, default, validation)
  • Reconciliation behavior — what the controller should do when the resource is created, updated, or deleted

Example invocation:

Scaffold a Go operator for a PostgresCluster resource in the db.example.com group. Spec fields: replicas (int, default 3), version (string, default "16"), storageSize (string, default "10Gi"). On create, it should provision a StatefulSet with PVCs. On delete, clean up PVCs.

How It Works

Step 1: Project Structure Generation

Create the full directory tree:

operator-name/
├── api/
│   └── v1alpha1/
│       ├── types.go              # CRD Go types with markers
│       ├── groupversion_info.go  # scheme registration
│       └── zz_generated.deepcopy.go
├── cmd/
│   └── main.go                   # entrypoint with manager setup
├── internal/
│   └── controller/
│       ├── reconciler.go         # main reconcile loop
│       ├── reconciler_test.go    # envtest-based tests
│       └── finalizer.go          # cleanup logic
├── config/
│   ├── crd/
│   │   ├── kustomization.yaml
│   │   └── bases/
│   │       └── resource_crd.yaml # generated CRD manifest
│   ├── rbac/
│   │   ├── role.yaml             # ClusterRole
│   │   ├── role_binding.yaml     # ClusterRoleBinding
│   │   ├── service_account.yaml
│   │   └── kustomization.yaml
│   ├── manager/
│   │   ├── manager.yaml          # Deployment
│   │   └── kustomization.yaml
│   ├── webhook/                  # if webhooks requested
│   │   ├── manifests.yaml
│   │   └── kustomization.yaml
│   └── default/
│       └── kustomization.yaml    # ties everything together
├── hack/
│   └── boilerplate.go.txt
├── Dockerfile
├── Makefile
├── go.mod
├── go.sum
├── PROJECT                       # kubebuilder project metadata
└── README.md

For Python (kopf) operators, the structure mirrors this with src/handlers.py (kopf decorators), src/resources.py (resource builders), deploy/ (CRD + RBAC + Deployment + kustomize), tests/, Dockerfile, Makefile, and pyproject.toml.

Step 2: CRD Definition

Generate the Custom Resource Definition with:

  • OpenAPI v3 schema validation — every spec field gets proper types, defaults, min/max constraints, enum values, and descriptions
  • Status subresource — with conditions following the metav1.Condition standard (Type, Status, Reason, Message, LastTransitionTime)
  • Printer columns — so kubectl get <resource> shows useful information at a glance
  • Short names — for convenience (e.g., pg for PostgresCluster)
  • Categories — group with kubectl get all

Example CRD type definition (Go):

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Replicas",type=integer,JSONPath=`.spec.replicas`
// +kubebuilder:printcolumn:name="Version",type=string,JSONPath=`.spec.version`
// +kubebuilder:printcolumn:name="Status",type=string,JSONPath=`.status.phase`
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// +kubebuilder:resource:shortName=pg;pgc
type PostgresCluster struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
    Spec              PostgresClusterSpec   `json:"spec,omitempty"`
    Status            PostgresClusterStatus `json:"status,omitempty"`
}

type PostgresClusterSpec struct {
    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=10
    // +kubebuilder:default=3
    Replicas int32 `json:"replicas,omitempty"`

    // +kubebuilder:validation:Pattern=`^\d+$`
    // +kubebuilder:default="16"
    Version string `json:"version,omitempty"`

    // +kubebuilder:default="10Gi"
    StorageSize string `json:"storageSize,omitempty"`
}

type PostgresClusterStatus struct {
    Phase         string             `json:"phase,omitempty"`
    ReadyReplicas int32              `json:"readyReplicas,omitempty"`
    Conditions    []metav1.Condition `json:"conditions,omitempty"`
}

Step 3: Controller / Reconciler

Generate the reconciliation loop with these patterns:

Idempotent reconciliation — every reconcile call converges toward the desired state without side effects on repeated runs:

func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    // 1. Fetch the custom resource
    var cluster dbv1alpha1.PostgresCluster
    if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // 2. Handle deletion with finalizers
    if !cluster.DeletionTimestamp.IsZero() {
        return r.handleDeletion(ctx, &cluster)
    }
    if err := r.ensureFinalizer(ctx, &cluster); err != nil {
        return ctrl.Result{}, err
    }

    // 3. Reconcile owned resources (create-or-update pattern)
    if err := r.reconcileStatefulSet(ctx, &cluster); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.reconcileService(ctx, &cluster); err != nil {
        return ctrl.Result{}, err
    }

    // 4. Update status
    if err := r.updateStatus(ctx, &cluster); err != nil {
        return ctrl.Result{}, err
    }

    return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
}

Key patterns included:

  • Owner references — child resources (StatefulSet, Service, ConfigMap) are owned by the CR so garbage collection works automatically
  • Finalizers — for cleanup of external resources (e.g., PVCs, cloud resources) that don't get garbage-collected
  • Status conditions — update conditions using meta.SetStatusCondition following KEP-1623
  • Event recording — emit Kubernetes events for important state transitions
  • Exponential backoff — on transient failures, requeue with increasing delay
  • Watches — watch owned resources so changes to child objects trigger reconciliation

Step 4: RBAC Generation

Generate least-privilege RBAC from the controller's actual API calls:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operator-manager-role
rules:
  # Custom resource
  - apiGroups: ["db.example.com"]
    resources: ["postgresclusters"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["db.example.com"]
    resources: ["postgresclusters/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: ["db.example.com"]
    resources: ["postgresclusters/finalizers"]
    verbs: ["update"]
  # Owned resources
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services", "configmaps", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Events
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]

The agent reviews each verb and resource group, removing anything the controller doesn't actually need.
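
In the kubebuilder path this ClusterRole is not written by hand: it is regenerated (via make manifests) from RBAC markers placed above the Reconcile method. Markers matching the rules above would look like:

```go
// +kubebuilder:rbac:groups=db.example.com,resources=postgresclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=db.example.com,resources=postgresclusters/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=db.example.com,resources=postgresclusters/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=services;configmaps;persistentvolumeclaims,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch
```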

Step 5: Dockerfile and Build

Generate a multi-stage Dockerfile:

FROM golang:1.22 AS builder
ARG TARGETOS TARGETARCH
WORKDIR /workspace
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH:-amd64} \
    go build -a -o manager cmd/main.go

FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]

Step 6: Testing Scaffold

Generate test files using envtest (Go) or pytest with a fake k8s client (Python). Tests cover: CR creation triggers child resource creation with correct spec, spec updates propagate to child resources, deletion triggers finalizer cleanup, status conditions are set correctly, and error cases requeue with backoff.

Step 7: Makefile

Generate a Makefile with standard targets: manifests (CRD generation), generate (deepcopy), test (envtest), build, docker-build, install (CRDs into cluster), and deploy (full operator deployment via kustomize).
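
A sketch of the two code-generation targets, assuming the standard kubebuilder layout (exact controller-gen options vary by version):

```make
manifests: ## Regenerate CRDs, RBAC, and webhook manifests from markers
	controller-gen rbac:roleName=operator-manager-role crd webhook \
		paths="./..." output:crd:artifacts:config=config/crd/bases

generate: ## Regenerate zz_generated.deepcopy.go
	controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
```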

Output

The agent produces:

  1. Complete project directory — ready to go build / pip install and docker build
  2. CRD YAML — with full OpenAPI schema, ready to kubectl apply
  3. RBAC manifests — least-privilege ClusterRole, ClusterRoleBinding, ServiceAccount
  4. Controller code — idempotent reconciler with finalizers, status updates, event recording
  5. Test scaffold — envtest or pytest setup with example test cases
  6. Dockerfile — multi-stage, distroless, non-root
  7. Makefile — standard build, test, deploy targets
  8. Sample CR — an example custom resource YAML for users to try
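
For the PostgresCluster example used throughout, the sample CR would look roughly like:

```yaml
apiVersion: db.example.com/v1alpha1
kind: PostgresCluster
metadata:
  name: postgrescluster-sample
spec:
  replicas: 3
  version: "16"
  storageSize: 10Gi
```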

Best Practices Enforced

  • No cluster-admin — RBAC is scoped to exactly the resources the controller touches
  • Finalizers before external resources — prevents orphaned cloud resources
  • Status conditions, not status strings — follows the standard Condition type for interoperability
  • Leader election — enabled by default for HA deployments
  • Health probes — readiness and liveness endpoints on the manager
  • Metrics — Prometheus metrics endpoint exposed via controller-runtime
  • Structured logging — uses logr / structlog, no fmt.Println
  • Owner references on all child resources — garbage collection works correctly
  • Distroless container image — minimal attack surface
  • Non-root user — container runs as UID 65532

Supported Operator Patterns

The agent recognizes and scaffolds these patterns: level-triggered reconciliation (desired state convergence), finalizer-based cleanup for external resources, status aggregation from child resources, config drift detection and correction, dependent resource ordering (e.g., Service after StatefulSet), and external resource management (cloud APIs, DNS).

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

GitHub Talent Hunter

Find the best-matched technical talent on GitHub and generate personalized outreach messages. Suited to recruiting engineers, finding technical co-founders, delivering headhunted candidates, and similar scenarios.

Coding

Dolphindb Docker

Automate DolphinDB Docker deployment with auto architecture detection (ARM64/x86_64), smart memory allocation (50% rule), and full data persistence.

Coding

Vultr

Manage Vultr cloud infrastructure including VPS instances, bare metal, Kubernetes clusters, databases, DNS, firewalls, VPCs, object storage, and more. Use wh...

Coding

xCloud Docker Deploy

Deploy any project to xCloud hosting — auto-detects stack (WordPress, Laravel, PHP, Node.js, Next.js, NestJS, Python, Go, Rust), routes to native or Docker d...
