Total Skills: 41
Skills published by s3nex-com with real stars/downloads and source-aware metadata.
Total Stars: 0
Total Downloads: 0
Comparison chart based on real stars and downloads signals from source data.
architecture-decision-records: 0
architecture-review-governance: 0
data-governance-privacy: 0
design-doc-generator: 0
prd-creator: 0
requirements-tracer: 0
security-audit-secure-sdlc: 0
specification-driven-development: 0
Creates, reviews, updates, and manages Architecture Decision Records (ADRs) — the institutional memory of technical decision-making. Use this skill whenever the user wants to: create an ADR for a technology or architecture decision, document why a specific technology was chosen, record a technical decision before implementing it, update or supersede an existing ADR, review a proposed decision for completeness, check whether a decision warrants an ADR, maintain the ADR index, or understand why a past decision was made. Also trigger when the user asks "why are we using X", "who decided this", "was this decision documented", "log a design decision", "decision history", "record a technical decision", or "document this choice".
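The Nygard-style record this skill manages can be sketched as a tiny renderer. A minimal sketch: the field set and the example values below are illustrative assumptions, not the skill's actual template.

```python
from datetime import date

# Hypothetical minimal ADR renderer -- the fields follow the common
# Nygard layout (Context / Decision / Consequences); values are invented.
def render_adr(number: int, title: str, status: str,
               context: str, decision: str, consequences: str) -> str:
    """Render an Architecture Decision Record as plain markdown text."""
    return "\n".join([
        f"# ADR-{number:04d}: {title}",
        f"Date: {date.today().isoformat()}",
        f"Status: {status}",          # Proposed | Accepted | Superseded
        "",
        "## Context",
        context,
        "",
        "## Decision",
        decision,
        "",
        "## Consequences",
        consequences,
    ])

adr = render_adr(12, "Use PostgreSQL for the order service", "Accepted",
                 "We need ACID transactions and mature operational tooling.",
                 "Adopt PostgreSQL 16 as the primary datastore.",
                 "The team must maintain migration discipline.")
```

Superseding an ADR is then a status change on the old record plus a new record that references it, which is why the index the skill maintains matters.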
Defines architecture principles, catches design problems before code is written, and detects drift as delivery proceeds. Use this skill whenever the user wants to: review a system design or architecture proposal, evaluate trade-offs between technical approaches (microservices vs monolith, sync vs async, edge vs cloud), identify architectural anti-patterns or hidden coupling, enforce service and component boundaries, review integration design or data flow, validate non-functional requirements, detect architecture drift, or self-review a design before presenting it to the team. Also trigger when the user asks "is this the right approach", "what are the trade-offs", or "does this architecture scale".
Activate when classifying data (PII, sensitive, internal, public), running a Privacy Impact Assessment (PIA/DPIA), defining retention or deletion policies, designing GDPR/CCPA compliance workflows, handling subject access requests (SAR) or right-to-erasure, evaluating cross-border data transfers (EU SCCs, adequacy decisions), scoping data minimisation, reviewing new third-party data sharing, or assessing EU AI Act Article 10/13 data transparency obligations for an ML/LLM feature that trains on user data. Use before a feature that collects, stores, shares, or trains on user data is shipped.
Activate when the user wants to produce a technical design document from existing requirements, specifications, or a PRD. Use this skill to convert the outputs of prd-creator, requirements-tracer, specification-driven-development, and architecture-decision-records into a single, implementation-ready DESIGN.md that engineers can build from. Also trigger for: "write the design doc", "technical design", "system design", "how should we build this", "design document", "architecture doc", "translate specs to design", "what does the implementation look like", "design from requirements", "component design", "data flow design", "turn the PRD into a design".
Activate when the user wants to create a Product Requirements Document (PRD) from scratch, convert rough ideas or bullet points into a structured PRD, validate or improve an existing PRD, facilitate discovery sessions to extract requirements, review a PRD for completeness before it enters the development workflow, or prepare a PRD that will feed into the SDLC pipeline (requirements-tracer, specification-driven-development, design-doc-generator). Also trigger for: "write a PRD", "define the product", "what are we building", "capture requirements", "product spec", "feature definition", "we have an idea", "turn this into requirements", "requirements document", "product brief".
Converts business goals into testable, traceable requirements and keeps them linked to everything built from them. Use this skill whenever the user wants to: decompose a high-level business ask into user stories with BDD acceptance criteria, write Given/When/Then test scenarios, maintain or query the traceability matrix linking requirements to code and tests, detect orphaned code with no requirement, detect requirements with no implementation, or analyse the impact of a scope change. Also trigger for: "what does done mean", "what are we actually building", "scope creep", "is this in scope", "traceability", "BDD", "given when then", "user story", "feature breakdown", "requirements quality".
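The traceability checks named above (orphaned code, requirements with no implementation) reduce to set operations over a requirements-to-artefact map. A minimal sketch; the ids and file names are invented for illustration.

```python
# Toy traceability matrix: requirement id -> description, and
# artefact -> the requirement ids it claims to satisfy.
requirements = {"REQ-1": "Checkout total includes tax",
                "REQ-2": "Guest checkout allowed"}
links = {
    "src/tax.py":         {"REQ-1"},
    "tests/test_tax.py":  {"REQ-1"},
    "src/legacy_fees.py": set(),   # orphaned code: traces to nothing
}

def orphaned_code(links: dict) -> list:
    """Artefacts that trace to no requirement."""
    return sorted(a for a, reqs in links.items() if not reqs)

def unimplemented(requirements: dict, links: dict) -> list:
    """Requirements that no artefact traces back to."""
    covered = set().union(*links.values()) if links else set()
    return sorted(r for r in requirements if r not in covered)
```

Impact analysis for a scope change is the same query run in reverse: start from the requirement id and walk the matrix to every linked artefact.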
Activate when assessing security posture, performing threat modelling, reviewing secure coding practices, evaluating dependency hygiene, auditing secrets management, designing security gates for CI/CD pipelines, or mapping practices to compliance frameworks (NIST SSDF, OWASP, SOC 2). Use for security architecture reviews, STRIDE analysis, SAST/DAST/SCA tool selection, secure design principle enforcement, vulnerability triage, penetration test scoping, supply chain security, build integrity, and producing security findings reports. Covers both proactive design-time security and reactive incident-response readiness.
Governs contract-first and specification-driven development — defining interfaces, schemas, and workflows before implementation begins. Use this skill whenever the user wants to: write an OpenAPI 3.x specification, author a Protobuf or gRPC schema, define an AsyncAPI spec for event-driven interfaces, write a GraphQL schema, write a JSON Schema, review an API contract for completeness or correctness, detect breaking vs non-breaking changes, design a workflow or sequence before coding it, or validate that an implementation matches its contract. Also trigger for: "define the interface before coding", "API spec", "contract-first", "freeze the contract", "service contract", "define the schema", "sequence diagram", "API design", "contract review", "Protobuf", "AsyncAPI", "OpenAPI", "gRPC schema", "GraphQL schema", "schema-first GraphQL", "write the schema before resolvers", "GraphQL API design".
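Breaking-vs-non-breaking detection can be sketched over two versions of a response schema, with plain dicts standing in for a parsed spec document; real contract-diff tooling inspects far more than field names and types.

```python
# Minimal sketch of breaking-change detection between two schema
# versions. A dict of field -> type stands in for a parsed spec.
def diff_schema(old: dict, new: dict) -> dict:
    old_fields, new_fields = set(old), set(new)
    removed = old_fields - new_fields          # breaking: clients may read these
    changed = {f for f in old_fields & new_fields if old[f] != new[f]}
    added = new_fields - old_fields            # usually non-breaking in responses
    return {"breaking": sorted(removed | changed), "non_breaking": sorted(added)}

v1 = {"id": "integer", "email": "string"}
v2 = {"id": "string", "email": "string", "name": "string"}
report = diff_schema(v1, v2)   # the id type change is the breaking one
```

The same asymmetry runs the other way for request schemas, where adding a required field is the breaking move, which is why direction matters in any contract review.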
Activate when keeping external stakeholders (clients, users, partners, leadership) aligned. Use when: setting communication cadence, drafting a status update, logging a decision that affects someone outside the team, handling a scope change request, writing a difficult message, or calibrating tone for a non-technical audience. Also trigger on: "how do I tell the client", "weekly update", "they're asking for scope changes", "we need to escalate", "I need to document what was agreed".
Identifies, rates, owns, and tracks technical and project risks. Use this skill whenever the user wants to: create or update a risk register, identify risks in a new design or delivery plan, rate a risk using probability and impact, design a mitigation strategy, track risk status, or define early warning indicators for specific risks. Also trigger when the user describes risk situations without naming them: "what could go wrong", "I'm worried about the timeline", "this dependency is outside our control", "technical risks", "delivery risk", "risk assessment", "risk mitigation", "risk tracking", "early warning".
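Probability-and-impact rating reduces to a small scoring function. A sketch under assumptions: the 1-5 scales and the band thresholds below are illustrative, not the skill's prescribed values.

```python
# Illustrative probability x impact rating; scales and bands are assumptions.
def rate_risk(probability: int, impact: int) -> tuple:
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    score = probability * impact
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    return score, band

register = [
    {"risk": "Vendor API deprecation", "p": 4, "i": 4},
    {"risk": "Key engineer leaves",    "p": 2, "i": 5},
    {"risk": "Minor UI polish slips",  "p": 3, "i": 1},
]
for item in register:
    item["score"], item["band"] = rate_risk(item["p"], item["i"])
register.sort(key=lambda r: r["score"], reverse=True)  # worst first
```

Sorting by score gives the review order; the early-warning indicators mentioned above attach to the high-band entries first.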
accessibility, a11y, WCAG, screen reader, keyboard navigation, color contrast, inclusive design, EU Accessibility Act, ADA compliance, focus management, aria labels, accessible components, ARIA, tab order, focus trap, skip link, axe-core, pa11y, contrast ratio, accessible forms, landmark regions
Activate when engineers want to use AI tools more effectively in daily work — coding, review, debugging, refactoring, test generation. Use when establishing team norms for AI tool use, reviewing AI-generated code for correctness and security, diagnosing why AI tool results are poor quality, or deciding which tasks belong to Claude vs Cursor/Copilot vs human. Applies to Claude Code, Cursor, GitHub Copilot, MCP integrations, and agentic coding workflows.
Activate when verifying that a service implementation actually matches its API contract, running contract tests between consumer and provider services, detecting contract drift between what the spec says and what is deployed, setting up Pact or schema-registry based contract verification in CI, investigating a production incident caused by a contract violation, comparing two spec versions to identify breaking changes, validating partner company deliverables against the agreed OpenAPI spec, or enforcing that no spec changes are deployed without going through the change control process. Use this when something is broken at an integration boundary and you need to determine whether it is a contract violation or an implementation bug.
Activate when setting up or running architecture fitness functions, enforcing import boundaries in CI, checking module layer boundaries automatically, tracking dependency budget against an approved list, detecting circular imports, flagging dead or abandoned modules, preventing architecture drift between PRs, enforcing architecture compliance in the build pipeline, or adding architecture CI checks to a project. Distinct from periodic human architecture reviews — fitness functions run automatically on every PR with no human required.
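An import-boundary fitness function of the kind described can be sketched with the standard-library ast module; the layer rules here are invented for illustration.

```python
import ast

# Toy fitness function: fail the build if a module imports from a
# forbidden layer. The layer names and rules are illustrative only.
FORBIDDEN = {"domain": {"infrastructure", "api"}}  # domain stays pure

def boundary_violations(layer: str, source: str) -> list:
    banned = FORBIDDEN.get(layer, set())
    violations = []
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [a.name for a in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            if name.split(".")[0] in banned:
                violations.append(name)
    return violations

bad = "from infrastructure.db import session\nimport json"
```

Wired into CI, a non-empty result fails the PR, which is exactly the "no human required" property that distinguishes fitness functions from periodic reviews.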
Activate when adding caching to a service, debugging cache-related bugs, configuring a CDN (Cloudflare, Fastly, CloudFront), designing cache invalidation, investigating low cache hit rate, diagnosing a cache stampede, picking TTLs, introducing Redis or Memcached, designing edge caching for static assets or API responses, choosing between cache-aside, read-through, write-through, write-behind, or refresh-ahead patterns, reviewing cache coherency in a distributed system, or deciding whether caching is the right answer at all. Trigger phrases: "add caching", "cache invalidation", "CDN configuration", "cache hit rate", "cache stampede", "cache strategy", "Redis caching", "Cloudflare config".
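The cache-aside pattern named above can be sketched in a few lines, with a dict standing in for a real store such as Redis and a miss counter making the hit rate observable.

```python
import time

# Minimal cache-aside sketch with a TTL. On a miss or an expired
# entry, the loader (the "database") is consulted and the result cached.
class CacheAside:
    def __init__(self, loader, ttl_seconds: float):
        self.loader, self.ttl = loader, ttl_seconds
        self._store = {}          # key -> (value, expires_at)
        self.hits = self.misses = 0

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]
        self.misses += 1                       # miss or expired: reload
        value = self.loader(key)
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

db_reads = []
cache = CacheAside(loader=lambda k: db_reads.append(k) or f"row:{k}",
                   ttl_seconds=60)
first = cache.get("user:1")
second = cache.get("user:1")   # served from cache, no second DB read
```

A stampede is what happens when many callers hit the expired-entry branch at once; the usual fixes (request coalescing, refresh-ahead, jittered TTLs) all target that one branch.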
Activate when the user wants to implement code from a design document, break a technical design into ordered implementation tasks, generate code phase by phase following the DESIGN.md, write code that satisfies BDD acceptance criteria, implement APIs to their OpenAPI/Protobuf specs, or drive implementation with inline security and quality checkpoints. Also trigger for: "implement this", "write the code", "build it", "start coding", "implement the design", "code the feature", "implement phase 1", "write the service", "generate the implementation", "build from the design doc", "implement the spec".
Activate when establishing code review processes, writing or improving PR review checklists, setting quality gate policies for CI/CD pipelines, reviewing code quality, setting merge criteria, defining linting and static analysis tool configurations, or coaching engineers on effective review practices. Use for everything from individual PR reviews to systemic quality improvement programmes, including identifying recurring defect patterns and measuring review effectiveness.
Activate when designing or evaluating a testing strategy, defining test pyramid ratios, writing test plans for a release, establishing acceptance test frameworks using BDD/Given-When-Then, setting up contract testing between services, designing performance and load test scenarios, defining test environment strategy, or determining what constitutes "done" for a feature. Also trigger when integration tests are failing and the cause is unclear, or when test execution time is blocking delivery velocity.
database migration, schema migration, schema change, alter table, add column, drop column, rename column, migrate data, backfill, expand contract, forward migration, backward compatible schema, migration rollback, zero downtime migration, table lock, index concurrently, data migration, migration checklist, migration plan, alembic, golang-migrate, flyway, prisma migrate, liquibase, production schema change
Activate when designing or reviewing CI/CD pipelines, evaluating pipeline security and integrity, defining deployment strategies, establishing environment promotion policies, setting up release automation, governing infrastructure-as-code practices, defining rollback procedures, or troubleshooting pipeline failures blocking a release. Use for pipeline architecture, build reproducibility, deployment safety, environment parity, and the controls that ensure only reviewed and tested code reaches production.
Activate when planning disaster recovery, designing backup strategy, setting RTO or RPO targets, evaluating multi-region failover patterns (active-active, active-passive, warm-standby, pilot-light), scheduling or running DR drills, planning restore-from-backup procedures, hardening against ransomware (immutable / air-gapped backups), classifying systems by recovery tier, preparing a post-incident recovery verification, or satisfying SOC 2 availability or GDPR Article 32 technical-measures expectations. Use for everything from pre-launch DR plans for a new customer-facing system to quarterly tabletop exercises and annual full-region failover drills.
Activate when designing or implementing multi-service, event-driven, or message-based systems and the engineer mentions saga pattern, orchestration vs choreography, compensating transaction, event sourcing, CQRS, command query separation, transactional outbox, dual-write problem, idempotency key, deduplication, exactly-once, at-least-once, distributed transaction, two-phase commit, eventual consistency, causal consistency, strong consistency, read-your-writes, retry with backoff, exponential backoff, jitter, or circuit breaker in a distributed-coordination (not performance) context. Use when choosing between synchronous RPC and async events, when a write must update state in two systems (DB + queue), when consumers must tolerate duplicate messages, or when service boundaries force you to give up ACID transactions.
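Retry with exponential backoff and full jitter, one of the techniques listed above, can be sketched like this; the sleep function is injectable so the schedule can be inspected rather than slept through.

```python
import random

# Retry with exponential backoff and full jitter: each delay is drawn
# uniformly from [0, min(cap, base * 2**attempt)) to decorrelate retries.
def retry(op, attempts=5, base=0.1, cap=5.0, sleep=None, rng=random.random):
    sleep = sleep or (lambda s: None)
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise                      # budget exhausted: surface the error
            backoff = min(cap, base * 2 ** attempt)
            sleep(rng() * backoff)         # full jitter

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

delays = []
result = retry(flaky, sleep=delays.append)
```

Note that retrying is only safe when the operation is idempotent, which is why the idempotency-key and deduplication topics in this skill travel together with backoff.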
Activate when creating system design documents, writing runbooks, producing architecture diagrams, documenting API usage guides, writing onboarding documentation for a new engineer, reviewing documentation quality for a milestone or production release, creating incident response runbooks, producing system context diagrams, or evaluating whether documentation meets the standard required for a production deployment. Use when a system lacks sufficient documentation for the team to operate it safely, or when docs need to be assessed before a release gate.
Activate when writing or reviewing executable acceptance tests, converting requirements into BDD scenarios, running acceptance test suites to determine whether a feature meets the agreed acceptance criteria, writing the acceptance test plan for a milestone, verifying that all acceptance criteria from the requirements tracer are covered by tests, or producing an objective sign-off report for a milestone. Use when "done" needs to be verifiable by code, not a matter of opinion.
feature flag, flag lifecycle, flag debt, flag cleanup, stale flags, release flag, flag registry, rolling out a flag, flag removal, dark launch, kill switch, gradual rollout, flag expiry, flag audit, feature toggle
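A gradual rollout of the kind listed above is usually a stable hash bucket per (flag, user) pair, so the cohort is sticky rather than re-randomised per request. A minimal sketch; the flag name is invented.

```python
import hashlib

# Toy gradual-rollout check: the user is in the cohort when a stable
# hash of (flag, user_id) lands below the rollout percentage.
def is_enabled(flag: str, user_id: str, rollout_percent: float) -> bool:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100          # stable bucket 0-99
    return bucket < rollout_percent

# The same user always gets the same answer for the same flag, so
# raising the percentage only ever adds users to the cohort.
users = [f"user-{i}" for i in range(1000)]
enabled = [u for u in users if is_enabled("new-checkout", u, 10)]
share = len(enabled) / len(users)
```

Flag debt starts the moment rollout_percent reaches 100: the check above becomes dead branching, which is what the cleanup and expiry triggers are for.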
Activate when building software that calls an LLM internally — your product is the builder, not just the user. Use when designing prompt pipelines, implementing a RAG system, building agent tool loops, setting up an eval framework, or shipping an AI-powered feature. Trigger phrases: "build an LLM feature", "add AI to the product", "implement a chatbot", "build a RAG pipeline", "prompt engineering for our app", "eval framework", "we're shipping an AI feature", "LLM pipeline", "AI-powered feature".
Activate when designing or reviewing observability strategies, defining SLOs and error budgets, evaluating monitoring and alerting configurations, reviewing logging and tracing implementations, investigating production incidents through metrics and logs, designing on-call runbooks, assessing whether a service is production-ready from an operational perspective, defining DORA metrics collection, planning reliability engineering work, or evaluating partner company observability implementations against agreed NFRs. Use for everything from setting up the three pillars (metrics, logs, traces) to running error budget reviews and reliability retrospectives.
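Error-budget arithmetic for an availability SLO is small enough to make explicit. A sketch with invented figures:

```python
# Error budget for an availability SLO: the budget is the fraction of
# requests the SLO permits to fail over the window.
def error_budget(slo: float, total_requests: int, failed: int) -> dict:
    allowed = total_requests * (1 - slo)        # failures the SLO permits
    remaining = allowed - failed
    return {
        "allowed_failures": allowed,
        "consumed_percent": round(100 * failed / allowed, 1) if allowed else None,
        "remaining_failures": remaining,
        "budget_exhausted": remaining < 0,
    }

# A 99.9% SLO over 1,000,000 requests leaves a budget of 1,000 failures.
status = error_budget(slo=0.999, total_requests=1_000_000, failed=250)
```

An error budget review is then a policy question: what the team does when consumed_percent crosses an agreed threshold, for example freezing risky releases.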
Activate when investigating performance problems, capacity planning, defining performance NFRs, reviewing load test results, designing auto-scaling strategies, analysing query performance, identifying bottlenecks in distributed systems, troubleshooting latency regressions, evaluating partner company performance test evidence, designing chaos engineering experiments, reviewing reliability patterns (circuit breakers, bulkheads, retries, timeouts), or determining whether a system meets its performance SLOs under realistic production load. Use for any work where response time, throughput, or system resilience under stress needs to be measured, designed, or improved.
Activate when the user wants to create a pull request, run pre-merge verification, manage the code review and approval process, merge code, create a release tag, or orchestrate the full PR lifecycle from ready-to-review through to merged. This skill runs all mandatory pre-merge gates before creating the PR and coordinates the merge process using outputs from code-review-quality-gates, release-readiness, and security-audit-secure-sdlc. Also trigger for: "create a PR", "open a pull request", "ready to merge", "merge the code", "submit for review", "pre-merge checklist", "PR description", "merge process", "tag the release", "ship it", "push to review".
Activate when assessing whether a release is ready for production, running a pre-release readiness review, creating a release checklist, writing a go/no-go decision, planning a deployment with rollback procedures, or tracking the resolution of pre-release blockers. Use for any production deployment decision where the consequences of getting it wrong are significant.
Activate when you need to validate that circuit breakers, retries, and fallbacks actually work under real failure conditions — not just in unit tests. Use after go-live to run quarterly chaos experiments against production-like environments, inject faults in CI to catch resilience regressions, or run a game day to rehearse incident response. Triggers: "do our circuit breakers actually work?", "what happens when the database is slow?", "prove the system handles a pod failure gracefully", "quarterly resilience check".
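The circuit-breaker behaviour being validated can be sketched minimally: open after N consecutive failures, half-open after a cooldown. Thresholds are illustrative, and the clock is injectable so the state machine can be exercised without real waiting, which is exactly what a fault-injection test needs.

```python
import time

# Minimal circuit breaker: closed -> open after N consecutive failures,
# open -> half-open after the cooldown, half-open -> closed on success.
class CircuitBreaker:
    def __init__(self, failure_threshold=3, cooldown=30.0, clock=time.monotonic):
        self.failure_threshold, self.cooldown = failure_threshold, cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    @property
    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.cooldown:
            return "half-open"      # allow one trial call through
        return "open"

    def call(self, op, fallback):
        if self.state == "open":
            return fallback()        # short-circuit: don't touch the dependency
        try:
            result = op()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()
            return fallback()
        self.failures, self.opened_at = 0, None   # success closes the breaker
        return result

now = [0.0]
cb = CircuitBreaker(cooldown=30.0, clock=lambda: now[0])
def boom(): raise TimeoutError("slow dependency")
for _ in range(3):
    cb.call(boom, fallback=lambda: "cached")
tripped = cb.state             # breaker is now open
now[0] += 31.0                 # after the cooldown it half-opens
```

A chaos experiment injects the real fault (a slow database, a killed pod) and asserts the same transitions happen in the deployed system, not just in this unit-level model.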
cloud cost, AWS bill, GCP cost, FinOps, cost optimization, right-sizing, cost per feature, budget alert, orphaned resources, reserved instances, cloud spending, cost tagging, cost attribution, Azure cost, cost anomaly, unattached volumes, savings plans, cost per user, egress cost, cloud waste
Activate when measuring or reporting delivery performance, calculating DORA metrics, evaluating a team's delivery velocity, investigating why deployment frequency has dropped, analysing lead time regressions, reporting on change failure rate after a string of incidents, or building a metrics dashboard for engineering leadership. Use when delivery data needs to be turned into insights, or when leadership needs evidence to support an investment decision about engineering capability.
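Deployment frequency and change failure rate, two of the DORA metrics mentioned, fall straight out of a list of deployment records. A sketch with invented data:

```python
from datetime import datetime

# Invented deployment records for one month: timestamp plus whether
# the change caused a failure in production.
deploys = [
    {"at": datetime(2024, 5, d), "failed": failed}
    for d, failed in [(1, False), (3, True), (8, False), (10, False),
                      (15, False), (22, True), (29, False)]
]

def deployment_frequency(deploys, days: int) -> float:
    return len(deploys) / days                    # deploys per day

def change_failure_rate(deploys) -> float:
    return sum(d["failed"] for d in deploys) / len(deploys)

freq = deployment_frequency(deploys, days=31)     # 7 deploys in May
cfr = change_failure_rate(deploys)                # 2 of 7 failed
```

Lead time and time-to-restore need two timestamps per record rather than one, but the computation is the same shape: a reduction over event data the pipeline already emits.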
Activate when auditing the health of third-party dependencies across services, evaluating whether a partner company's codebase has unacceptable dependency risks, defining a dependency update policy for the engagement, investigating a CVE that affects a delivered system, planning a major framework or runtime upgrade, assessing the impact of a dependency reaching end-of-life, reviewing an SBOM for a delivered artefact, or establishing the dependency governance process that both companies must follow. Use when outdated dependencies are creating security, compatibility, or operational risk, or when a framework version is approaching end-of-support and an upgrade must be planned.
Activate when a new engineer is joining, re-onboarding after extended leave, or the team needs to codify engineering norms. Triggers include "new engineer joining", "onboarding checklist", "local dev setup", "engineering norms", "engineering handbook", "first week tasks", and "knowledge transfer to new hire". Produces day 1 / week 1 / month 1 checklists, a tool-pinned local development setup, and an engineering norms doc the whole team agrees on.
Activate when conducting a post-incident review, writing a post-mortem document, running a blameless incident retrospective, analysing a production outage to identify contributing factors and systemic improvements, tracking action items from prior post-mortems, or building the incident response process for a team. Use when something went wrong in production and the goal is to learn from it and prevent recurrence.
Governs the formal end of a project: documentation audit, deliverables sign-off, knowledge transfer, operational handover, DORA final report, and lessons learned. Trigger when: a project is wrapping up, a system is being handed to a sustaining team, a contract engagement is ending, a major version is closed and moving to maintenance, or the team needs to verify readiness for handover. Also trigger for: "close out the project", "handover to ops", "final handover", "project done checklist", "project wrap-up", "lessons learned", "DORA final report", "is the project done", "hand off to sustaining team", "engagement wrap-up".
Activate when assessing engineering team health, coaching engineers on technical practices, identifying cultural gaps that drive quality or delivery problems, establishing shared engineering values and standards, addressing recurring issues that stem from team dynamics rather than technical problems, planning capability development, running retrospectives, tracking growth goals, or identifying knowledge concentration risks.
Activate when inventorying technical debt, prioritising debt repayment, making the case for debt remediation work to stakeholders, categorising architectural versus code-level debt, estimating the cost of carrying debt versus paying it down, or tracking debt items over time. Use when delivery velocity is degrading due to accumulated debt, when a new team is taking over a codebase and needs to understand its liabilities, or when reviewing a codebase to produce a debt report.
Activate when designing a custom distributed protocol that must be provably correct — consensus algorithms, leader election, two-phase commit, idempotency guarantees, exactly-once delivery, or any concurrent protocol where an incorrect interleaving causes data loss or corruption. Use TLA+ to specify and model-check the protocol before writing a line of implementation code.
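What a model checker buys you can be illustrated with a Python stand-in (not TLA+ itself): enumerate every interleaving of two read-modify-write workers and observe the lost-update outcome that a checked invariant such as "x = 2 at the end" would surface as a counterexample.

```python
from itertools import permutations

# Toy state-space exploration standing in for a TLA+/TLC model check:
# two workers each read a shared counter, then write read-value + 1.
def run(schedule):
    shared = {"x": 0}
    local = {0: None, 1: None}
    for worker, step in schedule:
        if step == "read":
            local[worker] = shared["x"]
        else:                              # "write": x := local + 1
            shared["x"] = local[worker] + 1
    return shared["x"]

steps = [(0, "read"), (0, "write"), (1, "read"), (1, "write")]
# Every interleaving that keeps each worker's read before its write:
schedules = {s for s in permutations(steps)
             if s.index((0, "read")) < s.index((0, "write"))
             and s.index((1, "read")) < s.index((1, "write"))}
outcomes = {run(s) for s in schedules}
# outcomes contains 1 as well as 2: the lost-update interleaving.
```

Brute force works here because the state space is six schedules; TLA+ and TLC exist because real protocols have state spaces this approach cannot enumerate by hand, yet the failure it finds is the same kind of bad interleaving.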
The master workflow skill for the full software development lifecycle. Activate when the user wants to start a new feature, task, or project and drive the complete pipeline end-to-end: from idea through design, implementation, testing, PR, and docs. Also activate when the user wants to know where they are in the pipeline, resume a paused pipeline, skip or re-run a stage, or check overall workflow status. This is the single entry point for "build something properly". Trigger for: "start a new feature", "begin the pipeline", "run the sdlc", "what stage are we on", "resume the workflow", "orchestrate the build", "full pipeline", "start from scratch", "end to end build", "build this from requirements to production", "new task", "new project".