agent-performance-benchmarker


Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "agent-performance-benchmarker" with this command: npx skills add ruvnet/claude-flow/ruvnet-claude-flow-agent-performance-benchmarker

name: performance-benchmarker
type: analyst
color: "#607D8B"
description: Implements comprehensive performance benchmarking for distributed consensus protocols
capabilities:
  - throughput_measurement
  - latency_analysis
  - resource_monitoring
  - comparative_analysis
  - adaptive_tuning
priority: medium
hooks:
  pre: |
    echo "📊 Performance Benchmarker analyzing: $TASK"
    # Initialize monitoring systems
    if [[ "$TASK" == "benchmark" ]]; then
      echo "⚡ Starting performance metric collection"
    fi
  post: |
    echo "📈 Performance analysis complete"
    # Generate performance report
    echo "📋 Compiling benchmarking results and recommendations"

Performance Benchmarker

Implements comprehensive performance benchmarking and optimization analysis for distributed consensus protocols.

Core Responsibilities

  • Protocol Benchmarking: Measure throughput, latency, and scalability across consensus algorithms

  • Resource Monitoring: Track CPU, memory, network, and storage utilization patterns

  • Comparative Analysis: Compare Byzantine, Raft, and Gossip protocol performance

  • Adaptive Tuning: Implement real-time parameter optimization and load balancing

  • Performance Reporting: Generate actionable insights and optimization recommendations

Technical Implementation

Core Benchmarking Framework

class ConsensusPerformanceBenchmarker {
  constructor() {
    this.benchmarkSuites = new Map();
    this.performanceMetrics = new Map();
    this.historicalData = new TimeSeriesDatabase();
    this.currentBenchmarks = new Set();
    this.adaptiveOptimizer = new AdaptiveOptimizer();
    this.alertSystem = new PerformanceAlertSystem();
  }

// Register a benchmark suite for a specific consensus protocol
registerBenchmarkSuite(protocolName, benchmarkConfig) {
  const suite = new BenchmarkSuite(protocolName, benchmarkConfig);
  this.benchmarkSuites.set(protocolName, suite);
  return suite;
}

// Execute comprehensive performance benchmarks
async runComprehensiveBenchmarks(protocols, scenarios) {
  const results = new Map();

for (const protocol of protocols) {
  const protocolResults = new Map();
  
  for (const scenario of scenarios) {
    console.log(`Running ${scenario.name} benchmark for ${protocol}`);
    
    const benchmarkResult = await this.executeBenchmarkScenario(
      protocol, scenario
    );
    
    protocolResults.set(scenario.name, benchmarkResult);
    
    // Store in historical database
    await this.historicalData.store({
      protocol: protocol,
      scenario: scenario.name,
      timestamp: Date.now(),
      metrics: benchmarkResult
    });
  }
  
  results.set(protocol, protocolResults);
}

// Generate comparative analysis
const analysis = await this.generateComparativeAnalysis(results);

// Trigger adaptive optimizations
await this.adaptiveOptimizer.optimizeBasedOnResults(results);

return {
  benchmarkResults: results,
  comparativeAnalysis: analysis,
  recommendations: await this.generateOptimizationRecommendations(results)
};

}

async executeBenchmarkScenario(protocol, scenario) {
  const benchmark = this.benchmarkSuites.get(protocol);
  if (!benchmark) {
    throw new Error(`No benchmark suite found for protocol: ${protocol}`);
  }

// Initialize benchmark environment
const environment = await this.setupBenchmarkEnvironment(scenario);

try {
  // Pre-benchmark setup
  await benchmark.setup(environment);
  
  // Execute benchmark phases
  const results = {
    throughput: await this.measureThroughput(benchmark, scenario),
    latency: await this.measureLatency(benchmark, scenario),
    resourceUsage: await this.measureResourceUsage(benchmark, scenario),
    scalability: await this.measureScalability(benchmark, scenario),
    faultTolerance: await this.measureFaultTolerance(benchmark, scenario)
  };
  
  // Post-benchmark analysis
  results.analysis = await this.analyzeBenchmarkResults(results);
  
  return results;
  
} finally {
  // Cleanup benchmark environment
  await this.cleanupBenchmarkEnvironment(environment);
}

}
}
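The scenarios passed to `runComprehensiveBenchmarks` are plain objects. The field names below match what `measureThroughput` and `measureLatency` read from `scenario` (`duration`, `initialRate`, `rampUp`, `rateIncrement`, `pattern`, `sampleSize`, `warmupSize`); the scenario names themselves are illustrative:

```javascript
// Hypothetical scenario definitions; the field names mirror the ones the
// benchmark methods read, while the scenario names are made up for this sketch.
const scenarios = [
  { name: 'steady-load', duration: 60000, initialRate: 50, pattern: 'constant' },
  { name: 'ramp-up', duration: 120000, initialRate: 10, rampUp: true, rateIncrement: 5 },
  { name: 'latency-probe', sampleSize: 10000, warmupSize: 1000 }
];
```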

Throughput Measurement System

class ThroughputBenchmark {
  constructor(protocol, configuration) {
    this.protocol = protocol;
    this.config = configuration;
    this.metrics = new MetricsCollector();
    this.loadGenerator = new LoadGenerator();
  }

async measureThroughput(scenario) {
  const measurements = [];
  const duration = scenario.duration || 60000; // 1 minute default
  const startTime = Date.now();

// Initialize load generator
await this.loadGenerator.initialize({
  requestRate: scenario.initialRate || 10,
  rampUp: scenario.rampUp || false,
  pattern: scenario.pattern || 'constant'
});

// Start metrics collection
this.metrics.startCollection(['transactions_per_second', 'success_rate']);

let currentRate = scenario.initialRate || 10;
const rateIncrement = scenario.rateIncrement || 5;
const measurementInterval = 5000; // 5 seconds

while (Date.now() - startTime < duration) {
  const intervalStart = Date.now();
  
  // Generate load for this interval
  const transactions = await this.generateTransactionLoad(
    currentRate, measurementInterval
  );
  
  // Measure throughput for this interval
  const intervalMetrics = await this.measureIntervalThroughput(
    transactions, measurementInterval
  );
  
  measurements.push({
    timestamp: intervalStart,
    requestRate: currentRate,
    actualThroughput: intervalMetrics.throughput,
    successRate: intervalMetrics.successRate,
    averageLatency: intervalMetrics.averageLatency,
    p95Latency: intervalMetrics.p95Latency,
    p99Latency: intervalMetrics.p99Latency
  });
  
  // Adaptive rate adjustment
  if (scenario.rampUp && intervalMetrics.successRate > 0.95) {
    currentRate += rateIncrement;
  } else if (intervalMetrics.successRate < 0.8) {
    currentRate = Math.max(1, currentRate - rateIncrement);
  }
  
  // Wait for next interval
  const elapsed = Date.now() - intervalStart;
  if (elapsed < measurementInterval) {
    await this.sleep(measurementInterval - elapsed);
  }
}

// Stop metrics collection
this.metrics.stopCollection();

// Analyze throughput results
return this.analyzeThroughputMeasurements(measurements);

}

async generateTransactionLoad(rate, duration) {
  const transactions = [];
  const interval = 1000 / rate; // interval between transactions in ms
  const endTime = Date.now() + duration;

while (Date.now() < endTime) {
  const transactionStart = Date.now();
  
  const transaction = {
    id: `tx_${Date.now()}_${Math.random()}`,
    type: this.getRandomTransactionType(),
    data: this.generateTransactionData(),
    timestamp: transactionStart
  };
  
  // Submit transaction to consensus protocol
  const promise = this.protocol.submitTransaction(transaction)
    .then(result => ({
      ...transaction,
      result: result,
      latency: Date.now() - transactionStart,
      success: result.committed === true
    }))
    .catch(error => ({
      ...transaction,
      error: error,
      latency: Date.now() - transactionStart,
      success: false
    }));
  
  transactions.push(promise);
  
  // Wait for next transaction interval
  await this.sleep(interval);
}

// Wait for all transactions to complete
return await Promise.all(transactions);

}

analyzeThroughputMeasurements(measurements) {
  const totalMeasurements = measurements.length;
  const avgThroughput = measurements.reduce((sum, m) => sum + m.actualThroughput, 0) / totalMeasurements;
  const maxThroughput = Math.max(...measurements.map(m => m.actualThroughput));
  const avgSuccessRate = measurements.reduce((sum, m) => sum + m.successRate, 0) / totalMeasurements;

// Find optimal operating point (highest throughput with >95% success rate)
const optimalPoints = measurements.filter(m => m.successRate >= 0.95);
const optimalThroughput = optimalPoints.length > 0 ? 
  Math.max(...optimalPoints.map(m => m.actualThroughput)) : 0;

return {
  averageThroughput: avgThroughput,
  maxThroughput: maxThroughput,
  optimalThroughput: optimalThroughput,
  averageSuccessRate: avgSuccessRate,
  measurements: measurements,
  sustainableThroughput: this.calculateSustainableThroughput(measurements),
  throughputVariability: this.calculateThroughputVariability(measurements)
};

}

calculateSustainableThroughput(measurements) {
  // Highest throughput sustained for at least 80% of the measurement intervals
  const sortedThroughputs = measurements.map(m => m.actualThroughput).sort((a, b) => b - a);
  const p80Index = Math.floor(sortedThroughputs.length * 0.2);
  return sortedThroughputs[p80Index];
}
}
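`calculateThroughputVariability` is referenced in the results object above but not shown in this listing. A plausible standalone sketch is the coefficient of variation (standard deviation divided by mean) of the per-interval throughput samples:

```javascript
// Hypothetical helper mirroring calculateThroughputVariability: the
// coefficient of variation of per-interval throughput. 0 means perfectly
// stable throughput; larger values mean burstier behavior.
function throughputVariability(measurements) {
  const values = measurements.map(m => m.actualThroughput);
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance) / mean;
}
```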

Latency Analysis System

class LatencyBenchmark {
  constructor(protocol, configuration) {
    this.protocol = protocol;
    this.config = configuration;
    this.latencyHistogram = new LatencyHistogram();
    this.percentileCalculator = new PercentileCalculator();
  }

async measureLatency(scenario) {
  const measurements = [];
  const sampleSize = scenario.sampleSize || 10000;
  const warmupSize = scenario.warmupSize || 1000;

console.log(`Measuring latency with ${sampleSize} samples (${warmupSize} warmup)`);

// Warmup phase
await this.performWarmup(warmupSize);

// Measurement phase
for (let i = 0; i < sampleSize; i++) {
  const latencyMeasurement = await this.measureSingleTransactionLatency();
  measurements.push(latencyMeasurement);
  
  // Progress reporting
  if (i % 1000 === 0) {
    console.log(`Completed ${i}/${sampleSize} latency measurements`);
  }
}

// Analyze latency distribution
return this.analyzeLatencyDistribution(measurements);

}

async measureSingleTransactionLatency() {
  const transaction = {
    id: `latency_tx_${Date.now()}_${Math.random()}`,
    type: 'benchmark',
    data: { value: Math.random() },
    phases: {}
  };

// Phase 1: Submission
const submissionStart = performance.now();
const submissionPromise = this.protocol.submitTransaction(transaction);
transaction.phases.submission = performance.now() - submissionStart;

// Phase 2: Consensus
const consensusStart = performance.now();
const result = await submissionPromise;
transaction.phases.consensus = performance.now() - consensusStart;

// Phase 3: Application (if applicable)
let applicationLatency = 0;
if (result.applicationTime) {
  applicationLatency = result.applicationTime;
}
transaction.phases.application = applicationLatency;

// Total end-to-end latency
const totalLatency = transaction.phases.submission + 
                    transaction.phases.consensus + 
                    transaction.phases.application;

return {
  transactionId: transaction.id,
  totalLatency: totalLatency,
  phases: transaction.phases,
  success: result.committed === true,
  timestamp: Date.now()
};

}

analyzeLatencyDistribution(measurements) {
  const successfulMeasurements = measurements.filter(m => m.success);
  const latencies = successfulMeasurements.map(m => m.totalLatency);

if (latencies.length === 0) {
  throw new Error('No successful latency measurements');
}

// Calculate percentiles
const percentiles = this.percentileCalculator.calculate(latencies, [
  50, 75, 90, 95, 99, 99.9, 99.99
]);

// Phase-specific analysis
const phaseAnalysis = this.analyzePhaseLatencies(successfulMeasurements);

// Latency distribution analysis
const distribution = this.analyzeLatencyHistogram(latencies);

return {
  sampleSize: successfulMeasurements.length,
  mean: latencies.reduce((sum, l) => sum + l, 0) / latencies.length,
  median: percentiles[50],
  standardDeviation: this.calculateStandardDeviation(latencies),
  percentiles: percentiles,
  phaseAnalysis: phaseAnalysis,
  distribution: distribution,
  outliers: this.identifyLatencyOutliers(latencies)
};

}

analyzePhaseLatencies(measurements) {
  const phases = ['submission', 'consensus', 'application'];
  const phaseAnalysis = {};

for (const phase of phases) {
  const phaseLatencies = measurements.map(m => m.phases[phase]);
  const validLatencies = phaseLatencies.filter(l => l > 0);
  
  if (validLatencies.length > 0) {
    phaseAnalysis[phase] = {
      mean: validLatencies.reduce((sum, l) => sum + l, 0) / validLatencies.length,
      p50: this.percentileCalculator.calculate(validLatencies, [50])[50],
      p95: this.percentileCalculator.calculate(validLatencies, [95])[95],
      p99: this.percentileCalculator.calculate(validLatencies, [99])[99],
      max: Math.max(...validLatencies),
      contributionPercent: (validLatencies.reduce((sum, l) => sum + l, 0) / 
                           measurements.reduce((sum, m) => sum + m.totalLatency, 0)) * 100
    };
  }
}

return phaseAnalysis;

}
}
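The `PercentileCalculator` used throughout `LatencyBenchmark` is not shown in this listing. A minimal nearest-rank sketch that is compatible with the call sites above (an array of values plus an array of percentiles, returning an object keyed by percentile) might look like:

```javascript
// Minimal nearest-rank percentile implementation; an assumption about the
// PercentileCalculator API, inferred from how it is called above.
class PercentileCalculator {
  calculate(values, percentiles) {
    const sorted = [...values].sort((a, b) => a - b);
    const result = {};
    for (const p of percentiles) {
      // Nearest-rank: the smallest value such that p% of samples are <= it.
      const index = Math.min(sorted.length - 1, Math.max(0, Math.ceil((p / 100) * sorted.length) - 1));
      result[p] = sorted[index];
    }
    return result;
  }
}
```

Because object keys are coerced to strings, lookups like `percentiles[50]` and `percentiles[99.9]` both work as used in `analyzeLatencyDistribution`.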

Resource Usage Monitor

class ResourceUsageMonitor {
  constructor() {
    this.monitoringActive = false;
    this.samplingInterval = 1000; // 1 second
    this.measurements = [];
    this.systemMonitor = new SystemMonitor();
  }

async measureResourceUsage(protocol, scenario) {
  console.log('Starting resource usage monitoring');

this.monitoringActive = true;
this.measurements = [];

// Start monitoring in background
const monitoringPromise = this.startContinuousMonitoring();

try {
  // Execute the benchmark scenario
  const benchmarkResult = await this.executeBenchmarkWithMonitoring(
    protocol, scenario
  );
  
  // Stop monitoring
  this.monitoringActive = false;
  await monitoringPromise;
  
  // Analyze resource usage
  const resourceAnalysis = this.analyzeResourceUsage();
  
  return {
    benchmarkResult: benchmarkResult,
    resourceUsage: resourceAnalysis
  };
  
} catch (error) {
  this.monitoringActive = false;
  throw error;
}

}

async startContinuousMonitoring() {
  while (this.monitoringActive) {
    const measurement = await this.collectResourceMeasurement();
    this.measurements.push(measurement);

  await this.sleep(this.samplingInterval);
}

}

async collectResourceMeasurement() {
  const timestamp = Date.now();

// CPU usage
const cpuUsage = await this.systemMonitor.getCPUUsage();

// Memory usage
const memoryUsage = await this.systemMonitor.getMemoryUsage();

// Network I/O
const networkIO = await this.systemMonitor.getNetworkIO();

// Disk I/O
const diskIO = await this.systemMonitor.getDiskIO();

// Process-specific metrics
const processMetrics = await this.systemMonitor.getProcessMetrics();

return {
  timestamp: timestamp,
  cpu: {
    totalUsage: cpuUsage.total,
    consensusUsage: cpuUsage.process,
    loadAverage: cpuUsage.loadAverage,
    coreUsage: cpuUsage.cores
  },
  memory: {
    totalUsed: memoryUsage.used,
    totalAvailable: memoryUsage.available,
    processRSS: memoryUsage.processRSS,
    processHeap: memoryUsage.processHeap,
    gcStats: memoryUsage.gcStats
  },
  network: {
    bytesIn: networkIO.bytesIn,
    bytesOut: networkIO.bytesOut,
    packetsIn: networkIO.packetsIn,
    packetsOut: networkIO.packetsOut,
    connectionsActive: networkIO.connectionsActive
  },
  disk: {
    bytesRead: diskIO.bytesRead,
    bytesWritten: diskIO.bytesWritten,
    operationsRead: diskIO.operationsRead,
    operationsWrite: diskIO.operationsWrite,
    queueLength: diskIO.queueLength
  },
  process: {
    consensusThreads: processMetrics.consensusThreads,
    fileDescriptors: processMetrics.fileDescriptors,
    uptime: processMetrics.uptime
  }
};

}

analyzeResourceUsage() {
  if (this.measurements.length === 0) {
    return null;
  }

const cpuAnalysis = this.analyzeCPUUsage();
const memoryAnalysis = this.analyzeMemoryUsage();
const networkAnalysis = this.analyzeNetworkUsage();
const diskAnalysis = this.analyzeDiskUsage();

return {
  duration: this.measurements[this.measurements.length - 1].timestamp - 
           this.measurements[0].timestamp,
  sampleCount: this.measurements.length,
  cpu: cpuAnalysis,
  memory: memoryAnalysis,
  network: networkAnalysis,
  disk: diskAnalysis,
  efficiency: this.calculateResourceEfficiency(),
  bottlenecks: this.identifyResourceBottlenecks()
};

}

analyzeCPUUsage() {
  const cpuUsages = this.measurements.map(m => m.cpu.consensusUsage);

return {
  average: cpuUsages.reduce((sum, usage) => sum + usage, 0) / cpuUsages.length,
  peak: Math.max(...cpuUsages),
  p95: this.calculatePercentile(cpuUsages, 95),
  variability: this.calculateStandardDeviation(cpuUsages),
  coreUtilization: this.analyzeCoreUtilization(),
  trends: this.analyzeCPUTrends()
};

}

analyzeMemoryUsage() {
  const memoryUsages = this.measurements.map(m => m.memory.processRSS);
  const heapUsages = this.measurements.map(m => m.memory.processHeap);

return {
  averageRSS: memoryUsages.reduce((sum, usage) => sum + usage, 0) / memoryUsages.length,
  peakRSS: Math.max(...memoryUsages),
  averageHeap: heapUsages.reduce((sum, usage) => sum + usage, 0) / heapUsages.length,
  peakHeap: Math.max(...heapUsages),
  memoryLeaks: this.detectMemoryLeaks(),
  gcImpact: this.analyzeGCImpact(),
  growth: this.calculateMemoryGrowth()
};

}

identifyResourceBottlenecks() {
  const bottlenecks = [];

// CPU bottleneck detection
const avgCPU = this.measurements.reduce((sum, m) => sum + m.cpu.consensusUsage, 0) / 
               this.measurements.length;
if (avgCPU > 80) {
  bottlenecks.push({
    type: 'CPU',
    severity: 'HIGH',
    description: `High CPU usage (${avgCPU.toFixed(1)}%)`
  });
}

// Memory bottleneck detection
const memoryGrowth = this.calculateMemoryGrowth();
if (memoryGrowth.rate > 1024 * 1024) { // 1 MB/s growth
  bottlenecks.push({
    type: 'MEMORY',
    severity: 'MEDIUM',
    description: `High memory growth rate (${(memoryGrowth.rate / 1024 / 1024).toFixed(2)} MB/s)`
  });
}

// Network bottleneck detection
const avgNetworkOut = this.measurements.reduce((sum, m) => sum + m.network.bytesOut, 0) / 
                      this.measurements.length;
if (avgNetworkOut > 100 * 1024 * 1024) { // 100 MB/s
  bottlenecks.push({
    type: 'NETWORK',
    severity: 'MEDIUM',
    description: `High network output (${(avgNetworkOut / 1024 / 1024).toFixed(2)} MB/s)`
  });
}

return bottlenecks;

} }
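The `calculateMemoryGrowth` and `detectMemoryLeaks` methods referenced above are not shown in this listing. One common approach, sketched here as an assumption, is a least-squares slope of process RSS over time, which matches the `rate` field (bytes per second) that `identifyResourceBottlenecks` reads:

```javascript
// Hypothetical sketch of a memory-growth estimator: least-squares slope of
// RSS over time, in bytes per second. A sustained positive slope across a
// long run is a typical leak signal.
function memoryGrowthRate(measurements) {
  const n = measurements.length;
  const t0 = measurements[0].timestamp;
  const xs = measurements.map(m => (m.timestamp - t0) / 1000); // seconds
  const ys = measurements.map(m => m.memory.processRSS);       // bytes
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  return den === 0 ? 0 : num / den; // bytes per second
}
```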

Adaptive Performance Optimizer

class AdaptiveOptimizer {
  constructor() {
    this.optimizationHistory = new Map();
    this.performanceModel = new PerformanceModel();
    this.parameterTuner = new ParameterTuner();
    this.currentOptimizations = new Map();
  }

async optimizeBasedOnResults(benchmarkResults) {
  const optimizations = [];

for (const [protocol, results] of benchmarkResults) {
  const protocolOptimizations = await this.optimizeProtocol(protocol, results);
  optimizations.push(...protocolOptimizations);
}

// Apply optimizations gradually
await this.applyOptimizations(optimizations);

return optimizations;

}

async optimizeProtocol(protocol, results) {
  const optimizations = [];

// Analyze performance bottlenecks
const bottlenecks = this.identifyPerformanceBottlenecks(results);

for (const bottleneck of bottlenecks) {
  const optimization = await this.generateOptimization(protocol, bottleneck);
  if (optimization) {
    optimizations.push(optimization);
  }
}

// Parameter tuning based on performance characteristics
const parameterOptimizations = await this.tuneParameters(protocol, results);
optimizations.push(...parameterOptimizations);

return optimizations;

}

identifyPerformanceBottlenecks(results) {
  const bottlenecks = [];

// Throughput bottlenecks
for (const [scenario, result] of results) {
  if (result.throughput && result.throughput.optimalThroughput < result.throughput.maxThroughput * 0.8) {
    bottlenecks.push({
      type: 'THROUGHPUT_DEGRADATION',
      scenario: scenario,
      severity: 'HIGH',
      impact: (result.throughput.maxThroughput - result.throughput.optimalThroughput) / 
             result.throughput.maxThroughput,
      details: result.throughput
    });
  }
  
  // Latency bottlenecks
  if (result.latency && result.latency.p99 > result.latency.p50 * 10) {
    bottlenecks.push({
      type: 'LATENCY_TAIL',
      scenario: scenario,
      severity: 'MEDIUM',
      impact: result.latency.p99 / result.latency.p50,
      details: result.latency
    });
  }
  
  // Resource bottlenecks
  if (result.resourceUsage && result.resourceUsage.bottlenecks.length > 0) {
    bottlenecks.push({
      type: 'RESOURCE_CONSTRAINT',
      scenario: scenario,
      severity: 'HIGH',
      details: result.resourceUsage.bottlenecks
    });
  }
}

return bottlenecks;

}

async generateOptimization(protocol, bottleneck) {
  switch (bottleneck.type) {
    case 'THROUGHPUT_DEGRADATION':
      return await this.optimizeThroughput(protocol, bottleneck);
    case 'LATENCY_TAIL':
      return await this.optimizeLatency(protocol, bottleneck);
    case 'RESOURCE_CONSTRAINT':
      return await this.optimizeResourceUsage(protocol, bottleneck);
    default:
      return null;
  }
}

async optimizeThroughput(protocol, bottleneck) {
  const optimizations = [];

// Batch size optimization
if (protocol === 'raft') {
  optimizations.push({
    type: 'PARAMETER_ADJUSTMENT',
    parameter: 'max_batch_size',
    currentValue: await this.getCurrentParameter(protocol, 'max_batch_size'),
    recommendedValue: this.calculateOptimalBatchSize(bottleneck.details),
    expectedImprovement: '15-25% throughput increase',
    confidence: 0.8
  });
}

// Pipelining optimization
if (protocol === 'byzantine') {
  optimizations.push({
    type: 'FEATURE_ENABLE',
    feature: 'request_pipelining',
    description: 'Enable request pipelining to improve throughput',
    expectedImprovement: '20-30% throughput increase',
    confidence: 0.7
  });
}

return optimizations.length > 0 ? optimizations[0] : null;

}

async tuneParameters(protocol, results) {
  const optimizations = [];

// Use machine learning model to suggest parameter values
const parameterSuggestions = await this.performanceModel.suggestParameters(
  protocol, results
);

for (const suggestion of parameterSuggestions) {
  if (suggestion.confidence > 0.6) {
    optimizations.push({
      type: 'PARAMETER_TUNING',
      parameter: suggestion.parameter,
      currentValue: suggestion.currentValue,
      recommendedValue: suggestion.recommendedValue,
      expectedImprovement: suggestion.expectedImprovement,
      confidence: suggestion.confidence,
      rationale: suggestion.rationale
    });
  }
}

return optimizations;

}

async applyOptimizations(optimizations) {
  // Sort by confidence and expected impact
  const sortedOptimizations = optimizations.sort((a, b) =>
    (b.confidence * parseFloat(b.expectedImprovement)) -
    (a.confidence * parseFloat(a.expectedImprovement))
  );

// Apply optimizations gradually
for (const optimization of sortedOptimizations) {
  try {
    await this.applyOptimization(optimization);
    
    // Wait and measure impact
    await this.sleep(30000); // 30 seconds
    const impact = await this.measureOptimizationImpact(optimization);
    
    if (impact.improvement < 0.05) {
      // Revert if improvement is less than 5%
      await this.revertOptimization(optimization);
    } else {
      // Keep optimization and record success
      this.recordOptimizationSuccess(optimization, impact);
    }
    
  } catch (error) {
    console.error(`Failed to apply optimization:`, error);
    await this.revertOptimization(optimization);
  }
}

}
}
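The sort key in `applyOptimizations` leans on `parseFloat` pulling the leading number out of strings like `'15-25% throughput increase'`. A small hypothetical helper makes that scoring behavior explicit:

```javascript
// Hypothetical scoring helper mirroring the sort key in applyOptimizations.
// parseFloat('15-25% throughput increase') stops parsing at the '-', yielding
// 15, so the ranking effectively uses the lower bound of the improvement range.
function optimizationScore(opt) {
  return opt.confidence * parseFloat(opt.expectedImprovement);
}
```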

MCP Integration Hooks

Performance Metrics Storage

// Store comprehensive benchmark results
await this.mcpTools.memory_usage({
  action: 'store',
  key: `benchmark_results_${protocol}_${Date.now()}`,
  value: JSON.stringify({
    protocol: protocol,
    timestamp: Date.now(),
    throughput: throughputResults,
    latency: latencyResults,
    resourceUsage: resourceResults,
    optimizations: appliedOptimizations
  }),
  namespace: 'performance_benchmarks',
  ttl: 604800000 // 7 days
});

// Real-time performance monitoring
await this.mcpTools.metrics_collect({
  components: [
    'consensus_throughput',
    'consensus_latency_p99',
    'cpu_utilization',
    'memory_usage',
    'network_io_rate'
  ]
});

Neural Performance Learning

// Learn performance optimization patterns
await this.mcpTools.neural_patterns({
  action: 'learn',
  operation: 'performance_optimization',
  outcome: JSON.stringify({
    optimizationType: optimization.type,
    performanceGain: measurementResults.improvement,
    resourceImpact: measurementResults.resourceDelta,
    networkConditions: currentNetworkState
  })
});

// Predict optimal configurations
const configPrediction = await this.mcpTools.neural_predict({
  modelId: 'consensus_performance_model',
  input: JSON.stringify({
    workloadPattern: currentWorkload,
    networkTopology: networkState,
    resourceConstraints: systemResources
  })
});

This Performance Benchmarker provides comprehensive performance analysis, optimization recommendations, and adaptive tuning capabilities for distributed consensus protocols.

