ping-model

Measure and display AI model response latency. Use when the user types /ping, optionally followed by a model name, to test round-trip time. Captures precise timing between command receipt and response generation, with smart duration formatting (ms, seconds, or minutes). Supports cross-model testing by temporarily switching models and measuring latency.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "ping-model" with this command: npx skills add dofbi/ping-model

Ping Model

Measure AI model response latency with consistent formatting.

Quick Start

Simple Ping (current model)

bash command:"node {baseDir}/ping-model.js"

Ping Specific Model

bash command:"node {baseDir}/ping-model.js --model minimax"

Compare Multiple Models

bash command:"node {baseDir}/ping-model.js --compare kimi,minimax,deepseek"

Command Reference

Command          Description
/ping            Ping the current active model
/ping kimi       Switch to kimi, ping, return
/ping minimax    Switch to minimax, ping, return
/ping deepseek   Switch to deepseek, ping, return
/ping all        Compare all available models

Output Format

Required format - ALWAYS use this exact structure:

🧪 PING {model-name}

📤 Sent:     {HH:MM:SS.mmm}
📥 Received: {HH:MM:SS.mmm}
⏱️  Latency:  {formatted-duration}

🎯 Pong!

Latency Formatting Rules

  • < 1 second: Display as XXXms (e.g., 847ms)
  • ≥ 1 second, < 60 seconds: Display as X.XXs (e.g., 1.23s)
  • ≥ 60 seconds: Display as X.XXmin (e.g., 2.50min)
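The three rules above can be sketched as a small formatter. This is a minimal illustration, not the skill's actual source; the function name `formatLatency` is an assumption.

```javascript
// Format a latency in milliseconds per the rules above.
// Illustrative sketch; the name formatLatency is not from the skill's source.
function formatLatency(ms) {
  if (ms < 1000) return `${Math.round(ms)}ms`;          // < 1 s  -> 847ms
  if (ms < 60000) return `${(ms / 1000).toFixed(2)}s`;  // 1-60 s -> 1.33s
  return `${(ms / 60000).toFixed(2)}min`;               // >= 60 s -> 1.17min
}

console.log(formatLatency(847));   // "847ms"
console.log(formatLatency(1330));  // "1.33s"
console.log(formatLatency(70456)); // "1.17min"
```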

Examples

Fast response (< 1s):

🧪 PING kimi

📤 Sent:     09:34:15.123
📥 Received: 09:34:15.247
⏱️  Latency:  124ms

🎯 Pong!

Medium response (1-60s):

🧪 PING minimax

📤 Sent:     09:34:15.123
📥 Received: 09:34:16.456
⏱️  Latency:  1.33s

🎯 Pong!

Slow response (≥ 60s):

🧪 PING gemini

📤 Sent:     09:34:15.123
📥 Received: 09:35:25.456
⏱️  Latency:  1.17min

🎯 Pong!

Cross-Model Testing

When testing a non-active model:

  1. Save current model context
  2. Switch to target model
  3. Execute ping
  4. Measure latency
  5. Restore original model
  6. Display result

Critical: Always return to the original model after testing.
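The steps above, including the restore-on-exit requirement, can be sketched with a try/finally. Here `currentModel`, `switchModel`, and `ping` are hypothetical stand-ins for whatever model-switching API the host agent exposes; only the restore pattern is the point.

```javascript
// Sketch of the cross-model flow. currentModel(), switchModel(), and ping()
// are hypothetical host-agent hooks; try/finally guarantees step 5.
async function pingOtherModel(target, { currentModel, switchModel, ping }) {
  const original = currentModel();   // 1. save current model context
  await switchModel(target);         // 2. switch to target model
  try {
    const t1 = Date.now();
    await ping();                    // 3-4. execute ping, measure latency
    return Date.now() - t1;
  } finally {
    await switchModel(original);     // 5. always restore the original model
  }
}
```

The finally block runs even if the ping itself throws, so the agent is never left on the wrong model.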

Comparison Mode

bash command:"node {baseDir}/ping-model.js --compare kimi,minimax,deepseek,gpt"

Output format:

══════════════════════════════════════════════════
🧪 MODEL COMPARISON
══════════════════════════════════════════════════

🥇 kimi      124ms
🥈 minimax   1.33s
🥉 deepseek  2.45s
4️⃣  gpt       5.67s

🏆 Fastest: kimi (124ms)
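The ranking shown above amounts to sorting results by latency and decorating them with medals. A minimal sketch, assuming each result carries a model name, raw milliseconds, and a pre-formatted label (all names here are illustrative):

```javascript
// Sort comparison results by latency, attach medal/rank markers, and
// report the fastest model. Illustrative sketch, not the skill's source.
const MEDALS = ['🥇', '🥈', '🥉'];

function renderComparison(results) {
  const sorted = [...results].sort((a, b) => a.ms - b.ms);
  const rows = sorted.map((r, i) =>
    `${MEDALS[i] || `${i + 1}️⃣ `} ${r.model.padEnd(9)} ${r.label}`);
  const best = sorted[0];
  return rows.concat('', `🏆 Fastest: ${best.model} (${best.label})`).join('\n');
}
```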

Implementation

The ping latency is measured as the time between:

  • T1: Message received by the agent
  • T2: Response ready to send

This captures the model's internal processing time, not network latency.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Model Usage Monitor

Monitors and tallies model call counts and costs, computes cache hit rates, and supports real-time monitoring with hourly automatic alerts.

Smart Model Switcher V2 (Optimized)

Optimized Smart Model Switcher (v2) - Zero-latency, no restart required. Automatically selects and switches to the best available model for each task from yo...

Model Failover Guard

Automatically monitors model health and switches between primary and fallback models to maintain stability and recover when possible.