competitive_analysis

Poll competitive crawl triggers, aggregate the last 6 months of product, review, and QA data by category, produce structured analysis context and a report skeleton, upload outputs to OSS, then send a DingTalk summary. Use for database-driven scheduled competitor analysis in OpenClaw.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "competitive_analysis" with this command: npx skills add wandervine/bbt-competitive-analysis

Competitive Analysis

Use When

  • The competitor analysis tables already exist.
  • You need to poll competitive_crawl_trigger on a schedule.
  • You need standardized reports grouped by category.
  • You need to send summaries to a DingTalk robot.

Do not use this skill for:

  • one-off ad hoc analysis
  • open-ended research without database inputs
  • flexible report generation without a fixed template

Required Inputs

  • Database connection: COMPETITIVE_ANALYSIS_DSN
  • OSS endpoint: OSS_ENDPOINT
  • OSS bucket: OSS_BUCKET
  • OSS access key id: OSS_ACCESS_KEY_ID
  • OSS access key secret: OSS_ACCESS_KEY_SECRET
  • DingTalk webhook: DINGTALK_WEBHOOK
  • Optional DingTalk signing secret: DINGTALK_SECRET
  • In OpenClaw, prefer environment injection through skills.entries.competitive_analysis.env
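Before any run, the required inputs above can be validated up front. A minimal sketch, assuming the script reads these variables from the process environment (the `check_env` helper itself is hypothetical, but the variable names match the list above):

```python
import os

REQUIRED_VARS = [
    "COMPETITIVE_ANALYSIS_DSN",
    "OSS_ENDPOINT",
    "OSS_BUCKET",
    "OSS_ACCESS_KEY_ID",
    "OSS_ACCESS_KEY_SECRET",
    "DINGTALK_WEBHOOK",
]
OPTIONAL_VARS = ["DINGTALK_SECRET"]  # signing is optional per the list above

def check_env(env=None):
    """Return the required variables that are missing or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: an empty environment is missing every required variable.
missing = check_env({})
```

Failing fast on missing variables keeps the CLI stateless and makes scheduler misconfiguration obvious in the first log line.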

Goal

  1. Find unconsumed trigger rows where status='success'.
  2. Load the last 6 months of product, review, and QA data.
  3. Aggregate results by category.
  4. Produce analysis_context.json for the host to continue narrative generation.
  5. Generate a Markdown/HTML skeleton that follows the reference PDF structure.
  6. Send a DingTalk summary.
  7. Mark trigger rows as consumed after success.

Entry Points

Primary command:

python3 {baseDir}/scripts/run_report.py

Common arguments:

  • --category CATEGORY
  • --since-months 6
  • --limit 20
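The three arguments above map naturally onto a small `argparse` interface; a sketch of what run_report.py's parser might look like (defaults mirror the documented values, but the help strings are assumptions):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        prog="run_report.py",
        description="Generate competitor analysis context and report skeleton.",
    )
    parser.add_argument("--category", metavar="CATEGORY", default=None,
                        help="restrict the run to a single category")
    parser.add_argument("--since-months", type=int, default=6,
                        help="size of the data window in months")
    parser.add_argument("--limit", type=int, default=20,
                        help="maximum trigger rows to consume per run")
    return parser

# Parsing an explicit argv list keeps the sketch independent of sys.argv.
args = build_parser().parse_args(["--category", "earbuds"])
```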

Files

  • SKILL.md: skill entry instructions
  • references/report-outline.md: report structure contract
  • references/data-contract.md: data contract and field expectations
  • references/openclaw-setup.md: OpenClaw setup example
  • scripts/run_report.py: main CLI
  • scripts/render_report.py: Markdown/HTML rendering
  • scripts/send_dingtalk.py: DingTalk delivery
  • analysis_context.json: structured analysis context for the host runtime
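When DINGTALK_SECRET is set, scripts/send_dingtalk.py presumably signs the webhook URL using DingTalk's documented robot security scheme: HMAC-SHA256 of `"<timestamp>\n<secret>"` keyed with the secret, base64-encoded, then URL-encoded and appended as query parameters. A self-contained sketch of that signing step (the function name and example values are illustrative):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def sign_webhook(webhook, secret, timestamp_ms=None):
    """Append DingTalk's timestamp+sign query parameters to a robot webhook."""
    ts = int(time.time() * 1000) if timestamp_ms is None else timestamp_ms
    string_to_sign = f"{ts}\n{secret}"
    digest = hmac.new(secret.encode("utf-8"), string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    sign = urllib.parse.quote_plus(base64.b64encode(digest).decode("ascii"))
    sep = "&" if "?" in webhook else "?"
    return f"{webhook}{sep}timestamp={ts}&sign={sign}"

url = sign_webhook(
    "https://oapi.dingtalk.com/robot/send?access_token=XXX",
    "SECexample", 1700000000000,
)
```

Without a secret, the script would post to the raw webhook URL; the payload itself is the usual `{"msgtype": "text", "text": {"content": ...}}` JSON body.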

Rules

  • Follow the reference PDF for section order.
  • If fields are missing, keep the section and mark values with the literal placeholders 未采集 ("not collected") or 待补充 ("to be supplemented").
  • Keep the CLI stateless and let an external scheduler trigger it.
  • Do not call any external LLM API from the script.
  • Let the host runtime generate deeper narrative content from analysis_context.json and references/report-outline.md.
  • In OpenClaw, prefer host-managed environment injection over .env.
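The missing-field rule keeps every section of the skeleton present even when the data is incomplete. A minimal sketch of that substitution (the `field` helper is hypothetical; only the 未采集 placeholder comes from the rules above):

```python
NOT_COLLECTED = "未采集"  # literal placeholder required by the report contract

def field(row, key):
    """Render a field value, substituting the placeholder when it is absent."""
    value = row.get(key)
    return str(value) if value not in (None, "") else NOT_COLLECTED

product = {"title": "Wireless Earbuds X", "price": None}
```

Rendering every section unconditionally keeps the skeleton's structure identical across categories, which is what makes it comparable against the reference PDF.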

Minimal Workflow

  1. Read references/data-contract.md.
  2. Confirm that the trigger table already includes the consumption fields.
  3. Configure skills.entries.competitive_analysis.env as shown in references/openclaw-setup.md.
  4. Start a new OpenClaw session so the skill reloads.
  5. Run python3 {baseDir}/scripts/run_report.py or invoke it from an external scheduler.
  6. Read the generated analysis_context.json.
  7. Let the host runtime generate the final narrative based on references/report-outline.md.
  8. Validate the final output against the report outline.
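Step 6's analysis_context.json is the hand-off point between the stateless CLI and the host runtime. A sketch of how such a context might be assembled, grouping rows by category; the top-level keys (`window_months`, `categories`) and the per-category counters are assumptions, since the real shape is defined by references/data-contract.md:

```python
import json
from collections import defaultdict

def build_context(rows, window_months=6):
    """Group raw rows by category into a structured context for the host."""
    by_category = defaultdict(lambda: {"products": 0, "reviews": 0, "qa": 0})
    for row in rows:
        by_category[row["category"]][row["kind"]] += 1
    return {"window_months": window_months, "categories": dict(by_category)}

rows = [
    {"category": "earbuds", "kind": "products"},
    {"category": "earbuds", "kind": "reviews"},
    {"category": "chargers", "kind": "qa"},
]
context = build_context(rows)
# ensure_ascii=False preserves any Chinese placeholder strings verbatim.
serialized = json.dumps(context, ensure_ascii=False, indent=2)
```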

Success Criteria

  • New successful trigger rows are detected.
  • Reports are generated per category.
  • Section structure matches the reference report.
  • DingTalk receives the summary message.
  • Trigger rows are marked as consumed.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
