Competitive Analysis
Use When
- The competitor analysis tables already exist.
- You need to poll `competitive_crawl_trigger` on a schedule.
- You need standardized reports grouped by `category`.
- You need to send summaries to a DingTalk robot.
Do not use this skill for:
- one-off ad hoc analysis
- open-ended research without database inputs
- flexible report generation without a fixed template
Required Inputs
- Database connection: `COMPETITIVE_ANALYSIS_DSN`
- OSS endpoint: `OSS_ENDPOINT`
- OSS bucket: `OSS_BUCKET`
- OSS access key id: `OSS_ACCESS_KEY_ID`
- OSS access key secret: `OSS_ACCESS_KEY_SECRET`
- DingTalk webhook: `DINGTALK_WEBHOOK`
- Optional DingTalk signing secret: `DINGTALK_SECRET`
- In OpenClaw, prefer environment injection through `skills.entries.competitive_analysis.env`.
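
A minimal startup check, assuming the script reads these names directly from `os.environ` (the fail-fast behavior and the `load_config` helper are illustrative, not part of the documented contract):

```python
import os
import sys

REQUIRED = [
    "COMPETITIVE_ANALYSIS_DSN",
    "OSS_ENDPOINT",
    "OSS_BUCKET",
    "OSS_ACCESS_KEY_ID",
    "OSS_ACCESS_KEY_SECRET",
    "DINGTALK_WEBHOOK",
]
OPTIONAL = ["DINGTALK_SECRET"]

def load_config() -> dict:
    """Collect required settings, failing fast if any are missing."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        sys.exit(f"missing required environment variables: {', '.join(missing)}")
    cfg = {name: os.environ[name] for name in REQUIRED}
    cfg.update({name: os.environ.get(name) for name in OPTIONAL})
    return cfg
```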
Goal
- Find unconsumed trigger rows where `status='success'` (see the polling sketch after this list).
- Load the last 6 months of product, review, and QA data.
- Aggregate results by `category`.
- Produce `analysis_context.json` for the host to continue narrative generation.
- Generate a Markdown/HTML skeleton that follows the reference PDF structure.
- Send a DingTalk summary.
- Mark trigger rows as consumed after success.
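
A polling sketch under stated assumptions: PostgreSQL behind `COMPETITIVE_ANALYSIS_DSN`, SQLAlchemy 1.4+, and a `consumed_at` column as the consumption field. The real column names live in `references/data-contract.md`; only `competitive_crawl_trigger` and `status='success'` come from this document.

```python
from sqlalchemy import create_engine, text

def fetch_unconsumed_triggers(dsn: str, limit: int = 20) -> list[dict]:
    """Return successful trigger rows that have not been consumed yet."""
    engine = create_engine(dsn)
    with engine.connect() as conn:
        rows = conn.execute(
            text(
                "SELECT id, category FROM competitive_crawl_trigger "
                "WHERE status = 'success' AND consumed_at IS NULL "
                "ORDER BY id LIMIT :limit"
            ),
            {"limit": limit},
        )
        return [dict(m) for m in rows.mappings()]

def mark_consumed(dsn: str, trigger_ids: list[int]) -> None:
    """Mark trigger rows as consumed only after the report succeeded."""
    if not trigger_ids:
        return
    engine = create_engine(dsn)
    with engine.begin() as conn:  # transaction commits on success
        conn.execute(
            text(
                "UPDATE competitive_crawl_trigger "
                "SET consumed_at = NOW() WHERE id = ANY(:ids)"
            ),
            {"ids": trigger_ids},
        )
```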
Entry Points
Primary command:
`python3 {baseDir}/scripts/run_report.py`

Common arguments: `--category CATEGORY`, `--since-months 6`, `--limit 20`
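
For example, an external scheduler might invoke one category at a time (the category value below is illustrative):

```bash
python3 {baseDir}/scripts/run_report.py --category smart_vacuum --since-months 6 --limit 20
```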
Files
- SKILL.md: skill entry instructions
- references/report-outline.md: report structure contract
- references/data-contract.md: data contract and field expectations
- references/openclaw-setup.md: OpenClaw setup example
- scripts/run_report.py: main CLI
- scripts/render_report.py: Markdown/HTML rendering
- scripts/send_dingtalk.py: DingTalk delivery
- analysis_context.json: structured analysis context for the host runtime
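
For reference, a signed-webhook flow of the kind `scripts/send_dingtalk.py` presumably implements, as a minimal sketch (the function body is an assumption; the signing scheme itself is DingTalk's standard custom-robot HMAC-SHA256 signature):

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request

def send_dingtalk(webhook: str, content: str, secret: str | None = None) -> None:
    """Post a plain-text summary to a DingTalk custom robot."""
    url = webhook
    if secret:
        # Standard DingTalk robot signing: HMAC-SHA256 over "timestamp\nsecret",
        # base64-encoded and appended to the webhook URL.
        timestamp = str(round(time.time() * 1000))
        string_to_sign = f"{timestamp}\n{secret}".encode("utf-8")
        digest = hmac.new(secret.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
        sign = urllib.parse.quote_plus(base64.b64encode(digest))
        url = f"{webhook}&timestamp={timestamp}&sign={sign}"
    payload = json.dumps({"msgtype": "text", "text": {"content": content}}).encode("utf-8")
    req = urllib.request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()
```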
Rules
- Follow the reference PDF for section order.
- If fields are missing, keep the section and mark values as `未采集` ("not collected") or `待补充` ("to be filled in"); see the sketch after this list.
- Keep the CLI stateless and let an external scheduler trigger it.
- Do not call any external LLM API from the script.
- Let the host runtime generate deeper narrative content from `analysis_context.json` and `references/report-outline.md`.
- In OpenClaw, prefer host-managed environment injection over `.env`.
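
A minimal sketch of that missing-field rule, assuming `render_report.py` routes values through a small helper (the helper name is hypothetical):

```python
def display_value(value, placeholder: str = "未采集") -> str:
    """Render a field for the report skeleton, keeping the section even when data is missing."""
    return str(value) if value not in (None, "") else placeholder
```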
Minimal Workflow
- Read `references/data-contract.md`.
- Confirm that the trigger table already includes the consumption fields.
- Configure `skills.entries.competitive_analysis.env` as shown in `references/openclaw-setup.md`.
- Start a new OpenClaw session so the skill reloads.
- Run `python3 {baseDir}/scripts/run_report.py` or invoke it from an external scheduler.
- Read the generated `analysis_context.json` (an illustrative shape is sketched after this list).
- Let the host runtime generate the final narrative based on `references/report-outline.md`.
- Validate the final output against the report outline.
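
Every field name below is a hypothetical illustration of what `analysis_context.json` might contain; the real schema is whatever `run_report.py` emits:

```python
import json

# Purely illustrative shape; the real fields are defined by run_report.py.
context = {
    "generated_at": "2024-01-01T00:00:00Z",
    "since_months": 6,
    "categories": [
        {
            "category": "smart_vacuum",  # hypothetical category value
            "product_count": 12,
            "review_count": 3400,
            "qa_count": 520,
            "top_products": [],          # aggregated rows go here
        }
    ],
}

with open("analysis_context.json", "w", encoding="utf-8") as f:
    json.dump(context, f, ensure_ascii=False, indent=2)
```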
Success Criteria
- New successful trigger rows are detected.
- Reports are generated per `category`.
- Section structure matches the reference report.
- DingTalk receives the summary message.
- Trigger rows are marked as consumed.