generate-daily-sports-update

Runs the sports science crawler to generate a daily report, syncs it to Notion, and deduplicates content across runs.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "generate-daily-sports-update" with this command: npx skills add w2478328197-arch/generate-daily-sports-update

Generate Daily Sports Update

This skill runs the daily_sports_update.py script to fetch the latest sports science research and wearable tech news. It automatically handles deduplication, so you can run it frequently without worrying about seeing the same content twice.

Prerequisites

  • Python 3: Requires python3 to be installed.
  • Dependencies: The required Python packages defined in requirements.txt must be installed (pip3 install -r requirements.txt).
  • Environment Variables:
    • NOTION_TOKEN: The integration token for Notion API.
    • NOTION_PAGE_ID: The ID of the Notion page to sync the daily update to.
  • Tools Needed: You must have access to the run_command tool to execute the script in a bash terminal.
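
Before running the script, it can help to verify the prerequisites above. A minimal pre-flight sketch (the variable names mirror the list above; this helper is not part of the skill itself):

```python
import os
import shutil

# Environment variables required by the skill, per the Prerequisites list.
REQUIRED_ENV = ("NOTION_TOKEN", "NOTION_PAGE_ID")

def preflight() -> list[str]:
    """Return a list of human-readable problems; an empty list means ready to run."""
    problems = [f"missing environment variable: {name}"
                for name in REQUIRED_ENV if not os.environ.get(name)]
    if shutil.which("python3") is None:
        problems.append("python3 not found on PATH")
    return problems

if __name__ == "__main__":
    issues = preflight()
    print("ready" if not issues else "\n".join(issues))
```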

Instructions

  1. Locate the Script: The daily_sports_update.py script is in the user's sports-science-daily directory. Make sure you are in that directory before running the command below.

  2. Run the update: Use the run_command tool to execute the python script.

    python3 daily_sports_update.py --days 2
    
    • --days N: (Optional) Number of days to look back (default: 7). If the script has not run in a while, increase this (e.g., --days 7 or --days 30).
    • --no-history: (Optional) Use this ONLY to force re-fetching of already-seen items (e.g., for debugging).
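
    The two flags above can be sketched with a minimal argparse parser. This is an illustration of the documented behavior, not the script's actual code; the real parser may define additional options.

    ```python
    import argparse

    def build_parser() -> argparse.ArgumentParser:
        """Sketch of the CLI described above; the real script's parser may differ."""
        parser = argparse.ArgumentParser(description="Daily sports science update")
        parser.add_argument("--days", type=int, default=7,
                            help="number of days to look back (default: 7)")
        parser.add_argument("--no-history", action="store_true",
                            help="ignore processed_history.json and re-fetch everything")
        return parser

    args = build_parser().parse_args(["--days", "2"])
    print(args.days, args.no_history)  # 2 False
    ```

    Note that `--no-history` is exposed as `args.no_history` (argparse converts the hyphen to an underscore).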
  3. Output & Sync:

    • The script will generate a local Markdown file named: YYYY-MM-DD_运动科学日报.md ("Sports Science Daily")
    • It will automatically sync the compiled blocks directly to the specified Notion page using the Notion API.
    • It updates processed_history.json locally to mark fetched URLs/PMIDs as seen.
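
    The Notion sync in this step can be sketched with the standard "append block children" endpoint. This is an assumption about how the script talks to Notion; the real script's mapping from Markdown to Notion blocks is likely richer than plain paragraphs.

    ```python
    import json
    import os
    import urllib.request

    NOTION_VERSION = "2022-06-28"  # a published Notion-Version header value

    def lines_to_blocks(lines: list[str]) -> dict:
        """Rough sketch: turn plain text lines into Notion paragraph blocks."""
        return {
            "children": [
                {
                    "object": "block",
                    "type": "paragraph",
                    "paragraph": {"rich_text": [{"type": "text", "text": {"content": line}}]},
                }
                for line in lines
            ]
        }

    def append_to_page(lines: list[str]) -> None:
        """Append the given lines to the page named by NOTION_PAGE_ID."""
        req = urllib.request.Request(
            f"https://api.notion.com/v1/blocks/{os.environ['NOTION_PAGE_ID']}/children",
            data=json.dumps(lines_to_blocks(lines)).encode(),
            headers={
                "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
                "Notion-Version": NOTION_VERSION,
                "Content-Type": "application/json",
            },
            method="PATCH",  # Notion's append-block-children call uses PATCH
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()
    ```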
  4. Handling "No New Content":

    • If the script's terminal output contains "🎉 没有发现新内容" ("No new content found"), all items found in the lookback period have already been processed and synced. Try running again with a larger --days value.
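
The deduplication behind steps 3–4 can be sketched as follows. This assumes processed_history.json stores a flat JSON list of seen URLs/PMIDs; the real script's schema may differ.

```python
import json
from pathlib import Path

HISTORY_FILE = Path("processed_history.json")  # filename taken from step 3 above

def load_seen() -> set[str]:
    """Return the set of URLs/PMIDs already processed (empty on first run)."""
    if HISTORY_FILE.exists():
        return set(json.loads(HISTORY_FILE.read_text(encoding="utf-8")))
    return set()

def filter_and_record(item_ids: list[str]) -> list[str]:
    """Keep only unseen items, then persist the updated history."""
    seen = load_seen()
    fresh = [i for i in item_ids if i not in seen]
    HISTORY_FILE.write_text(json.dumps(sorted(seen | set(item_ids))), encoding="utf-8")
    return fresh
```

When `filter_and_record` returns an empty list, every item in the lookback window was already seen, which is exactly the "no new content" case described above.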

Security & Privacy Note

  • External Endpoints Called:
    • https://eutils.ncbi.nlm.nih.gov: Accessed to fetch PubMed paper statistics and abstracts.
    • https://api.notion.com: Accessed to create and populate the daily reports.
    • Various RSS feed URLs (e.g., Garmin, MySportScience, YouTube RSS).
  • Files Touched: Reads and updates processed_history.json and writes .md report files in the working directory.
  • This skill makes web requests to fetch sports science data but does not expose any user PII.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
