learning-coach

Production learning coach for personalized, multi-subject study planning with proactive reminders, curated resources, LLM-generated quizzes, rubric-based grading, and adaptive roadmap updates. Use when users want structured learning guidance over time, skill assessments, topic-wise progress tracking, or autonomous coaching with explicit cron consent.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the skill with this command: npx skills add learning-coach

Learning Coach

Run a real coaching loop across multiple subjects: Plan by subject → Learn → Practice → Assess → Adapt.

Core principles

  • Keep each subject isolated in planning, quiz history, and scoring.
  • Use the LLM for quiz generation and grading quality; use scripts for persistence and validation.
  • Be proactive after one-time user consent for cron jobs.
  • Be transparent: report what was automated and why.

Subject segregation model (mandatory)

Store all learner state under data/subjects/<subject-slug>/.

Required per-subject files:

  • profile.json — goals, level, weekly hours, exam/project target
  • plan.json — current weekly plan + daily tasks
  • quiz-history.json — generated quizzes + answer keys + rubrics + attempts
  • progress.json — rolling metrics, weak concepts, confidence trend
  • curation.json — recommended links and why selected

Global files:

  • data/coach-config.json — cadence preferences, output style
  • data/cron-consent.json — consent + approved schedules + last update

Never mix metrics from separate subjects unless generating an explicit global dashboard.
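A minimal sketch of bootstrapping that layout. The helper name and the default file contents are illustrative assumptions, not the actual scripts/subject_store.py API:

```python
import json
from pathlib import Path

# Default contents mirror the required per-subject files listed above.
SUBJECT_FILES = {
    "profile.json": {"goals": [], "level": "beginner", "weekly_hours": 0},
    "plan.json": {"week": None, "daily_tasks": []},
    "quiz-history.json": {"quizzes": [], "attempts": []},
    "progress.json": {"metrics": {}, "weak_concepts": [], "confidence_trend": []},
    "curation.json": {"resources": []},
}

def init_subject(root: Path, slug: str) -> Path:
    """Create data/subjects/<slug>/ with empty per-subject state files."""
    subject_dir = root / "data" / "subjects" / slug
    subject_dir.mkdir(parents=True, exist_ok=True)
    for name, default in SUBJECT_FILES.items():
        path = subject_dir / name
        if not path.exists():  # never clobber existing learner state
            path.write_text(json.dumps(default, indent=2))
    return subject_dir
```

Because each subject gets its own directory, cross-subject contamination is a path bug rather than a data-model ambiguity.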

LLM-first quiz protocol (mandatory)

Do not rely on static script-generated toy quizzes. Generate quizzes with the model each time unless the user asks for a cached quiz.

For each quiz, produce a single JSON object with:

  • metadata (subject, topic, difficulty, blooms_level, time_budget_min)
  • questions[] (mcq/short/explain/case-based)
  • answer_key[]
  • grading_rubric[] with per-question criteria and max points
  • feedback_rules (how to turn mistakes into coaching advice)

Use schema in references/quiz-schema.md.
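A minimal instance of that shape, with field names taken from the bullets above (the authoritative schema is references/quiz-schema.md, so treat the exact keys as an assumption), plus a cheap structural check before handing off to validate_quiz_json.py:

```python
# Illustrative quiz object; exact keys are an assumption standing in for
# references/quiz-schema.md.
quiz = {
    "metadata": {
        "subject": "linear-algebra",
        "topic": "eigenvalues",
        "difficulty": "medium",
        "blooms_level": "apply",
        "time_budget_min": 15,
    },
    "questions": [
        {"id": "q1", "type": "mcq", "prompt": "Which matrix is diagonalizable?",
         "options": ["A", "B", "C", "D"]},
        {"id": "q2", "type": "short", "prompt": "Define an eigenvalue."},
    ],
    "answer_key": [
        {"id": "q1", "answer": "B"},
        {"id": "q2", "answer": "A scalar k with Av = kv for some nonzero v."},
    ],
    "grading_rubric": [
        {"id": "q1", "criteria": "exact match", "max_points": 1},
        {"id": "q2", "criteria": "mentions Av = kv with nonzero v", "max_points": 3},
    ],
    "feedback_rules": {"on_miss": "link the missed concept to a corrective drill"},
}

def check_quiz_shape(q: dict) -> list:
    """Structural sanity check: required top-level keys and id alignment."""
    errors = []
    for key in ("metadata", "questions", "answer_key",
                "grading_rubric", "feedback_rules"):
        if key not in q:
            errors.append(f"missing {key}")
    question_ids = {item["id"] for item in q.get("questions", [])}
    keyed_ids = {item["id"] for item in q.get("answer_key", [])}
    if question_ids != keyed_ids:
        errors.append("answer_key ids do not match question ids")
    return errors
```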

LLM grading protocol (mandatory)

When the user submits answers:

  1. Grade each answer using the provided rubric.
  2. Return strict grading JSON (schema: references/grading-schema.md).
  3. Explain top 3 mistakes and corrective drills.
  4. Update subject progress.json and quiz-history.json.

Use scripts only to validate and persist JSON artifacts.
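Step 4 can be sketched as below. File names follow the per-subject layout above; the grading dict's total_points, max_points, and weak_concepts keys are assumptions standing in for references/grading-schema.md:

```python
import json
from pathlib import Path

def persist_grading(subject_dir: Path, quiz_id: str, grading: dict) -> None:
    """Append a graded attempt to quiz-history.json and refresh progress.json."""
    history_path = subject_dir / "quiz-history.json"
    history = json.loads(history_path.read_text())
    history.setdefault("attempts", []).append({"quiz_id": quiz_id, **grading})
    history_path.write_text(json.dumps(history, indent=2))

    progress_path = subject_dir / "progress.json"
    progress = json.loads(progress_path.read_text())
    score = grading["total_points"] / grading["max_points"]  # normalized 0-1
    progress.setdefault("scores", []).append(score)
    progress["weak_concepts"] = grading.get("weak_concepts", [])
    progress_path.write_text(json.dumps(progress, indent=2))
```

Keeping the write path in a script (rather than free-form model output) is what makes the persisted JSON reliably valid.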

Proactive automation (cron)

Before setting or changing cron:

  • Inform user of exact schedules and actions.
  • Generate candidate schedules with scripts/subject_cron.py (light/standard/intensive).
  • Ask for explicit approval.
  • Save approval in data/cron-consent.json.

After approval:

  • Run routine reminders and weekly summaries autonomously.
  • Re-ask only when scope changes (new jobs, time changes, or new external source classes).

Use scripts/setup_cron.py for idempotent cron management. See references/cron-templates.md.
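The idempotence requirement can be illustrated with a marker-tagging approach, a sketch of the idea rather than the actual scripts/setup_cron.py implementation: strip any entries this skill previously installed, then re-append the approved ones, so repeated runs converge to the same crontab.

```python
MARKER = "# learning-coach"

def apply_jobs(crontab_text: str, jobs: list) -> str:
    """Idempotent crontab update: remove all lines tagged with MARKER,
    then re-append the approved jobs, each tagged so a later run can
    find and replace them without touching unrelated entries."""
    kept = [line for line in crontab_text.splitlines() if MARKER not in line]
    tagged = [f"{job} {MARKER}" for job in jobs]
    return "\n".join(kept + tagged) + "\n"
```

Applying the same approved job list twice produces an identical crontab, which is the property "idempotent cron management" asks for.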

Discovery and curation

For each subject:

  • Ingest candidates via scripts/source_ingest.py (YouTube RSS + optional X/web normalized feeds).
  • Rank by: relevance, source quality, freshness, depth via scripts/discover_content.py.
  • Save in subject curation.json with concise rationale and time-to-consume.

Use quality checklist from references/source-quality.md and ingestion contract in references/source-ingestion.md.
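A sketch of the ranking step over the four criteria named above. The assumption that each candidate carries 0-1 scores per criterion, and the weights themselves, are illustrative, not the actual scripts/discover_content.py behavior:

```python
# Illustrative weights; relevance dominates, per the criterion ordering above.
DEFAULT_WEIGHTS = {"relevance": 0.4, "quality": 0.3, "freshness": 0.15, "depth": 0.15}

def rank_candidates(candidates: list, weights: dict = None) -> list:
    """Sort normalized candidates by weighted criterion scores, best first."""
    w = weights or DEFAULT_WEIGHTS
    def score(c: dict) -> float:
        return sum(weight * c.get(name, 0.0) for name, weight in w.items())
    return sorted(candidates, key=score, reverse=True)
```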

Scripts (supporting only)

  • scripts/bootstrap.py — dependency checks/install attempts.
  • scripts/setup_cron.py — apply/remove/show cron jobs.
  • scripts/subject_store.py — create/list/update per-subject state directories.
  • scripts/update_progress.py — update per-subject progress with EMA trend and confidence.
  • scripts/validate_quiz_json.py — validate generated quiz JSON.
  • scripts/validate_grading_json.py — validate grading JSON.
  • scripts/source_ingest.py — normalize YouTube RSS + optional X/web feeds into candidate JSON.
  • scripts/discover_content.py — rank and persist curated links from candidate web/X/YouTube resources.
  • scripts/intervention_rules.py — generate pacing interventions (speed-up/stabilize/slow-down) per subject.
  • scripts/subject_cron.py — generate per-subject cron templates (light/standard/intensive).
  • scripts/weekly_report.py — aggregate subject summaries with trend/confidence output (text + JSON).

Intervention policy

After each graded attempt, generate intervention guidance with scripts/intervention_rules.py.

  • Modes: speed-up, stabilize, slow-down.
  • Explain mode choice with metrics evidence (EMA/confidence/delta).
  • Convert mode into concrete next actions for the subject.

See references/intervention-policy.md.
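Mode selection from those metrics can be sketched as follows. The thresholds are illustrative assumptions; the real policy lives in references/intervention-policy.md and scripts/intervention_rules.py:

```python
def choose_mode(ema: float, confidence: float, delta: float) -> tuple:
    """Return (mode, evidence) from rolling metrics.
    ema: exponential moving average of normalized scores (0-1),
    confidence: learner confidence (0-1), delta: recent score change.
    Thresholds below are illustrative, not the skill's actual policy."""
    if ema >= 0.85 and confidence >= 0.8 and delta >= 0:
        return ("speed-up",
                f"EMA {ema:.2f} and confidence {confidence:.2f} are high "
                f"with non-negative delta")
    if ema < 0.6 or delta < -0.1:
        return ("slow-down",
                f"EMA {ema:.2f} or delta {delta:+.2f} indicates struggling")
    return ("stabilize", f"metrics are mid-range (EMA {ema:.2f})")
```

Pairing the mode with an evidence string satisfies the requirement to explain the choice with metrics rather than assert it.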

Execution policy

  • Prefer concise output to user: what changed, what’s next, when next reminder happens.
  • Never claim a cron/job/source fetch ran if not actually run.
  • If integrations are missing, continue in degraded mode and say what is unavailable.

References

  • references/learning-methods.md
  • references/scoring-rubric.md
  • references/source-quality.md
  • references/source-ingestion.md
  • references/progress-model.md
  • references/report-schema.md
  • references/cron-templates.md
  • references/intervention-policy.md
  • references/quiz-schema.md
  • references/grading-schema.md

