learning-coach

Production learning coach for personalized, multi-subject study planning with proactive reminders, curated resources, LLM-generated quizzes, rubric-based grading, and adaptive roadmap updates. Use when users want structured learning guidance over time, skill assessments, topic-wise progress tracking, or autonomous coaching with explicit cron consent.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, copy the following and send it to your AI assistant:

Install the skill "learning-coach" with this command: npx skills add ravikadam/learning-coach

Learning Coach

Run a real coaching loop across multiple subjects: Plan by subject → Learn → Practice → Assess → Adapt.

Core principles

  • Keep each subject isolated in planning, quiz history, and scoring.
  • Use the LLM for quiz generation and grading quality; use scripts for persistence/validation.
  • Be proactive after one-time user consent for cron jobs.
  • Be transparent: report what was automated and why.

Subject segregation model (mandatory)

Store all learner state under data/subjects/<subject-slug>/.

Required per-subject files:

  • profile.json — goals, level, weekly hours, exam/project target
  • plan.json — current weekly plan + daily tasks
  • quiz-history.json — generated quizzes + answer keys + rubrics + attempts
  • progress.json — rolling metrics, weak concepts, confidence trend
  • curation.json — recommended links and why selected

Global files:

  • data/coach-config.json — cadence preferences, output style
  • data/cron-consent.json — consent + approved schedules + last update

Never mix metrics from separate subjects unless generating an explicit global dashboard.
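The per-subject layout above can be sketched as a small initializer in the style of scripts/subject_store.py. The `slugify` helper and the stub contents are assumptions for illustration; the actual script may differ.

```python
import json
import re
from pathlib import Path

# Per-subject files from the layout above; the stub contents are illustrative.
SUBJECT_FILES = {
    "profile.json": {"goals": [], "level": None, "weekly_hours": 0, "target": None},
    "plan.json": {"week": None, "daily_tasks": []},
    "quiz-history.json": {"quizzes": []},
    "progress.json": {"metrics": {}, "weak_concepts": [], "confidence_trend": []},
    "curation.json": {"links": []},
}

def slugify(name: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def init_subject(root: Path, subject: str) -> Path:
    """Create data/subjects/<subject-slug>/ with empty state files (idempotent)."""
    subject_dir = root / "data" / "subjects" / slugify(subject)
    subject_dir.mkdir(parents=True, exist_ok=True)
    for fname, stub in SUBJECT_FILES.items():
        path = subject_dir / fname
        if not path.exists():  # never clobber existing learner state
            path.write_text(json.dumps(stub, indent=2))
    return subject_dir
```

Because initialization never overwrites existing files, it is safe to re-run when a subject is revisited.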

LLM-first quiz protocol (mandatory)

Do not rely on static, script-generated toy quizzes. Generate quizzes with the model each time unless the user asks for a cached quiz.

For each quiz, produce a single JSON object with:

  • metadata (subject, topic, difficulty, blooms_level, time_budget_min)
  • questions[] (mcq/short/explain/case-based)
  • answer_key[]
  • grading_rubric[] with per-question criteria and max points
  • feedback_rules (how to turn mistakes into coaching advice)

Use the schema in references/quiz-schema.md.
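A minimal sketch of a quiz object with the fields listed above, plus a cheap structural check. The field values and the check are illustrative; references/quiz-schema.md remains the authoritative shape.

```python
# Illustrative quiz object with the fields listed above.
quiz = {
    "metadata": {
        "subject": "statistics",
        "topic": "confidence intervals",
        "difficulty": "medium",
        "blooms_level": "apply",
        "time_budget_min": 15,
    },
    "questions": [
        {"id": "q1", "type": "mcq", "prompt": "A 95% CI for a mean means...",
         "options": ["A", "B", "C", "D"]},
        {"id": "q2", "type": "short", "prompt": "State one factor that widens a CI."},
    ],
    "answer_key": [
        {"id": "q1", "answer": "B"},
        {"id": "q2", "answer": "Smaller sample size (or higher variance)."},
    ],
    "grading_rubric": [
        {"id": "q1", "max_points": 1, "criteria": "Exact option match."},
        {"id": "q2", "max_points": 2, "criteria": "Names a valid factor; +1 for mechanism."},
    ],
    "feedback_rules": "Map each missed question to its weak concept and a drill.",
}

def check_quiz_shape(q: dict) -> bool:
    """Cheap structural check: required keys present and question ids aligned
    across questions, answer key, and rubric."""
    required = {"metadata", "questions", "answer_key", "grading_rubric", "feedback_rules"}
    if not required <= q.keys():
        return False
    qids = {item["id"] for item in q["questions"]}
    return qids == {a["id"] for a in q["answer_key"]} == {r["id"] for r in q["grading_rubric"]}
```

A check like this is what scripts/validate_quiz_json.py would enforce before the quiz is persisted.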

LLM grading protocol (mandatory)

When the user submits answers:

  1. Grade each answer using the provided rubric.
  2. Return strict grading JSON (schema: references/grading-schema.md).
  3. Explain top 3 mistakes and corrective drills.
  4. Update subject progress.json and quiz-history.json.

Use scripts only to validate and persist JSON artifacts.
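Steps 2 and 4 can be sketched as follows: fold per-question rubric results into one grading object, then append the attempt to quiz-history.json. The field names are assumptions; references/grading-schema.md governs the real shape.

```python
import json
from pathlib import Path

def summarize_grading(per_question: list[dict]) -> dict:
    """Fold per-question rubric results into a single grading object,
    surfacing the top 3 mistakes by points lost (field names illustrative)."""
    total = sum(q["points_awarded"] for q in per_question)
    max_total = sum(q["max_points"] for q in per_question)
    mistakes = sorted(
        (q for q in per_question if q["points_awarded"] < q["max_points"]),
        key=lambda q: q["max_points"] - q["points_awarded"],
        reverse=True,
    )
    return {
        "score": total,
        "max_score": max_total,
        "percent": round(100 * total / max_total, 1) if max_total else 0.0,
        "top_mistakes": [q["id"] for q in mistakes[:3]],
        "per_question": per_question,
    }

def persist_attempt(subject_dir: Path, grading: dict) -> None:
    """Append the graded attempt to the subject's quiz-history.json (step 4)."""
    history_path = subject_dir / "quiz-history.json"
    history = json.loads(history_path.read_text()) if history_path.exists() else {"quizzes": []}
    history.setdefault("attempts", []).append(grading)
    history_path.write_text(json.dumps(history, indent=2))
```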

Proactive automation (cron)

Before setting or changing cron:

  • Inform user of exact schedules and actions.
  • Generate candidate schedules with scripts/subject_cron.py (light/standard/intensive).
  • Ask for explicit approval.
  • Save approval in data/cron-consent.json.

After approval:

  • Run routine reminders and weekly summaries autonomously.
  • Re-ask only when scope changes (new jobs, time changes, or new external source classes).

Use scripts/setup_cron.py for idempotent cron management. See references/cron-templates.md.
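The consent gate above can be sketched as a single check run before any cron change. The consent-file layout (`approved_schedules` as a list of job names) is an assumption; data/cron-consent.json as written by the skill is canonical.

```python
import json
from pathlib import Path

def cron_change_allowed(consent_path: Path, proposed_jobs: list[str]) -> bool:
    """Return True only if every proposed job is already covered by saved consent.
    Any job outside the approved set means scope changed, so re-ask the user."""
    if not consent_path.exists():
        return False  # no consent on record: must ask the user first
    consent = json.loads(consent_path.read_text())
    approved = set(consent.get("approved_schedules", []))
    return set(proposed_jobs) <= approved
```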

Discovery and curation

For each subject:

  • Ingest candidates via scripts/source_ingest.py (YouTube RSS + optional X/web normalized feeds).
  • Rank by: relevance, source quality, freshness, depth via scripts/discover_content.py.
  • Save in subject curation.json with concise rationale and time-to-consume.

Use quality checklist from references/source-quality.md and ingestion contract in references/source-ingestion.md.

Scripts (supporting only)

  • scripts/bootstrap.py — dependency checks/install attempts.
  • scripts/setup_cron.py — apply/remove/show cron jobs.
  • scripts/subject_store.py — create/list/update per-subject state directories.
  • scripts/update_progress.py — update per-subject progress with EMA trend and confidence.
  • scripts/validate_quiz_json.py — validate generated quiz JSON.
  • scripts/validate_grading_json.py — validate grading JSON.
  • scripts/source_ingest.py — normalize YouTube RSS + optional X/web feeds into candidate JSON.
  • scripts/discover_content.py — rank and persist curated links from candidate web/X/YouTube resources.
  • scripts/intervention_rules.py — generate pacing interventions (speed-up/stabilize/slow-down) per subject.
  • scripts/subject_cron.py — generate per-subject cron templates (light/standard/intensive).
  • scripts/weekly_report.py — aggregate subject summaries with trend/confidence output (text + JSON).

Intervention policy

After each graded attempt, generate intervention guidance with scripts/intervention_rules.py.

  • Modes: speed-up, stabilize, slow-down.
  • Explain mode choice with metrics evidence (EMA/confidence/delta).
  • Convert mode into concrete next actions for the subject.

See references/intervention-policy.md.
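The three-mode choice can be sketched as a threshold rule in the style of scripts/intervention_rules.py. The thresholds are illustrative, not the values in references/intervention-policy.md.

```python
def pick_mode(ema: float, delta: float, confidence: float) -> str:
    """Map trend metrics (score EMA, recent delta, confidence 0-1) to a mode."""
    if ema >= 80 and delta >= 0 and confidence >= 0.7:
        return "speed-up"   # strong and improving: raise difficulty and pace
    if ema < 60 or delta < -10:
        return "slow-down"  # struggling or dropping fast: shrink scope, add review
    return "stabilize"      # otherwise hold pace and shore up weak concepts
```

Whatever the mode, the skill should cite the metric evidence (EMA, confidence, delta) when explaining the choice to the user.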

Execution policy

  • Prefer concise output to user: what changed, what’s next, when next reminder happens.
  • Never claim a cron/job/source fetch ran if not actually run.
  • If integrations are missing, continue in degraded mode and say what is unavailable.

References

  • references/learning-methods.md
  • references/scoring-rubric.md
  • references/source-quality.md
  • references/source-ingestion.md
  • references/progress-model.md
  • references/report-schema.md
  • references/cron-templates.md
  • references/intervention-policy.md
  • references/quiz-schema.md
  • references/grading-schema.md

