Total Skills: 21
Skills published by aman-bhandari with real stars/downloads and source-aware metadata.
Total Stars: 0
Total Downloads: 0
Comparison chart based on real stars and downloads signals from source data:

adr-capture: 0
adversarial-review: 0
architect-first: 0
blog-drafting: 0
brainstorming-knowledge-builder: 0
code-reviewing: 0
session-start: 0
debug-systems: 0
ALWAYS activate when the student makes a high-level design choice -- library selection, database choice, mathematical approach, architecture pattern, or trade-off decision. Captures an Architectural Decision Record proving leadership-level thinking. Invoke with /adr.
Activate before accepting any implementation or architecture. Shift from teacher to Chief Systems Architect. Try to break the student's design with production failure scenarios. Record stress-tests in session logs as Leadership Stress Tests.
Forces a planning phase before any code is written. Triggers on /architect or at the start of BUILD sessions. Requires the student to articulate inputs, outputs, failure modes, and data flow before implementation begins. Inspired by Copilot Plan Mode and production engineering practices.
Generates blog post outlines from exercises and learnings for portfolio building. Triggers on /blog, blog idea, or after significant exercises. Produces structure and key points only -- the student writes the prose. Maintains a running ideas backlog in blog-drafts/ideas.md. Prioritizes topics that demonstrate hireable AI engineering skills.
Structured ideation and LLM knowledge base building using Karpathy's LLM Wiki architecture with Obsidian integration. Triggers on /brainstorm, BRAINSTORM: prefix, or when thinking through a problem, building knowledge, or ingesting sources. Three modes -- ideation (think through problems), discuss (create wiki from conversation), and knowledge (build persistent wiki from raw sources).
Deep coach-style code review of code the student wrote. Triggers on /review or REVIEW prefix. Checks understanding first (why decisions were made), then correctness, then all standards (type hints, error handling, no print, meaningful names, docstrings, edge cases), then production thinking. Never rewrites code -- points to problems, the student fixes them.
Starts every session with context sync, time check, and collaborative planning. Triggers on /checkin, start session, or session start. Reads PROGRESS.md and ROADMAP.md, knows exactly where we left off, asks how much time is available, then plans the session together. No rigid format -- adapts to 15 minutes or 5 hours.
Use when the student encounters an error, says something is broken, or asks for debugging help. Enforces a three-layer debugging protocol: reproduce, trace the mental model, check OS state, identify production failure mode. Invoke with /debug.
Activate at the end of every Topic completion. Generates a System Evolution Log comparing code complexity, identifying meta-patterns mastered, documenting intentional agent stress-tests, and evaluating the student's growth trajectory. Invoke with /evolution.
Runs a full 5-round FAANG AI Engineer interview simulation. Triggers on /faang-loop or when the student declares FAANG-readiness. Rounds -- coding, ML theory, ML system design, research depth, behavioral. Ruthless scoring. 8/10 average = ready. Below = specific gaps flagged and re-run scheduled.
Generates daily short-form social posts AND long-form technical blog drafts. Triggers on /post (short daily) or /longform (topic-milestone technical post). Short-form solves social anxiety by removing friction. Long-form is a learning technology -- writing is understanding, not marketing. Never preachy, never cringe.
Simulates technical assessments calibrated to real AI engineer interview patterns from 100+ analyzed take-home assignments and hiring processes. Triggers on /assess, test me, or mock interview. Three modes -- quick (15 min), milestone (45 min), and interview (4-round simulation). Scoring is ruthless. 8/10 means hireable.
Use when the student wants to see what the OS does when Python code runs. Wraps dis, sys.settrace, tracemalloc, strace, and /proc inspection. Invoke with /os-lens.
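The tools this skill wraps are all standard Python introspection machinery. As a minimal sketch of the kind of inspection it performs (the `build_squares` function here is a made-up stand-in for the student's code, not part of the skill):

```python
import dis
import tracemalloc


def build_squares(n: int) -> list[int]:
    """Toy function to inspect; stands in for the student's code."""
    return [i * i for i in range(n)]


# Bytecode layer: what the interpreter actually executes for this function.
dis.dis(build_squares)

# Memory layer: how much allocation the call really performs.
tracemalloc.start()
result = build_squares(10_000)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current={current} bytes, peak={peak} bytes")
```

`strace` and `/proc` inspection operate outside the interpreter (Linux only), so they are invoked as external processes rather than imported modules.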
Core Partner behavior for the AI Engineer Lab. Automatically applies to every session. Defines session modes, partnership philosophy, brainstorm integration, and interaction patterns. Writes exercise scaffolds, never solution code. Always asks why. Connects everything to production. Stays aware of DONE/NOW/NEXT at all times.
Updates PROGRESS.md with session entry and captures exactly where we stopped. Triggers on /progress, update progress, or at end of session before commit. Tracks what was done, what was learned, where we are in the flow, and what comes next. The repo never lies.
Systems-aware code review that extends the standard code-review skill. Checks resource management, memory footprint, GIL implications, fd hygiene, and failure modes at scale. Invoke with /review-systems.
Tracks concepts the student has learned and schedules recall questions using FSRS-inspired spaced repetition. Triggers on /review-deck, quiz me, or at session start when concepts are due. Maintains a JSON deck in reflections/spaced-review-deck.json. If you cannot explain it from memory, you do not know it.
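The deck's actual JSON schema is not documented here, but an FSRS-inspired scheduler can be sketched roughly like this -- grow the interval on successful recall, reset it on failure (the field names and constants below are assumptions for illustration, not the skill's real format):

```python
import json
from datetime import date, timedelta

# Hypothetical deck entry; the real spaced-review-deck.json layout may differ.
deck = [
    {"concept": "GIL", "interval_days": 1, "ease": 2.5, "due": "2025-01-06"},
]


def review(card: dict, recalled: bool, today: date) -> dict:
    """Rough FSRS-flavored update: widen the interval on success, reset on failure."""
    if recalled:
        card["interval_days"] = max(1, round(card["interval_days"] * card["ease"]))
        card["ease"] = min(3.0, card["ease"] + 0.1)
    else:
        card["interval_days"] = 1
        card["ease"] = max(1.3, card["ease"] - 0.2)
    card["due"] = (today + timedelta(days=card["interval_days"])).isoformat()
    return card


card = review(deck[0], recalled=True, today=date(2025, 1, 6))
print(json.dumps(card, indent=2))
```

At session start, the skill would compare each card's `due` date against today and surface the concepts that are due for recall.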
Audits and upgrades the learning system itself. Evaluates coaching effectiveness, curriculum alignment with the market, skill quality, and architecture health and currency. Triggers on /system-check, during Friday reflections, or when the system suspects underperformance. Every check produces at least one concrete improvement. This is how the system evolves.
Use when introducing a new Python/ML/AI concept, explaining how code works, or when the student asks 'what happens when...'. Generates a three-layer explanation: Mental Model (runtime trace), OS/Hardware Model (kernel/memory), Production Model (what breaks at scale).
Actively scans the AI engineering landscape using WebSearch for trending repos, blogs, job postings, new tools, and framework releases. Triggers on /trends, during weekly reflections, or when checking industry relevance. Surfaces findings relevant to current topic and flags curriculum adjustments.
Friday depth test from memory. Triggers on /reflect, weekly reflection, or friday reflection. The student writes answers from memory with NO code lookup allowed. Gaps in writing equal gaps in understanding. Connects to spaced review deck, trend scout, and system check for a complete weekly audit.