ai-session-analysis

Use when the user wants to analyze local AI coding assistant session data (Claude Code, Codex, Kimi Code). Triggers on requests like "analyze my coding sessions", "what tools do I use most", "compare Claude Code vs Codex usage", "give me a summary of what I've been working on", or similar session-analysis inquiries.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install the "ai-session-analysis" skill with this command: npx skills add tanktechnology/ai-session-analysis

AI Session Analysis

Overview

Analyze session data from local AI coding tools. Three bundled scripts extract raw data. You (the agent) read the output and form your own observations — there are no hardcoded rules or thresholds in the scripts.

Data Sources

The scripts read directly from these paths; no copying is needed:

Tool          Path                            Content
Claude Code   ~/.claude/projects/*.jsonl      Full transcripts with tool_use blocks
Claude Code   ~/.claude/history.jsonl         User input history
Claude Code   ~/.claude/sessions/*.json       Session metadata
Codex         ~/.codex/sessions/*/*.jsonl     Structured events
Kimi Code     ~/.kimi/sessions/*/wire.jsonl   Wire protocol with ToolCall entries
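As a rough sketch of what reading these files involves, here is one way to tally tool_use blocks in a Claude Code transcript. The record shape (`message.content` holding a list of typed blocks) is an assumption based on the notes in this document; the bundled scripts handle the real layout.

```python
import json
from collections import Counter
from pathlib import Path

def count_tool_uses(jsonl_path):
    """Tally tool_use block names in one transcript file (assumed record shape)."""
    counts = Counter()
    for line in Path(jsonl_path).read_text().splitlines():
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or corrupt lines
        # Assumed shape: {"message": {"content": [{"type": "tool_use", "name": ...}, ...]}}
        content = record.get("message", {}).get("content", [])
        if isinstance(content, list):
            for block in content:
                if isinstance(block, dict) and block.get("type") == "tool_use":
                    counts[block.get("name", "unknown")] += 1
    return counts
```

Reading line by line and skipping undecodable lines matters here, since session files can be appended to mid-write.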

How to Run

# Text-based analysis (terminal output)
python3 ~/.claude/skills/ai-session-analysis/scripts/analyze.py
python3 ~/.claude/skills/ai-session-analysis/scripts/tool_analysis.py

# HTML report with charts (opens in browser)
python3 ~/.claude/skills/ai-session-analysis/scripts/generate_report.py [output.html]

Zero dependencies, Python 3 stdlib only. All scripts are pure data extractors — they present numbers, distributions, and timelines without any interpretation.

What Each Script Extracts

analyze.py — Overview:

  • Per-tool: session count, message volume, tool distribution, project ranking, daily activity timeline, date range
  • Cross-tool comparison table
  • Recent 7-day activity: daily tool calls, active projects, tool mix, per-session timeline

tool_analysis.py — Deep dive:

  • Shell command categories per tool (git, grep/find, npm/yarn, python, etc.)
  • File operation counts (Read/Write/Edit) and file type distribution
  • Project-level tool usage breakdown
  • Codex: token usage, web search topics, errors
  • Kimi: domain-specific tools beyond the standard set
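Shell command categorization of the kind described above might be sketched as follows. The category map here is hypothetical; the bundled script may group commands differently.

```python
import shlex

# Hypothetical category map keyed on the command's first word
CATEGORIES = {
    "git": "git",
    "grep": "grep/find", "find": "grep/find", "rg": "grep/find",
    "npm": "npm/yarn", "yarn": "npm/yarn",
    "python": "python", "python3": "python",
}

def categorize(command_line):
    """Map a shell command string to a coarse category by its first word."""
    try:
        first = shlex.split(command_line)[0]
    except (ValueError, IndexError):
        return "other"  # empty or unparseable command line
    # Strip any leading path so "/usr/bin/python3" still matches "python3"
    return CATEGORIES.get(first.rsplit("/", 1)[-1], "other")
```

Using `shlex.split` rather than `str.split` handles quoting, so commands like `git commit -m "fix it"` are tokenized correctly.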

generate_report.py — HTML report:

  • Self-contained HTML file with embedded data and Chart.js visualizations
  • Summary cards, daily activity timeline, tool distribution doughnut, project breakdown
  • Shell command comparison across all three tools
  • File operations, token usage, session timeline scatter plot
  • Session duration distribution, hour-of-day activity heatmap
  • Codex deep dive (patches, web searches, exit codes)
  • Kimi domain tools

After Running

Read the output of each script carefully. Then tell the user what you see. Consider:

  • What are they actually doing? (exploring code vs building features vs debugging vs reviewing)
  • Which projects are getting attention, which ones went quiet?
  • Tool use patterns: are they leaning heavily on one tool, and is the tool mix appropriate for their work?
  • Cross-tool: did they try one tool and then switch, and what might that suggest?
  • Recent vs historical: any shifts in behavior, intensity, or focus?
  • Anything that looks like friction: lots of failed commands, retries, errors?

Do not apply fixed thresholds or rules ("if bash > 40% then warn"). Look at the whole picture and use judgment. If something stands out, mention it. If nothing does, say so.

Common Issues

  • Claude Code shows few tools: Tools are in ~/.claude/projects/*.jsonl (assistant → content blocks with "type": "tool_use"), not in history.jsonl.
  • Codex sessions empty: Check ~/.codex/sessions/2026/ subdirectories (organized by year/month/day).
  • Kimi tool names missing: ToolCall entries are in wire.jsonl via payload.function.name.
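Following the Kimi note above, extracting tool names from wire.jsonl might look like this sketch. Only the payload.function.name path is stated in the source; the "type": "ToolCall" field used to filter entries is an assumption.

```python
import json
from pathlib import Path

def kimi_tool_names(wire_path):
    """Collect tool names from ToolCall entries in a Kimi wire.jsonl file."""
    names = []
    for line in Path(wire_path).read_text().splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or corrupt lines
        # Assumed entry shape: {"type": "ToolCall", "payload": {"function": {"name": ...}}}
        if isinstance(entry, dict) and entry.get("type") == "ToolCall":
            name = entry.get("payload", {}).get("function", {}).get("name")
            if name:
                names.append(name)
    return names
```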

