# Repository Analyzer Skill

This skill provides a systematic approach to analyzing any code repository through multi-source investigation and critical evaluation.
## Purpose

Enable Claude to perform thorough repository analysis by:

- Exploring codebase structure and organization using Explore agents
- Identifying technology stack and dependencies
- Researching best practices via web search
- Fetching authoritative documentation via Context7
- Evaluating configuration quality and completeness
- Assessing documentation and maintainability
- Providing actionable recommendations
## When to Use This Skill

Activate this skill when users request:

- "Analyze this repository"
- "Audit the codebase"
- "Report on this repository's structure"
- "Investigate the current repository"
- "What technologies does this repo use?"
- "Review the repository configuration"
- "Assess the documentation quality"
## Analysis Framework

### Phase 1: Repository Discovery & Scoping

**Identify Repository Context:**

**Location & Scope**

```bash
# Determine working directory
pwd

# Check if it's a git repository
git rev-parse --show-toplevel

# Get repository metadata
git remote -v
git log --oneline -n 10
```
**Initial Structure Assessment**

- Use the Explore agent with "medium" thoroughness
- Identify root-level files and directories
- Locate key configuration files
- Determine primary language(s)
**Prompt for Explore Agent (Phase 1):**

```
Explore the repository at [path] with medium thoroughness.
Find and report:
- Directory structure (major folders and their purposes)
- Configuration files (package.json, Cargo.toml, pyproject.toml, etc.)
- Primary programming languages (by file extensions)
- Entry points (main files, executables)
- Documentation files (README, docs/, etc.)
- Build/deployment files (Dockerfile, Makefile, CI configs)
```
### Phase 2: Technology Stack Analysis

**Identify All Technologies:**

**Programming Languages**

- Analyze file extensions
- Check language-specific config files
- Determine primary vs. secondary languages
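A quick way to approximate the language mix is to count tracked files by extension. A minimal sketch — it counts files rather than lines of code, so treat the output as an indicator, not a measurement:

```shell
# Rough language breakdown by file extension (tracked files only;
# counts files rather than lines, so treat it as an indicator)
git ls-files | grep -oE '\.[A-Za-z0-9]+$' | sort | uniq -c | sort -rn | head
```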
**Package Managers & Dependencies**

- Node.js: `package.json`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- Python: `requirements.txt`, `pyproject.toml`, `Pipfile`, `poetry.lock`
- Rust: `Cargo.toml`, `Cargo.lock`
- Ruby: `Gemfile`, `Gemfile.lock`
- Go: `go.mod`, `go.sum`
- PHP: `composer.json`, `composer.lock`
- Other: `Makefile`, `CMakeLists.txt`, etc.
**Frameworks & Libraries**

- Read package manager files
- Identify major dependencies
- Categorize: frontend, backend, testing, tooling
**Development Tools**

- Version managers: `.nvmrc`, `.ruby-version`, `.tool-versions`
- Formatters: `.prettierrc`, `rustfmt.toml`
- Linters: `.eslintrc`, `.pylintrc`, `.rubocop.yml`
- Type checkers: `tsconfig.json`, `mypy.ini`
**Infrastructure & Deployment**

- Docker: `Dockerfile`, `docker-compose.yml`
- CI/CD: `.github/workflows/`, `.gitlab-ci.yml`, `.circleci/`
- Cloud: `terraform/`, `cloudformation/`, `k8s/`
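Before launching the Explore agent, a single pass over tracked files can surface most of these manifests at once. A sketch — the file list in the pattern is illustrative, not exhaustive:

```shell
# One-pass scan for common manifest/infrastructure files among tracked files
# (extend the alternation with any other manifests relevant to the repo)
git ls-files | grep -E '(^|/)(package\.json|Cargo\.toml|pyproject\.toml|requirements\.txt|go\.mod|Gemfile|composer\.json|Dockerfile|Makefile|docker-compose\.ya?ml)$|^\.github/workflows/'
```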
**Prompt for Explore Agent (Phase 2):**

```
Explore the repository for technology stack identification with very thorough mode.
Search for and analyze:
- All package manager files (package.json, Cargo.toml, requirements.txt, etc.)
- Configuration files for development tools
- Docker and containerization files
- CI/CD pipeline configurations
- Infrastructure as code files
- Build system files
For each found file, provide:
- File path
- Type/purpose
- Key dependencies or configurations
```
### Phase 3: Context7 Documentation Research

**Research Identified Technologies:**

For each major technology/library identified in Phase 2:

**Resolve Library IDs**

Use Context7's `resolve-library-id` for:

- Major frameworks (e.g., "next.js", "react", "vue")
- Key libraries (e.g., "axios", "lodash", "sqlalchemy")
- Development tools (e.g., "vitest", "eslint", "prettier")
**Fetch Documentation**

Use Context7's `get-library-docs` with topics such as:

- "configuration best practices"
- "project structure"
- "common patterns"
- "performance optimization"
**Compare Against Best Practices**

- How does the repository's usage align with official recommendations?
- Are any deprecated patterns in use?
- Are there missed optimization opportunities?
### Phase 4: Web Search for Best Practices

**Research Industry Standards:**

**General Repository Practices**

- Web search: "[primary language] project structure best practices 2025"
- Web search: "[framework] application architecture patterns"
- Web search: "dotfiles management best practices" (if applicable)

**Technology-Specific Patterns**

- Web search: "[technology] configuration optimization"
- Web search: "[framework] performance best practices"
- Web search: "[tool] common mistakes to avoid"

**Security & Maintenance**

- Web search: "[language] security best practices"
- Web search: "dependency management [package manager]"
- Web search: "CI/CD pipeline optimization"
### Phase 5: Configuration Deep Dive

**Analyze Key Configuration Files:**

**Read Critical Configs**

Use the Read tool for important configuration files and check for:

- Version pinning vs. version ranges
- Security configurations
- Performance settings
- Environment-specific configurations
- Secret management
**Evaluate Configuration Quality**

- Completeness: Are all necessary configs present?
- Consistency: Do configs align across the project?
- Documentation: Are configs well-commented?
- Security: Are secrets properly handled?
- Maintainability: Are configs modular and DRY?
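For the secret-handling check, a heuristic grep catches the most obvious hard-coded credentials. This is a sketch only — the keyword list is a starting point, the false-positive rate is high, and every hit needs manual review:

```shell
# Heuristic scan for hard-coded credentials in tracked text files.
# Expect false positives: every hit needs manual review.
git grep -nI -E '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' \
  || echo "no obvious hard-coded credentials found"
```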
**Prompt for Explore Agent (Phase 5):**

```
Explore configuration files in depth with very thorough mode.
Focus on:
- Environment configuration (.env templates, config files)
- Secret management patterns
- Build configuration optimization
- Development vs production configs
- Version locking strategies
```
### Phase 6: Documentation Assessment

**Evaluate Documentation Quality:**

**Existence Check**

- README.md at root
- CONTRIBUTING.md
- LICENSE
- CHANGELOG.md
- docs/ directory
- API documentation
- Code comments
- Type annotations/JSDoc
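The file-level part of this check is mechanical. A minimal sketch, run from the repository root — note it only checks the exact names listed, so variants like `README.rst` or `LICENSE.txt` need to be added:

```shell
# Presence check for standard documentation files (run from the repo root;
# extend the list for naming variants such as README.rst or LICENSE.txt)
for f in README.md CONTRIBUTING.md LICENSE CHANGELOG.md; do
  if [ -e "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
done
```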
**Quality Assessment**

README.md:

- Clear project description?
- Installation instructions?
- Usage examples?
- Configuration guide?
- Contribution guidelines?
- License information?

Code documentation:

- Function/method documentation?
- Complex logic explained?
- Type signatures/annotations?
- Examples provided?
**Documentation Gaps**

- Identify missing documentation
- Note outdated information
- Flag confusing sections
### Phase 7: Code Quality & Patterns

**Analyze Code Organization:**

**Directory Structure**

- Is the structure logical and consistent?
- Are concerns properly separated?
- Is there a clear separation of layers?
- Are naming conventions followed?

**Code Patterns**

- Identify architectural patterns
- Check for anti-patterns
- Assess modularity
- Evaluate reusability

**Testing Strategy**

- Test file location and organization
- Test coverage indicators
- Testing frameworks used
- CI test automation
**Prompt for Explore Agent (Phase 7):**

```
Explore code organization and patterns with very thorough mode.
Analyze:
- Directory structure and naming conventions
- Test files and testing strategy
- Shared/common code organization
- Configuration management patterns
- Error handling approaches
- Logging and monitoring setup
```
### Phase 8: Dependencies & Security

**Audit Dependencies:**

**Dependency Health**

- Count total dependencies
- Identify outdated packages
- Check for security vulnerabilities
- Assess dependency tree depth
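Where the ecosystem's own tooling is available, prefer it (`npm outdated` / `npm audit`, `pip list --outdated`, `cargo audit`). When it isn't, even a crude count straight from the manifest is useful. A sketch for `package.json` that assumes npm's usual one-dependency-per-line formatting:

```shell
# Crude production-dependency count from package.json (no jq required;
# assumes npm's one-dependency-per-line formatting, so only a rough count)
count=$(sed -n '/"dependencies"/,/}/p' package.json | grep -c '":')
echo "production dependencies: $((count - 1))"
```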
**Version Management**

- Are versions locked properly?
- Are any version constraints overly restrictive?
- Are any version ranges too loose?

**License Compliance**

- What licenses do dependencies use?
- Are there any licensing conflicts?
- Are attribution requirements met?
### Phase 9: CI/CD & DevOps

**Evaluate Automation:**

**CI/CD Pipeline**

- What tests run automatically?
- What is the level of deployment automation?
- Are there code quality checks?
- Is there security scanning?

**Development Workflow**

- Pre-commit hooks?
- Code review process?
- Branch protection?
- Release process?

**Infrastructure as Code**

- Environment reproducibility?
- Configuration management?
- Deployment consistency?
### Phase 10: Synthesis & Reporting

**Generate Comprehensive Report:**

**Report Structure**

```markdown
# Repository Analysis Report: [Repository Name]

## Executive Summary

- Repository: [Name/Path]
- Primary Language: [Language]
- Main Framework/Purpose: [Description]
- Overall Health Score: [X/10]

Key Findings:

- [Finding 1]
- [Finding 2]
- [Finding 3]

Critical Issues: [Count]
Recommendations: [Count]

## 1. Repository Overview

### Basic Information

- Location: [Path]
- Git Remote: [URL if available]
- Last Updated: [Date]
- Total Files: [Count]
- Total Lines: [Estimate]

### Purpose & Scope

[Description of what this repository does]

## 2. Technology Stack

### Programming Languages

| Language | Percentage | Files | Purpose |
|---|---|---|---|
| [Lang] | [X%] | [N] | [Desc] |

### Major Dependencies

Production:

- [Dependency 1] ([version]) - [Purpose]
- [Dependency 2] ([version]) - [Purpose]

Development:

- [Dependency 1] ([version]) - [Purpose]
- [Dependency 2] ([version]) - [Purpose]

### Frameworks & Tools

- Frontend: [Frameworks]
- Backend: [Frameworks]
- Testing: [Frameworks]
- Build Tools: [Tools]
- CI/CD: [Tools]

## 3. Repository Structure

### Directory Organization

repository/
├── [dir1]/   # [Purpose]
├── [dir2]/   # [Purpose]
└── [dir3]/   # [Purpose]

### Key Files

- [file1] - [Purpose]
- [file2] - [Purpose]

Structure Assessment: [Evaluation]

## 4. Configuration Analysis

### Configuration Files Found

- [Config file 1] - [Quality: Good/Fair/Poor]
- [Config file 2] - [Quality: Good/Fair/Poor]

### Configuration Quality

Strengths:

- [Strength 1]
- [Strength 2]

Weaknesses:

- [Weakness 1]
- [Weakness 2]

### Secret Management

[Assessment of how secrets are handled]

## 5. Documentation Quality

### Existing Documentation

- README.md - [Quality score]
- CONTRIBUTING.md - [Missing/Present]
- Code comments - [Quality score]

### Documentation Assessment

- Coverage: [X/10]
- Quality: [X/10]
- Maintainability: [X/10]

Gaps Identified:

- [Gap 1]
- [Gap 2]

## 6. Code Quality & Patterns

### Architectural Patterns

[Identified patterns and their appropriateness]

### Code Organization

Rating: [X/10]

Strengths:

- [Strength 1]

Concerns:

- [Concern 1]

### Testing Strategy

- Test Coverage: [Estimated %]
- Test Frameworks: [Frameworks]
- CI Integration: [Yes/No]

## 7. Dependencies & Security

### Dependency Health

- Total Dependencies: [Count]
- Outdated: [Count]
- Security Vulnerabilities: [Count]

### Dependency Management

[Assessment of version locking, update strategy]

## 8. DevOps & Automation

### CI/CD Pipeline

[Description of automation setup]

Automated Checks:

- Tests
- Linting
- Security scanning
- Build verification

### Development Workflow

[Description of git workflow, hooks, etc.]

## 9. Best Practices Comparison

### Alignment with Industry Standards

| Practice | Current | Recommended | Gap |
|---|---|---|---|
| [Practice 1] | [Status] | [Standard] | [Description] |
| [Practice 2] | [Status] | [Standard] | [Description] |

### Context7 Insights

[Findings from official documentation comparison]

### Web Research Findings

[Insights from industry best practices research]

## 10. Recommendations

### Critical (Fix Immediately)

- [Issue]
  - Impact: [High/Medium/Low]
  - Effort: [High/Medium/Low]
  - Action: [Specific steps]

### Important (Fix Soon)

- [Issue]
  - Impact: [High/Medium/Low]
  - Effort: [High/Medium/Low]
  - Action: [Specific steps]

### Nice to Have (Consider)

- [Issue]
  - Impact: [High/Medium/Low]
  - Effort: [High/Medium/Low]
  - Action: [Specific steps]

## 11. Quick Wins

[List of easy improvements with high impact]

- [Quick win 1] - [Estimated time: Xm]
- [Quick win 2] - [Estimated time: Xm]

## 12. Technical Debt Assessment

Overall Technical Debt: [High/Medium/Low]

Areas of Concern:

- [Area 1]
- [Area 2]

Suggested Refactoring:

- [Refactoring 1]
- [Refactoring 2]

## 13. Maintainability Score

| Aspect | Score | Notes |
|---|---|---|
| Code Organization | [X/10] | [Notes] |
| Documentation | [X/10] | [Notes] |
| Testing | [X/10] | [Notes] |
| Configuration | [X/10] | [Notes] |
| Dependencies | [X/10] | [Notes] |
| Overall | [X/10] | [Notes] |

## 14. Research Sources

### Context7 Documentation Consulted

- [Library 1] - [Topic]
- [Library 2] - [Topic]

### Web Research Conducted

- [Search query 1] - [Key finding]
- [Search query 2] - [Key finding]

### Explore Agent Investigations

- [Investigation 1] - [Finding]
- [Investigation 2] - [Finding]

## 15. Next Steps

### Immediate Actions

- [Action 1]
- [Action 2]

### Short-term (1-2 weeks)

- [Action 1]
- [Action 2]

### Long-term (1-3 months)

- [Action 1]
- [Action 2]

## Appendix

### Full Technology List

[Complete list of all technologies found]

### All Configuration Files

[Complete list with brief descriptions]

### Dependency Tree

[If relevant and not too large]

---

Report Generated: [Date]
Analysis Duration: [Estimated time]
Claude Skills Used: repository-analyzer, deep-research (if applicable)
Tools Used: Explore agent, Context7, WebSearch, Read, Grep, Glob
```
## Execution Strategy

### Parallel vs. Sequential

Run in parallel (when possible):

- Multiple Explore agent investigations (different aspects)
- Web searches for different technologies
- Context7 lookups for multiple libraries

Run sequentially (when needed):

- Discovery first (need to know the structure)
- Technology identification second (need to know what to research)
- Deep dives third (need context from earlier phases)
- Synthesis last (need all information)
### Time Management

Estimated duration:

- Small repository (< 100 files): 10-15 minutes
- Medium repository (100-1,000 files): 15-30 minutes
- Large repository (> 1,000 files): 30-60 minutes

Optimization:

- Use the Explore agent efficiently (pick the right thoroughness level)
- Don't read every file - sample strategically
- Focus on high-impact findings
- Prioritize actionable insights
### Thoroughness Levels

Quick Audit (~10 min):

- Phases 1-3 only
- Basic structure, tech stack, major issues

Standard Analysis (~30 min):

- Phases 1-6
- Complete overview with recommendations

Deep Audit (~60 min):

- All phases
- Comprehensive analysis with detailed recommendations

Custom Focus: Ask the user which aspects to prioritize if time is limited.
## Special Repository Types

### Dotfiles Repository (e.g., chezmoi)

Focus on:

- File organization and chezmoi-specific patterns
- Template usage and variable management
- Secret management (age encryption, 1Password integration)
- Cross-platform compatibility
- Backup and restore strategies
- Setup documentation

Key files:

- `.chezmoi.toml.tmpl`, `chezmoi.toml`
- Template files (`.tmpl` extension)
- Encrypted files (`.age` extension)
- Scripts and hooks
- README and setup guides
### Monorepo

Focus on:

- Package/workspace organization
- Shared dependency management
- Build orchestration
- Independent deployments
- Code sharing patterns

### Library/Package

Focus on:

- API design and documentation
- Versioning strategy
- Handling of breaking changes
- Examples and usage guides
- Publishing workflow

### Web Application

Focus on:

- Frontend/backend separation
- State management
- Routing structure
- API design
- Performance optimization
- Security headers and practices
## Quality Checklist

Before finalizing the report, verify:

- All major directories explored
- All configuration files identified
- Technology stack fully documented
- Context7 consulted for major technologies
- Web research conducted for best practices
- Specific, actionable recommendations provided
- Evidence cited for all claims
- Quick wins identified
- Critical issues highlighted
- Report is well-structured and readable
## Usage Examples

### Example 1: Analyze Current Repository

User: "Analyze this repository"

Claude:

1. Runs `pwd` to get the location
2. Launches the Explore agent (Phase 1)
3. Identifies it as a chezmoi dotfiles repo
4. Launches Phase 2 exploration for the tech stack
5. Uses Context7 for chezmoi documentation
6. Web searches for dotfiles best practices
7. Analyzes configuration quality
8. Generates a comprehensive report

### Example 2: Focus on Security

User: "Audit the security configuration of this repository"

Claude:

1. Focuses on Phase 5 (Configuration) and Phase 8 (Security)
2. Searches for secret management patterns
3. Checks for exposed credentials
4. Evaluates dependency vulnerabilities
5. Reviews CI/CD security
6. Provides a security-focused report

### Example 3: Technology Stack Report

User: "What technologies does this repository use?"

Claude:

1. Runs Phase 2 (Technology Stack Analysis)
2. Uses the Explore agent to find all config files
3. Parses package manager files
4. Uses Context7 to get documentation for major libraries
5. Provides a structured technology report
## Tips for Effective Analysis

**Start Broad, Then Go Deep**

- Get the big picture first
- Then drill into specifics

**Let the Explore Agent Do the Work**

- Don't manually grep every file
- Use the appropriate thoroughness level
- Multiple focused explorations beat one massive exploration

**Context7 for Authority**

- Official documentation is the most reliable source
- Use it for major frameworks/libraries
- Compare the repo's usage against official recommendations

**Web Search for Trends**

- Industry best practices
- Common pitfalls
- Recent developments (2025 standards)

**Be Actionable**

- Every finding should have a recommendation
- Prioritize by impact and effort
- Provide specific steps, not vague advice

**Be Honest**

- Acknowledge what you don't know
- State confidence levels
- Suggest areas for human expert review
## Notes

- **Scope Management**: For very large repositories, ask the user to specify focus areas
- **Permissions**: Some operations may require user approval (file reads, web searches)
- **Time Awareness**: Inform the user of the estimated duration upfront
- **Iteration**: Offer to deep-dive into specific areas after the initial report
- **Export**: Offer to save the report as a markdown file in the repository