Performance Profiling & Bottleneck Detection
I'll profile your application to identify performance bottlenecks and provide optimization recommendations.
Capabilities:
- Node.js profiling (--inspect, clinic.js)
- Browser performance (Chrome DevTools)
- Python profiling (cProfile, line_profiler)
- Bottleneck identification
- Memory leak detection
- Optimization recommendations
Arguments: $ARGUMENTS - optional: node|python|browser or specific file/route to profile
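The argument handling above can be sketched as follows. This is a hypothetical helper — `parse_profile_args` and the `PROFILE_RUNTIME`/`PROFILE_TARGET` names are illustrative, not part of the skill's contract:

```shell
# Hypothetical helper: split the arguments into a runtime hint and an
# optional file/route target. Names here are illustrative only.
parse_profile_args() {
  PROFILE_RUNTIME=""
  PROFILE_TARGET=""
  for arg in "$@"; do
    case "$arg" in
      node|nodejs) PROFILE_RUNTIME="nodejs" ;;
      python)      PROFILE_RUNTIME="python" ;;
      browser)     PROFILE_RUNTIME="browser" ;;
      *)           PROFILE_TARGET="$arg" ;;  # anything else: file or route
    esac
  done
}

parse_profile_args node src/server.js
echo "runtime=${PROFILE_RUNTIME:-auto} target=${PROFILE_TARGET:-all}"
```

Anything that is not a recognized runtime keyword falls through to the target slot, so both `node src/server.js` and a bare route like `/checkout` work.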
Token Optimization
This skill uses efficient patterns to minimize token consumption during performance profiling setup and analysis.
Optimization Strategies
1. Runtime Detection Caching (Saves 700 tokens per invocation)
Cache runtime environment and tool availability to avoid repeated detection:
CACHE_FILE=".claude/cache/performance-profile/runtime.json"
CACHE_TTL=86400 # 24 hours (tools don't change frequently)
mkdir -p .claude/cache/performance-profile
if [ -f "$CACHE_FILE" ]; then
CACHE_AGE=$(($(date +%s) - $(stat -c %Y "$CACHE_FILE" 2>/dev/null || stat -f %m "$CACHE_FILE" 2>/dev/null)))
if [ $CACHE_AGE -lt $CACHE_TTL ]; then
# Use cached runtime info
RUNTIME=$(jq -r '.runtime' "$CACHE_FILE")
NODE_VERSION=$(jq -r '.node_version' "$CACHE_FILE")
PYTHON_VERSION=$(jq -r '.python_version' "$CACHE_FILE")
TOOLS_INSTALLED=$(jq -r '.tools_installed' "$CACHE_FILE")
echo "Using cached runtime: $RUNTIME"
echo "Tools available: $TOOLS_INSTALLED"
SKIP_DETECTION="true"
fi
fi
# First run: detect and cache
if [ "$SKIP_DETECTION" != "true" ]; then
detect_runtime # Expensive operation
detect_tools # Expensive operation
# Cache results
jq -n \
--arg runtime "$RUNTIME" \
--arg node "$NODE_VERSION" \
--arg python "$PYTHON_VERSION" \
--arg tools "$TOOLS_INSTALLED" \
'{runtime: $runtime, node_version: $node, python_version: $python, tools_installed: $tools}' \
> "$CACHE_FILE"
fi
Savings: 700 tokens (no repeated package.json reads, no command checks, no file system scans)
2. Early Exit for Configured Environments (Saves 85%)
If profiling already set up, provide quick summary and exit:
if [ -f ".claude/profiling/profile-node.sh" ] && ls .claude/profiling/profile-*.md >/dev/null 2>&1; then
if [ "$RECONFIGURE" != "true" ]; then
echo "Profiling already configured"
echo ""
echo "Available profiling scripts:"
ls .claude/profiling/profile-*.sh 2>/dev/null
echo ""
echo "Last report: $(ls -t .claude/profiling/profile-*.md | head -1)"
echo ""
echo "Use --reconfigure to regenerate setup"
exit 0
fi
fi
Savings: 85% reduction when setup exists (4,000 → 600 tokens)
3. Script Generation on Demand (Saves 70%)
Only generate profiling scripts for the detected runtime:
# Default: Only create scripts for detected runtime
case "$RUNTIME" in
nodejs)
create_node_profiling_script # 800 tokens
# Skip Python and browser scripts
;;
python)
create_python_profiling_script # 700 tokens
# Skip Node and browser scripts
;;
browser)
create_browser_profiling_guide # 600 tokens
# Skip Node and Python scripts
;;
esac
# Only create all scripts if explicitly requested
if [ "$GENERATE_ALL" = "true" ]; then
create_node_profiling_script
create_python_profiling_script
create_browser_profiling_guide
fi
Savings: 70% (generate 1 script vs 3 scripts)
4. Sample-Based Anti-Pattern Detection (Saves 80%)
Show only first 10 issues, not exhaustive list:
# Efficient anti-pattern detection with head limit
echo "Scanning for performance anti-patterns..."
# Show only first 10 sync blocking operations
SYNC_BLOCKING=$(grep -r "readFileSync\|writeFileSync\|execSync" \
--include="*.js" --include="*.ts" \
--exclude-dir=node_modules --exclude-dir=dist \
. 2>/dev/null | head -10)
if [ -n "$SYNC_BLOCKING" ]; then
  echo "⚠️ Found synchronous blocking operations (first 10 at most):"
  echo "$SYNC_BLOCKING" | while read -r line; do
    echo " - $line"
  done
fi
Savings: 80% (show samples vs exhaustive 500+ line output)
5. Grep-Based Pattern Detection (Saves 90%)
Use Grep for anti-pattern detection instead of reading all files:
# Efficient: Grep returns only matching lines
detect_performance_issues() {
local issue_count=0
# Check for console.log (only in src directories)
if grep -q -r "console\.log" --include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx" src/ 2>/dev/null; then
issue_count=$((issue_count + 1))
echo "⚠️ Issue $issue_count: console.log statements in source code"
fi
# Check for sync operations
if grep -q -r "readFileSync\|writeFileSync" --include="*.js" --include="*.ts" src/ 2>/dev/null; then
issue_count=$((issue_count + 1))
echo "⚠️ Issue $issue_count: Synchronous file operations detected"
fi
# Show count, not full lists (efficient summary)
echo ""
echo "Total issues detected: $issue_count"
}
Savings: 90% vs reading all files (Grep returns boolean or count)
6. Bash-Based Tool Execution (Saves 70%)
All profiling operations use pure bash (no Task agent, no Read/Edit tools):
# All operations in bash
case "$PROFILE_TYPE" in
node)
# Direct bash execution
clinic doctor -- node index.js
;;
python)
# Direct bash execution
python -m cProfile -o profile.stats main.py
;;
browser)
# Direct bash execution
lighthouse http://localhost:3000 --output html
;;
esac
Savings: 70% vs using Task agents for profiling operations
7. Minimal Report Generation (Saves 60%)
Generate summary report by default, full report on demand:
REPORT_LEVEL="${REPORT_LEVEL:-summary}"
case "$REPORT_LEVEL" in
summary)
# Minimal report (400 tokens)
cat > "$REPORT" << EOF
# Performance Profile Summary
**Runtime:** $RUNTIME
**Tools:** $TOOLS_INSTALLED
## Quick Start
Run profiling:
$PROFILE_DIR/profile-$RUNTIME.sh
For the full report, re-run with --full
EOF
;;
full)
# Comprehensive report (2,000 tokens)
generate_full_performance_report
;;
esac
Savings: 60% for default runs (summary 400 tokens vs full 2,000 tokens)
8. Tool Installation Guidance Only (Saves 95%)
Don't install tools automatically, provide installation commands:
check_tool_availability() {
local tool="$1"
if ! command -v "$tool" >/dev/null 2>&1; then
echo "⚠️ $tool not installed"
echo " Install: $(get_install_command $tool)"
TOOLS_MISSING="true"
else
echo "✓ $tool available"
fi
}
# No actual npm install, just guidance
if [ "$TOOLS_MISSING" = "true" ]; then
echo ""
echo "Install missing tools to proceed with profiling"
exit 0
fi
Savings: 95% (no npm install execution which can take 1,000+ tokens)
Cache Invalidation
Caches are invalidated when:
- Runtime environment changes (different package.json, new tool installs)
- 24 hours elapsed (time-based for tool availability)
- User runs the --clear-cache flag
- Major version update detected (Node.js 18 → 20)
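The invalidation rules above can be sketched in bash. This is a hedged sketch: the `cache_is_stale` helper name is illustrative, while the cache path, TTL, and version-change rule match the ones used elsewhere in this skill:

```shell
# Sketch: return 0 (stale) when any invalidation rule fires.
CACHE_FILE=".claude/cache/performance-profile/runtime.json"

cache_is_stale() {
  [ -f "$CACHE_FILE" ] || return 0          # no cache yet
  [ "$CLEAR_CACHE" = "true" ] && return 0   # --clear-cache flag
  local now mtime age
  now=$(date +%s)
  mtime=$(stat -c %Y "$CACHE_FILE" 2>/dev/null || stat -f %m "$CACHE_FILE" 2>/dev/null)
  age=$((now - mtime))
  [ "$age" -ge 86400 ] && return 0          # 24-hour TTL elapsed
  # Major version change (e.g. Node.js 18 -> 20)
  if command -v jq >/dev/null 2>&1 && command -v node >/dev/null 2>&1; then
    local cached_major current_major
    cached_major=$(jq -r '.node_version // ""' "$CACHE_FILE" | tr -d 'v' | cut -d. -f1)
    current_major=$(node --version | tr -d 'v' | cut -d. -f1)
    [ -n "$cached_major" ] && [ "$cached_major" != "$current_major" ] && return 0
  fi
  return 1                                  # cache still valid
}

if cache_is_stale; then
  echo "Cache invalid - re-detecting runtime"
fi
```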
Real-World Token Usage
Typical profiling workflow:
- First-time setup: 1,500-2,500 tokens
  - Runtime detection: 400 tokens
  - Tool checking: 300 tokens
  - Script generation (1 runtime): 800 tokens
  - Summary report: 400 tokens
- Cached environment: 400-800 tokens
  - Cached runtime: 100 tokens (85% savings)
  - Skip tool checks: 0 tokens
  - Skip script regeneration: 0 tokens
  - Quick summary: 300 tokens
- Profiling execution: 200-600 tokens
  - Execute profiling script: 200 tokens (bash operation)
  - Parse results summary: 400 tokens (with --results flag)
- Anti-pattern scan: 300-800 tokens
  - Grep-based detection: 200 tokens
  - Sample-based reporting: 100-600 tokens (show 5-10 examples)
- Full report generation: 1,500-2,000 tokens
  - Only when explicitly requested with --full flag
Average usage distribution:
- 70% of runs: Cached environment + quick summary (400-800 tokens) ✅ Most common
- 20% of runs: First-time setup (1,500-2,500 tokens)
- 10% of runs: Full analysis + report (2,000-3,000 tokens)
Expected token range: 400-2,500 tokens (60% reduction from 1,000-6,000 baseline)
Progressive Disclosure
Three levels of detail:

1. Default (summary): Quick setup and guidance
   claude "/performance-profile"
   Shows: runtime detected, tools available, profiling commands (400-800 tokens)
2. Verbose (detailed): Anti-pattern scan + profiling scripts
   claude "/performance-profile --verbose"
   Shows: full anti-pattern detection, all profiling options (1,200-1,800 tokens)
3. Full (exhaustive): Complete analysis + all runtime scripts
   claude "/performance-profile --full"
   Shows: detailed report, all runtimes, complete recommendations (2,000-3,000 tokens)
Implementation Notes
Key patterns applied:
- ✅ Runtime detection caching (700 token savings)
- ✅ Early exit for configured environments (85% reduction)
- ✅ Script generation on demand (70% savings)
- ✅ Sample-based anti-pattern detection (80% savings)
- ✅ Grep-based pattern detection (90% savings)
- ✅ Bash-based tool execution (70% savings)
- ✅ Minimal report generation (60% savings)
- ✅ Tool guidance only, no auto-install (95% savings)
Cache locations:
- .claude/cache/performance-profile/runtime.json - Runtime environment and tools
- .claude/cache/performance-profile/anti-patterns.txt - Recent scan results (1 hour TTL)
Flags:
- --reconfigure - Force regeneration of profiling scripts
- --verbose - Show anti-pattern scan + detailed options
- --full - Generate comprehensive report for all runtimes
- --all-runtimes - Create scripts for Node.js + Python + Browser
- --clear-cache - Force cache invalidation
Environment-specific optimizations:
- Node.js: Focus on async patterns, readFileSync detection, package.json analysis
- Python: Focus on list comprehensions, database queries, cProfile setup
- Browser: Focus on bundle size, console.log removal, image optimization
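These per-runtime checks can be dispatched from one entry point. A minimal sketch, where the echo bodies stand in for the real scans and `run_runtime_checks` is an illustrative name:

```shell
# Sketch: dispatch the runtime-specific scans listed above.
# Comma-separated runtime lists (e.g. "nodejs,browser") are supported.
run_runtime_checks() {
  local IFS=','
  for rt in $1; do
    case "$rt" in
      nodejs)  echo "nodejs: scan sync fs calls and async patterns, review package.json" ;;
      python)  echo "python: scan list comprehensions and ORM queries, set up cProfile" ;;
      browser) echo "browser: check bundle size, console.log usage, large images" ;;
    esac
  done
}

run_runtime_checks "nodejs,browser"
```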
Extended Thinking for Performance Analysis
<think>
Performance profiling requires understanding:
- CPU-bound vs I/O-bound operations
- Synchronous vs asynchronous bottlenecks
- Memory allocation patterns
- Database query performance
- Network latency impacts
- Bundle size and loading performance

Complex scenarios:
- Multi-threaded applications
- Distributed systems
- Microservice latency
- Real-time applications
- Large dataset processing
- Memory leaks over time
</think>
Phase 1: Environment Detection
I'll detect the runtime environment and available profiling tools:
#!/bin/bash
# Performance Profiling - Environment Detection
echo "=== Performance Profiling Setup ==="
echo ""
# Create profiling directory
mkdir -p .claude/profiling
PROFILE_DIR=".claude/profiling"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
REPORT="$PROFILE_DIR/profile-$TIMESTAMP.md"
detect_runtime() {
  RUNTIME=""
  # Check for Node.js
  if [ -f "package.json" ]; then
    if command -v node >/dev/null 2>&1; then
      RUNTIME="nodejs"
      NODE_VERSION=$(node --version)
      echo "✓ Node.js detected: $NODE_VERSION"
    fi
  fi
  # Check for Python
  if [ -f "requirements.txt" ] || [ -f "setup.py" ] || [ -f "pyproject.toml" ]; then
    if command -v python >/dev/null 2>&1; then
      RUNTIME="${RUNTIME:+$RUNTIME,}python"
      PYTHON_VERSION=$(python --version 2>&1)
      echo "✓ Python detected: $PYTHON_VERSION"
    elif command -v python3 >/dev/null 2>&1; then
      RUNTIME="${RUNTIME:+$RUNTIME,}python"
      PYTHON_VERSION=$(python3 --version 2>&1)
      echo "✓ Python detected: $PYTHON_VERSION"
    fi
  fi
  # Check for browser-based app
  if [ -f "package.json" ]; then
    if grep -q "\"react\"\|\"vue\"\|\"angular\"\|\"svelte\"" package.json; then
      RUNTIME="${RUNTIME:+$RUNTIME,}browser"
      echo "✓ Frontend framework detected - browser profiling available"
    fi
  fi
}
# detect_runtime sets RUNTIME/NODE_VERSION/PYTHON_VERSION as globals;
# capturing its output with $( ) would run it in a subshell and discard them
detect_runtime
if [ -z "$RUNTIME" ]; then
echo "❌ Unable to detect runtime environment"
echo ""
echo "Supported environments:"
echo " - Node.js applications"
echo " - Python applications"
echo " - Browser-based applications (React, Vue, Angular, Svelte)"
exit 1
fi
echo ""
echo "Environment: $RUNTIME"
Phase 2: Install Profiling Tools
I'll check for the necessary profiling tools and print installation guidance (no automatic installs):
echo ""
echo "=== Checking Profiling Tools ==="
install_profiling_tools() {
case "$RUNTIME" in
*nodejs*)
echo "Setting up Node.js profiling tools..."
# Check for clinic.js
if ! npm list -g clinic >/dev/null 2>&1; then
echo "clinic.js not installed (performance profiling suite). Install with:"
echo " npm install -g clinic"
echo ""
echo "Clinic.js tools:"
echo " - clinic doctor - Diagnose performance issues"
echo " - clinic flame - CPU flame graphs"
echo " - clinic bubbleprof - Async operations"
echo " - clinic heapprofiler - Memory profiling"
else
echo "✓ clinic.js already installed"
fi
# Check for autocannon (benchmarking)
if ! npm list -g autocannon >/dev/null 2>&1; then
echo ""
echo "autocannon not installed (HTTP benchmarking). Install with:"
echo " npm install -g autocannon"
else
echo "✓ autocannon already installed"
fi
echo ""
echo "Node.js profiling methods available:"
echo " 1. Built-in --inspect flag"
echo " 2. clinic.js suite"
echo " 3. Chrome DevTools"
;;
*python*)
echo "Setting up Python profiling tools..."
# Check for profiling packages
if ! python -c "import cProfile" 2>/dev/null; then
echo "cProfile not available (should be in stdlib)"
else
echo "✓ cProfile available (built-in)"
fi
# Check for line_profiler
if ! python -c "import line_profiler" 2>/dev/null; then
echo ""
echo "Install line_profiler for line-by-line profiling:"
echo " pip install line_profiler"
else
echo "✓ line_profiler available"
fi
# Check for memory_profiler
if ! python -c "import memory_profiler" 2>/dev/null; then
echo ""
echo "Install memory_profiler for memory analysis:"
echo " pip install memory_profiler"
else
echo "✓ memory_profiler available"
fi
echo ""
echo "Python profiling methods available:"
echo " 1. cProfile (CPU profiling)"
echo " 2. line_profiler (line-by-line)"
echo " 3. memory_profiler (memory usage)"
;;
*browser*)
echo "Browser profiling setup..."
echo ""
echo "Browser profiling tools:"
echo " 1. Chrome DevTools Performance tab"
echo " 2. Lighthouse CI"
echo " 3. webpack-bundle-analyzer"
echo " 4. React DevTools Profiler"
# Check for bundle analyzer
if [ -f "package.json" ]; then
if ! grep -q "webpack-bundle-analyzer" package.json; then
echo ""
echo "Install webpack-bundle-analyzer:"
echo " npm install --save-dev webpack-bundle-analyzer"
fi
fi
;;
esac
}
install_profiling_tools
Phase 3: Node.js Profiling
For Node.js applications, I'll set up and run profiling:
if [[ "$RUNTIME" == *"nodejs"* ]]; then
echo ""
echo "=== Node.js Performance Profiling ==="
# Create profiling scripts
cat > "$PROFILE_DIR/profile-node.sh" << 'NODEPROFILE'
#!/bin/bash
# Node.js Performance Profiling
echo "Node.js Profiling Options:"
echo ""
echo "1. CPU Profiling (clinic doctor)"
echo "2. Flame Graph (clinic flame)"
echo "3. Async Operations (clinic bubbleprof)"
echo "4. Memory Profiling (clinic heapprofiler)"
echo "5. Built-in V8 Profiler (--prof)"
echo "6. HTTP Load Testing (autocannon)"
echo ""
# Option 1: Clinic Doctor (comprehensive diagnosis)
clinic_doctor() {
echo "Running clinic doctor..."
echo "This will start your app and collect performance data"
echo ""
# Find entry point
ENTRY=$(node -p "require('./package.json').main || 'index.js'")
clinic doctor -- node "$ENTRY"
echo ""
echo "✓ Profile complete! Opening report in browser..."
}
# Option 2: Flame Graph
clinic_flame() {
echo "Running clinic flame..."
echo "Generating CPU flame graph"
echo ""
ENTRY=$(node -p "require('./package.json').main || 'index.js'")
clinic flame -- node "$ENTRY"
echo ""
echo "✓ Flame graph generated!"
}
# Option 3: Async Operations
clinic_bubble() {
echo "Running clinic bubbleprof..."
echo "Analyzing async operations"
echo ""
ENTRY=$(node -p "require('./package.json').main || 'index.js'")
clinic bubbleprof -- node "$ENTRY"
echo ""
echo "✓ Async analysis complete!"
}
# Option 4: V8 Profiler
v8_profiler() {
echo "Running V8 profiler..."
echo ""
ENTRY=$(node -p "require('./package.json').main || 'index.js'")
node --prof "$ENTRY"
# Process the profile
PROFILE=$(ls isolate-*.log | head -1)
if [ -n "$PROFILE" ]; then
node --prof-process "$PROFILE" > processed-profile.txt
echo "✓ Profile processed: processed-profile.txt"
fi
}
# Option 5: HTTP Load Test
http_load_test() {
echo "HTTP Load Testing with autocannon"
echo ""
read -p "Enter URL to test (e.g., http://localhost:3000): " URL
read -p "Duration in seconds (default: 10): " DURATION
DURATION=${DURATION:-10}
echo ""
echo "Running load test..."
autocannon -d $DURATION "$URL"
}
# Main menu
case "${1:-1}" in
1) clinic_doctor ;;
2) clinic_flame ;;
3) clinic_bubble ;;
4) clinic heapprofiler -- node "$(node -p "require('./package.json').main || 'index.js'")" ;;
5) v8_profiler ;;
6) http_load_test ;;
*) echo "Invalid option" ;;
esac
NODEPROFILE
chmod +x "$PROFILE_DIR/profile-node.sh"
echo "✓ Created Node.js profiling script: $PROFILE_DIR/profile-node.sh"
# Identify potential bottlenecks in code
echo ""
echo "Scanning for potential performance issues..."
# Check for synchronous blocking operations
SYNC_BLOCKING=$(grep -r "readFileSync\|writeFileSync\|execSync" \
--include="*.js" --include="*.ts" \
--exclude-dir=node_modules \
--exclude-dir=dist \
. 2>/dev/null | wc -l)
if [ "$SYNC_BLOCKING" -gt 0 ]; then
echo "⚠️ Found $SYNC_BLOCKING synchronous blocking operations"
echo " Consider using async versions (readFile, writeFile, exec)"
fi
# Count async function declarations (await usage must be reviewed manually)
MISSING_AWAIT=$(grep -r "^\s*async.*function" \
--include="*.js" --include="*.ts" \
--exclude-dir=node_modules \
. 2>/dev/null | wc -l)
echo "💡 Async functions found: $MISSING_AWAIT"
echo " Verify all async calls use await or proper promise handling"
# Check for large dependencies
echo ""
echo "Analyzing bundle size..."
if [ -f "package.json" ]; then
DEPS_COUNT=$(cat package.json | jq '.dependencies | length' 2>/dev/null || echo "N/A")
echo " Dependencies: $DEPS_COUNT"
# Suggest bundle analyzer
echo ""
echo "To analyze bundle size:"
echo " npm install --save-dev webpack-bundle-analyzer"
echo " # Add to webpack config and run build"
fi
fi
Phase 4: Python Profiling
For Python applications, I'll set up profiling:
if [[ "$RUNTIME" == *"python"* ]]; then
echo ""
echo "=== Python Performance Profiling ==="
cat > "$PROFILE_DIR/profile-python.sh" << 'PYTHONPROFILE'
#!/bin/bash
# Python Performance Profiling
echo "Python Profiling Options:"
echo ""
echo "1. cProfile (CPU profiling)"
echo "2. line_profiler (line-by-line)"
echo "3. memory_profiler (memory usage)"
echo ""
# Option 1: cProfile
cpu_profile() {
echo "Running cProfile..."
echo ""
# Find main Python file
if [ -f "main.py" ]; then
ENTRY="main.py"
elif [ -f "app.py" ]; then
ENTRY="app.py"
else
read -p "Enter Python file to profile: " ENTRY
fi
python -m cProfile -o profile.stats "$ENTRY"
# Analyze profile
python << 'ANALYZE'
import pstats
from pstats import SortKey
p = pstats.Stats('profile.stats')
print("\n=== Top 20 Functions by Cumulative Time ===")
p.sort_stats(SortKey.CUMULATIVE).print_stats(20)
print("\n=== Top 20 Functions by Total Time ===")
p.sort_stats(SortKey.TIME).print_stats(20)
ANALYZE
echo ""
echo "✓ Profile saved to: profile.stats"
}
# Option 2: line_profiler
line_profile() {
echo "Using line_profiler..."
echo ""
echo "Add @profile decorator to functions you want to profile"
echo "Example:"
echo " @profile"
echo " def slow_function():"
echo " pass"
echo ""
read -p "Enter Python file with @profile decorators: " ENTRY
kernprof -l -v "$ENTRY"
}
# Option 3: memory_profiler
memory_profile() {
echo "Using memory_profiler..."
echo ""
echo "Add @profile decorator to functions you want to profile"
echo ""
read -p "Enter Python file with @profile decorators: " ENTRY
python -m memory_profiler "$ENTRY"
}
case "${1:-1}" in
1) cpu_profile ;;
2) line_profile ;;
3) memory_profile ;;
*) echo "Invalid option" ;;
esac
PYTHONPROFILE
chmod +x "$PROFILE_DIR/profile-python.sh"
echo "✓ Created Python profiling script: $PROFILE_DIR/profile-python.sh"
# Identify potential bottlenecks
echo ""
echo "Scanning for potential performance issues..."
# Check for list comprehensions that could be generators
LIST_COMP=$(grep -r "\[.*for .* in .*\]" \
--include="*.py" \
--exclude-dir=venv \
--exclude-dir=env \
. 2>/dev/null | wc -l)
echo "💡 List comprehensions found: $LIST_COMP"
echo " Consider using generators for large datasets"
# Check for database queries without indexing hints
DB_QUERIES=$(grep -r "SELECT.*FROM\|\.filter(\|\.get(" \
--include="*.py" \
. 2>/dev/null | wc -l)
if [ "$DB_QUERIES" -gt 0 ]; then
echo "💡 Database queries found: $DB_QUERIES"
echo " Ensure proper indexing and use select_related/prefetch_related"
fi
fi
Phase 5: Browser Profiling
For frontend applications, I'll set up browser profiling:
if [[ "$RUNTIME" == *"browser"* ]]; then
echo ""
echo "=== Browser Performance Profiling ==="
cat > "$PROFILE_DIR/profile-browser.md" << 'BROWSERPROFILE'
# Browser Performance Profiling Guide
## Chrome DevTools Performance Profiling
1. **Open Chrome DevTools**
- F12 or Right-click → Inspect
- Go to "Performance" tab
2. **Record Performance**
- Click Record button (circle)
- Interact with your app (navigate, click, scroll)
- Click Stop
3. **Analyze Results**
- Look for long tasks (> 50ms)
- Identify JavaScript execution time
- Check layout/reflow operations
- Analyze network waterfall
4. **Key Metrics**
- FCP (First Contentful Paint) - < 1.8s
- LCP (Largest Contentful Paint) - < 2.5s
- TBT (Total Blocking Time) - < 200ms
- CLS (Cumulative Layout Shift) - < 0.1
## Lighthouse Audit
```bash
# Install Lighthouse CLI
npm install -g lighthouse
# Run audit
lighthouse http://localhost:3000 --view
# Or programmatically
npx lighthouse http://localhost:3000 --output html --output-path ./lighthouse-report.html
```

## Bundle Analysis

### Webpack Bundle Analyzer

```bash
npm install --save-dev webpack-bundle-analyzer
```

```js
// Add to webpack.config.js
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
module.exports = {
  plugins: [
    new BundleAnalyzerPlugin()
  ]
}
```

Then run the build: `npm run build`

### Vite Bundle Analysis

```bash
npm install --save-dev rollup-plugin-visualizer
```

```js
// Add to vite.config.js
import { visualizer } from 'rollup-plugin-visualizer';
export default {
  plugins: [
    visualizer({ open: true })
  ]
}
```

Then run the build: `npm run build`

## React Profiler

```jsx
import { Profiler } from 'react';

function onRenderCallback(
  id, phase, actualDuration, baseDuration, startTime, commitTime
) {
  console.log(`${id} took ${actualDuration}ms to render`);
}

<Profiler id="MyComponent" onRender={onRenderCallback}>
  <MyComponent />
</Profiler>
```

## Common Performance Issues

### 1. Unnecessary Re-renders
- Use React.memo() for expensive components
- Implement shouldComponentUpdate or use useMemo
- Avoid inline function definitions in JSX

### 2. Large Bundle Size
- Code splitting with React.lazy()
- Tree shaking unused code
- Analyze and remove large dependencies

### 3. Unoptimized Images
- Use WebP format
- Implement lazy loading
- Add proper width/height attributes

### 4. Too Many Network Requests
- Bundle similar resources
- Use HTTP/2 multiplexing
- Implement resource hints (preload, prefetch)

### 5. Blocking JavaScript
- Defer non-critical scripts
- Use async attribute
- Move scripts to bottom of body

## Optimization Checklist

- [ ] Bundle size < 200KB (gzipped)
- [ ] First load < 3 seconds
- [ ] Images optimized and lazy-loaded
- [ ] Code split by route
- [ ] Critical CSS inlined
- [ ] Font loading optimized
- [ ] Service worker for caching
- [ ] Compression enabled (gzip/brotli)
BROWSERPROFILE
echo "✓ Created browser profiling guide: $PROFILE_DIR/profile-browser.md"
# Check for performance anti-patterns
echo ""
echo "Scanning for performance anti-patterns..."

# Check for console.log in production
CONSOLE_LOGS=$(grep -r "console\.log\|console\.debug" \
  --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" \
  --exclude-dir=node_modules \
  . 2>/dev/null | wc -l)
if [ "$CONSOLE_LOGS" -gt 10 ]; then
  echo "⚠️ Found $CONSOLE_LOGS console.log statements"
  echo "   Remove or use conditional logging for production"
fi

# Check for large images
if [ -d "public" ] || [ -d "static" ] || [ -d "assets" ]; then
  echo ""
  echo "Checking for large images..."
  find public static assets -type f \( -name "*.jpg" -o -name "*.png" -o -name "*.jpeg" \) -size +500k 2>/dev/null | while read -r img; do
    SIZE=$(du -h "$img" | cut -f1)
    echo "  ⚠️ Large image: $img ($SIZE)"
  done
fi
fi
## Phase 6: Generate Performance Report
I'll create a comprehensive performance analysis report:
```bash
echo ""
echo "=== Generating Performance Report ==="
cat > "$REPORT" << EOF
# Performance Profile Report
**Generated:** $(date)
**Runtime:** $RUNTIME
**Project:** $(basename $(pwd))
---
## Profiling Setup
### Environment
- **Runtime:** $RUNTIME
- **Node Version:** ${NODE_VERSION:-N/A}
- **Python Version:** ${PYTHON_VERSION:-N/A}
### Tools Available
EOF
# Add tool-specific sections
if [[ "$RUNTIME" == *"nodejs"* ]]; then
cat >> "$REPORT" << 'EOF'
#### Node.js Profiling Tools
- clinic.js suite (doctor, flame, bubbleprof, heapprofiler)
- V8 built-in profiler (--prof)
- Chrome DevTools inspector
- autocannon (HTTP benchmarking)
**Run profiling:**
```bash
.claude/profiling/profile-node.sh 1   # Clinic doctor
.claude/profiling/profile-node.sh 2   # Flame graph
.claude/profiling/profile-node.sh 6   # Load test
```
EOF
fi
if [[ "$RUNTIME" == *"python"* ]]; then
cat >> "$REPORT" << 'EOF'
#### Python Profiling Tools
- cProfile (CPU profiling)
- line_profiler (line-by-line analysis)
- memory_profiler (memory usage)

**Run profiling:**
```bash
.claude/profiling/profile-python.sh 1   # cProfile
.claude/profiling/profile-python.sh 2   # line_profiler
```
EOF
fi
if [[ "$RUNTIME" == *"browser"* ]]; then
cat >> "$REPORT" << 'EOF'
#### Browser Profiling Tools
- Chrome DevTools Performance
- Lighthouse CI
- webpack-bundle-analyzer
- React DevTools Profiler

**See guide:**
```bash
cat .claude/profiling/profile-browser.md
```
EOF
fi
cat >> "$REPORT" << 'EOF'
## Quick Wins

### Immediate Optimizations

1. **Enable Compression**
   - gzip/brotli compression
   - Reduce bundle size by 60-80%
2. **Code Splitting**
   - Split by route
   - Lazy load heavy components
   - Reduce initial load time
3. **Optimize Images**
   - Use WebP format
   - Lazy loading
   - Proper sizing
4. **Remove Unnecessary Dependencies**
   - Audit package.json
   - Replace heavy libraries with lighter alternatives
   - Tree shake unused code
5. **Database Query Optimization**
   - Add proper indexes
   - Use connection pooling
   - Implement caching

## Performance Monitoring

### Continuous Monitoring
- Set up Lighthouse CI in your pipeline
- Monitor Core Web Vitals
- Track bundle size over time
- Profile in production

## Recommended Tools

## Next Steps

1. Run profiling session
2. Identify top 3 bottlenecks
3. Implement optimizations
4. Measure improvements
5. Add performance budgets to CI
6. Set up continuous monitoring

## Resources
EOF
echo "✓ Created performance report: $REPORT"
## Summary
```bash
echo ""
echo "=== ✓ Performance Profiling Setup Complete ==="
echo ""
echo "🎯 Runtime Environment: $RUNTIME"
echo ""
echo "📁 Generated files:"
echo " - $REPORT"
if [[ "$RUNTIME" == *"nodejs"* ]]; then
echo " - $PROFILE_DIR/profile-node.sh"
fi
if [[ "$RUNTIME" == *"python"* ]]; then
echo " - $PROFILE_DIR/profile-python.sh"
fi
if [[ "$RUNTIME" == *"browser"* ]]; then
echo " - $PROFILE_DIR/profile-browser.md"
fi
echo ""
echo "🚀 Quick Start:"
echo ""
if [[ "$RUNTIME" == *"nodejs"* ]]; then
echo "Node.js Profiling:"
echo " $PROFILE_DIR/profile-node.sh 1 # Comprehensive diagnosis"
echo " $PROFILE_DIR/profile-node.sh 2 # CPU flame graph"
echo " $PROFILE_DIR/profile-node.sh 6 # HTTP load test"
echo ""
fi
if [[ "$RUNTIME" == *"python"* ]]; then
echo "Python Profiling:"
echo " $PROFILE_DIR/profile-python.sh 1 # CPU profiling"
echo " $PROFILE_DIR/profile-python.sh 2 # Line-by-line"
echo ""
fi
if [[ "$RUNTIME" == *"browser"* ]]; then
echo "Browser Profiling:"
echo " lighthouse http://localhost:3000 --view"
echo " cat $PROFILE_DIR/profile-browser.md # Full guide"
echo ""
fi
echo "📊 Performance Analysis Workflow:"
echo ""
echo "1. Establish baseline performance"
echo " - Run profiling on current code"
echo " - Record key metrics (response time, memory, CPU)"
echo ""
echo "2. Identify bottlenecks"
echo " - Find slowest functions/routes"
echo " - Check database queries"
echo " - Analyze bundle size"
echo ""
echo "3. Optimize iteratively"
echo " - Fix highest-impact issues first"
echo " - Re-profile after each change"
echo " - Measure improvement"
echo ""
echo "4. Set performance budgets"
echo " - Bundle size limits"
echo " - Response time targets"
echo " - Memory usage constraints"
echo ""
echo "💡 Common Bottlenecks:"
echo " - Synchronous blocking operations"
echo " - N+1 database queries"
echo " - Large bundle sizes"
echo " - Unoptimized images"
echo " - Missing caching"
echo " - Inefficient algorithms"
echo ""
echo "🔗 Integration Points:"
echo " - /debug-root-cause - Investigate performance issues"
echo " - /test - Add performance tests"
echo " - /ci-setup - Add performance budgets to CI"
echo ""
echo "View full report: cat $REPORT"
```
Best Practices
Profiling Strategy:
- Profile in production-like environment
- Use representative workloads
- Run multiple samples for accuracy
- Focus on 80/20 rule (biggest bottlenecks first)
- Measure before and after optimizations
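The measure-before-and-after rule can be sketched with autocannon for HTTP services. A hedged sketch: the URL, durations, and `measure_p99`/`percent_change` helper names are placeholders, and the run is guarded so it only executes when autocannon and jq are present:

```shell
# Sketch: capture a p99 latency baseline, re-measure after optimizing,
# and report the relative change. The URL is a placeholder.
URL="http://localhost:3000"

measure_p99() {
  # autocannon --json prints a JSON summary; .latency.p99 is in ms
  autocannon -d 10 -c 10 --json "$URL" | jq '.latency.p99'
}

percent_change() {
  awk -v before="$1" -v after="$2" \
    'BEGIN { printf "%.1f%%\n", (after - before) / before * 100 }'
}

if command -v autocannon >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
  BASELINE=$(measure_p99)
  echo "Baseline p99: ${BASELINE}ms"
  # ...apply the optimization, restart the app, then re-measure...
  AFTER=$(measure_p99)
  echo "After p99: ${AFTER}ms  (change: $(percent_change "$BASELINE" "$AFTER"))"
fi
```

Run several samples and compare percentiles rather than averages — a single run is too noisy to prove an optimization worked.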
Common Optimizations:
- Cache frequently accessed data
- Use connection pooling for databases
- Implement lazy loading
- Optimize database queries with indexes
- Use CDN for static assets
- Enable compression
- Minimize bundle size
Performance Budgets:
- Set and enforce limits on bundle size
- Target response times for critical paths
- Monitor memory usage patterns
- Track Core Web Vitals (LCP, FID, CLS)
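A bundle-size budget like the one above can be enforced with a small CI check. A minimal sketch — the 200KB limit, the dist/ path, and the `check_bundle_budget` name are illustrative:

```shell
# Sketch: fail the build when the output directory exceeds the budget.
check_bundle_budget() {
  local budget_kb="${1:-200}" dir="${2:-dist}"
  local size_kb
  size_kb=$(du -sk "$dir" 2>/dev/null | cut -f1)  # total size in KB
  size_kb=${size_kb:-0}
  if [ "$size_kb" -gt "$budget_kb" ]; then
    echo "❌ ${dir}: ${size_kb}KB exceeds ${budget_kb}KB budget"
    return 1
  fi
  echo "✓ ${dir}: ${size_kb}KB within ${budget_kb}KB budget"
}

check_bundle_budget 200 dist || echo "Budget exceeded - fail the CI job here"
```

In a real pipeline the `||` branch would `exit 1` so the job fails; measuring the gzipped size would match the checklist more closely than raw `du`.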
Credits: Performance profiling methodology based on Node.js profiling guide, Python performance documentation, Chrome DevTools documentation, and Web.dev performance best practices. Clinic.js integration patterns from NearForm. Lighthouse CI guidance from Google Chrome team.