# Rails Audit Skill (thoughtbot Best Practices)
Perform comprehensive Ruby on Rails application audits based on thoughtbot's Ruby Science and Testing Rails best practices, with emphasis on Plain Old Ruby Objects (POROs) over Service Objects.
## Audit Scope
The audit can be run in two modes:
- Full Application Audit: Analyze entire Rails application
- Targeted Audit: Analyze specific files or directories
## Execution Flow

### Step 1: Determine Audit Scope
Ask the user, or infer from the request:
- Full audit: analyze all of `app/`, `spec/` or `test/`, `config/`, `db/`, and `lib/`
- Targeted audit: analyze specified paths only
### Step 2: Collect Test Coverage Data (Optional)
Before doing anything else in this step, use AskUserQuestion to ask the user:
- Question: "Would you like to collect actual test coverage data using SimpleCov? This will temporarily set up SimpleCov (if not already present), run the test suite, and capture real coverage metrics."
- Options: "Yes, collect coverage (Recommended)" / "No, use estimation"
If the user declines: skip the rest of this step entirely. Use estimation mode in Steps 4 and 5. Do NOT spawn the subagent.
If the user accepts: use the Task tool to spawn a general-purpose subagent with this prompt:

> Read the file `agents/simplecov_agent.md` and follow all steps described in it. The audit scope is: {{SCOPE from Step 1}}. Return the coverage data in the output format specified in that file.
After the agent finishes, run `rm -rf coverage/` to ensure the coverage directory is removed even if the agent failed to clean up.

Interpreting the agent's response:
- If the response starts with `COVERAGE_FAILED`: no coverage data — use estimation mode in Steps 4 and 5. Note the failure reason in the report.
- If the response starts with `COVERAGE_DATA`: parse the structured data and keep it in context for Steps 4 and 5. The data includes overall coverage, per-directory breakdowns, lowest-coverage files, and zero-coverage files.
### Step 2b: Collect Code Quality Metrics (Optional)
Before doing anything else in this step, use AskUserQuestion to ask the user:
- Question: "Would you like to run RubyCritic for code quality metrics (complexity, duplication, code smells)? This will temporarily set up RubyCritic (if not already present) and analyze your codebase."
- Options: "Yes, run RubyCritic (Recommended)" / "No, skip static analysis"
If the user declines: skip the rest of this step entirely. Do NOT spawn the subagent.
If the user accepts: use the Task tool to spawn a general-purpose subagent with this prompt:

> Read the file `agents/rubycritic_agent.md` and follow all steps described in it. The audit scope is: {{SCOPE from Step 1}}. Return the code quality data in the output format specified in that file.

After the agent finishes, run `rm -rf tmp/rubycritic/` to ensure the output directory is removed even if the agent failed to clean up.
Interpreting the agent's response:
- If the response starts with `RUBYCRITIC_FAILED`: no code quality data — note the failure reason in the report.
- If the response starts with `RUBYCRITIC_DATA`: parse the structured data and keep it in context for Steps 4 and 5. The data includes overall score, per-directory ratings, worst-rated files, top smells, and most complex files.
### Step 3: Load Reference Materials
Before analyzing, read the relevant reference files:
- `references/code_smells.md` - Code smell patterns to identify
- `references/testing_guidelines.md` - Testing best practices
- `references/poro_patterns.md` - PORO and ActiveModel patterns
- `references/security_checklist.md` - Security vulnerability patterns
- `references/rails_antipatterns.md` - Rails-specific antipatterns (external services, migrations, performance)
### Step 4: Analyze Code by Category
Analyze in this order:

1. **Testing Coverage & Quality**
   - If SimpleCov data was collected in Step 2, use actual coverage percentages instead of estimates
   - Cross-reference per-file SimpleCov data: files with 0% coverage = "missing tests"
   - Check for missing test files
   - Identify untested public methods
   - Review test structure (Four-Phase Test)
   - Check for testing antipatterns
2. **Security Vulnerabilities**
   - SQL injection risks
   - Mass assignment vulnerabilities
   - XSS vulnerabilities
   - Authentication/authorization issues
   - Sensitive data exposure
3. **Models & Database**
   - Fat model detection
   - Missing validations
   - N+1 query risks
   - Callback complexity
   - Law of Demeter violations (voyeuristic models)
   - If RubyCritic data was collected, flag models with D/F ratings or high complexity
4. **Controllers**
   - Fat controller detection
   - Business logic in controllers
   - Missing strong parameters
   - Response handling
   - Monolithic controllers (non-RESTful actions, more than 7 actions)
   - Bloated sessions (storing objects instead of IDs)
   - If RubyCritic data was collected, flag controllers with D/F ratings or high complexity
5. **Code Design & Architecture**
   - Service Objects → recommend PORO refactoring
   - Large classes
   - Long methods
   - Feature envy
   - Law of Demeter violations
   - Single Responsibility violations
   - If RubyCritic data was collected, cross-reference D/F-rated files and high-complexity files with manual code review findings
6. **Views & Presenters**
   - Logic in views (PHPitis)
   - Missing partials for DRY
   - Helper complexity
   - Query logic in views
7. **External Services & Error Handling**
   - Fire and forget (missing exception handling for HTTP calls)
   - Sluggish services (missing timeouts, synchronous calls that should be backgrounded)
   - Bare rescue statements
   - Silent failures (save without checking return value)
8. **Database & Migrations**
   - Messy migrations (model references, missing down methods)
   - Missing indexes on foreign keys, polymorphic associations, uniqueness validations
   - Performance antipatterns (Ruby iteration vs SQL queries)
   - Bulk operations without transactions
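For the external-services category, a defensive HTTP call might look like this (a minimal sketch; the class name, host, and path are hypothetical): explicit timeouts, narrow rescues instead of a bare `rescue`, and a result the caller must check rather than a silent failure.

```ruby
require "net/http"
require "json"

# Wraps an external HTTP call with explicit timeouts and narrow rescues.
class ProfileClient
  Result = Struct.new(:ok, :data, :error)

  def initialize(host, port: 443, open_timeout: 2, read_timeout: 5)
    @http = Net::HTTP.new(host, port)
    @http.use_ssl = (port == 443)
    @http.open_timeout = open_timeout # fail fast instead of hanging the request
    @http.read_timeout = read_timeout
  end

  def fetch(path)
    response = @http.get(path)
    Result.new(true, JSON.parse(response.body), nil)
  rescue Net::OpenTimeout, Net::ReadTimeout, SocketError,
         Errno::ECONNREFUSED, JSON::ParserError => e
    # No bare rescue: only expected network/parse failures are caught,
    # and the failure is surfaced to the caller instead of swallowed.
    Result.new(false, nil, e.message)
  end
end
```

A caller checks `result.ok` explicitly, so a failed call can never be silently ignored the way an unchecked `save` can.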
### Step 5: Generate Audit Report

Create `RAILS_AUDIT_REPORT.md` in the project root with the structure defined in `references/report_template.md`.
When SimpleCov coverage data was collected in Step 2, use the SimpleCov variant of the Testing section in the report template. When coverage data is not available, use the estimation variant.
When RubyCritic data was collected in Step 2b, include the Code Quality Metrics section in the report using the RubyCritic variant from the report template. When RubyCritic data is not available, use the "not available" variant.
## Severity Definitions
- Critical: Security vulnerabilities, data loss risks, production-breaking issues
- High: Performance issues, missing tests for critical paths, major code smells
- Medium: Code smells, convention violations, maintainability concerns
- Low: Style issues, minor improvements, suggestions
## Key Detection Patterns
### Service Object → PORO Refactoring

When you find classes in `app/services/`:
- Classes named `*Service`, `*Manager`, `*Handler`
- Classes with only `.call` or `.perform` methods
- Recommend: rename to domain nouns and include `ActiveModel::Model`
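As a sketch of that recommendation (the class names are hypothetical, not from any audited app):

```ruby
# Before: a verb-named service exposing a single class method.
#
#   class InvoiceCreationService
#     def self.call(customer, amount)
#       # ...
#     end
#   end
#
# After: a domain-noun PORO. In a Rails app you would typically also
# `include ActiveModel::Model` to get validations and form integration;
# it is omitted here so the sketch runs with plain Ruby.
class Invoice
  attr_reader :customer, :amount

  def initialize(customer:, amount:)
    @customer = customer
    @amount = amount
  end

  # Domain behavior lives on the object instead of a Service.call.
  def total_with_tax(rate: 0.1)
    (amount * (1 + rate)).round(2)
  end
end
```

Usage: `Invoice.new(customer: "Acme", amount: 100.0).total_with_tax` returns `110.0`. The noun-based name gives the behavior an obvious home and makes the object trivial to unit-test.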
### Fat Model Detection
Models with:
- More than 200 lines
- More than 15 public methods
- Multiple unrelated responsibilities
- Recommend: Extract to POROs using composition
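One way that extraction can look (a hedged sketch; `OrderPricing` and the `Order` model are hypothetical): pricing math pulled out of a fat model into a PORO the model composes.

```ruby
# Extracted from a hypothetical fat Order model: all pricing math
# now lives in one small, independently testable PORO.
class OrderPricing
  TAX_RATE = 0.08

  def initialize(line_items)
    @line_items = line_items # array of { price:, quantity: } hashes
  end

  def subtotal
    @line_items.sum { |item| item[:price] * item[:quantity] }
  end

  def total
    (subtotal * (1 + TAX_RATE)).round(2)
  end
end

# The model then composes the PORO instead of holding the logic:
#
#   class Order < ApplicationRecord
#     def pricing
#       OrderPricing.new(line_items.map { |li| { price: li.price, quantity: li.quantity } })
#     end
#   end
```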
### Fat Controller Detection
Controllers with:
- Actions over 15 lines
- Business logic (not request/response handling)
- Multiple instance variable assignments
- Recommend: Extract to form objects or domain models
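A form-object extraction might look like this (hypothetical names; a minimal sketch without ActiveModel so it runs standalone):

```ruby
# A form object absorbing validation and normalization that previously
# lived inline in a controller action. In Rails you would typically
# `include ActiveModel::Model` for errors and form_with support.
class SignupForm
  EMAIL_PATTERN = /\A[^@\s]+@[^@\s]+\z/

  attr_reader :email, :errors

  def initialize(email:)
    @email = email.to_s.strip.downcase
    @errors = []
  end

  def valid?
    @errors = []
    @errors << "email is invalid" unless EMAIL_PATTERN.match?(@email)
    @errors.empty?
  end
end

# The controller action shrinks to request/response handling:
#
#   def create
#     form = SignupForm.new(email: params[:email])
#     if form.valid?
#       # persist and redirect
#     else
#       render :new, status: :unprocessable_entity
#     end
#   end
```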
### Missing Test Detection

For each Ruby file in `app/`:
- Check for a corresponding `_spec.rb` or `_test.rb`
- Check for tested public methods
- Report untested files and methods
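The file-to-spec mapping can be sketched in a few lines of Ruby (assuming the conventional `spec/` mirror of `app/`; helper names are illustrative):

```ruby
# Maps an app file to the spec path RSpec convention expects,
# e.g. app/models/user.rb -> spec/models/user_spec.rb.
def expected_spec_path(app_path)
  app_path.sub(%r{\Aapp/}, "spec/").sub(/\.rb\z/, "_spec.rb")
end

# Files in scope whose expected spec is absent get reported as untested.
def untested_files(app_files, spec_files)
  app_files.reject { |f| spec_files.include?(expected_spec_path(f)) }
end
```

The same idea works for Minitest by substituting `test/` and `_test.rb` in the patterns.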
## Analysis Commands

Use these bash patterns for file discovery:

```bash
# Find all Ruby files by type
find app/models -name "*.rb" -type f
find app/controllers -name "*.rb" -type f
find app/services -name "*.rb" -type f 2>/dev/null

# Find test files
find spec -name "*_spec.rb" -type f 2>/dev/null
find test -name "*_test.rb" -type f 2>/dev/null

# Count lines per file
wc -l app/models/*.rb

# Find long files (over 200 lines); exclude wc's trailing "total" line
find app -name "*.rb" -exec wc -l {} + | awk '$1 > 200 && $2 != "total"'
```
## Report Output

Always save the audit report to `/mnt/user-data/outputs/RAILS_AUDIT_REPORT.md` and present it to the user.