mergeiq

Score the complexity of any GitLab MR or GitHub PR using a 4-dimension framework: Size (20%), Cognitive Load (30%), Review Effort (30%), and Risk/Impact (20%). Zero external dependencies. Use when asked to review, triage, score, or prioritise pull requests and merge requests by complexity.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "mergeiq" with this command: npx skills add larry.l.fang@gmail.com/mergeiq

MR / PR Complexity Scorer

A provider-agnostic complexity scoring engine for Merge Requests (GitLab) and Pull Requests (GitHub). Built on a 4-dimension framework that captures what "complex" actually means in code review — not just lines changed.

Complexity Dimensions

| Dimension | Weight | What it measures |
|---|---|---|
| Size | 20% | Volume of code changed (logarithmic — big PRs saturate fast) |
| Cognitive Load | 30% | Directory breadth, cross-module changes, file diversity |
| Review Effort | 30% | Discussion depth, reviewer count, approval iterations |
| Risk / Impact | 20% | Breaking changes, migrations, security labels, dependencies |

Output tiers: trivial / simple / moderate / complex / highly_complex
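
The logarithmic size curve can be sketched roughly as follows (an illustrative assumption, not the engine's exact formula; `saturation` is a hypothetical parameter marking where the score caps at 100):

```python
import math

def size_score(lines_changed: int, saturation: int = 1000) -> float:
    # Small diffs gain score quickly; large diffs flatten out, so a
    # 5,000-line PR does not dwarf a 1,000-line one on this dimension.
    if lines_changed <= 0:
        return 0.0
    return min(100.0, 100 * math.log1p(lines_changed) / math.log1p(saturation))

size_score(50)    # roughly 57
size_score(5000)  # capped at 100.0
```

This is why "big PRs saturate fast": beyond the saturation point, extra lines stop moving the Size dimension.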

When to Use

  • Triaging a backlog of open PRs by complexity before a review session
  • Flagging high-complexity MRs for mandatory second review
  • Generating weekly complexity trend reports for a team
  • Understanding why a PR is taking a long time (dimension breakdown)
  • Building engineering director dashboards (see score_mr.py)

Quick Start

# Score a GitHub PR (basic — just the PR object)
curl -s "https://api.github.com/repos/OWNER/REPO/pulls/NUMBER" \
     -H "Authorization: Bearer $GITHUB_TOKEN" \
     | python score_mr.py --provider github

# Score a GitLab MR (with diff stats)
curl -s "https://gitlab.com/api/v4/projects/PROJECT_ID/merge_requests/IID?include_diff_stats=true" \
     -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     | python score_mr.py --provider gitlab

# Richer scoring — fetch files + reviews too
curl -s ".../pulls/NUMBER" > pr.json
curl -s ".../pulls/NUMBER/files" > files.json
curl -s ".../pulls/NUMBER/reviews" > reviews.json
python score_mr.py --provider github --pr pr.json --files files.json --reviews reviews.json

Example Output

{
  "provider": "github",
  "id": 412,
  "title": "Migrate auth service to OAuth2",
  "score": {
    "total": 74.2,
    "tier": "complex",
    "size": 68.0,
    "cognitive": 81.5,
    "review_effort": 72.0,
    "risk_impact": 60.0
  },
  "summary": "High mental load: 14 files across 6 directories, 3 reviewers involved",
  "tier_insight": "Needs careful review — high cognitive load and cross-module impact.",
  "stats": {
    "additions": 412,
    "deletions": 87,
    "files_changed": 14,
    "reviewers": 3,
    "discussions": 9,
    "net_lines": 325
  }
}
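
For triage, the JSON above can be grouped by tier. A minimal sketch (field names follow the example output; `score_mr.py`'s real schema may differ slightly):

```python
# Group scored results by complexity tier for a review-session triage list.
def triage_by_tier(results: list[dict]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {}
    for r in results:
        buckets.setdefault(r["score"]["tier"], []).append(r["title"])
    return buckets

scored = [{"title": "Migrate auth service to OAuth2",
           "score": {"tier": "complex", "total": 74.2}}]
triage_by_tier(scored)  # {'complex': ['Migrate auth service to OAuth2']}
```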

Files

mr-complexity-scorer/
  SKILL.md                      # This file
  mr_complexity_service.py      # Core 4-dimension scoring engine (pure Python)
  score_mr.py                   # CLI: pipe in API JSON, get complexity JSON out
  requirements.txt              # No external deps — stdlib only, Python 3.9+
  adapters/
    gitlab_adapter.py           # GitLab MR API dict → MRData
    github_adapter.py           # GitHub PR API dict → MRData

Using in Your Code

from mr_complexity_service import MRComplexityCalculator, MRData
from adapters.github_adapter import github_pr_to_mrdata

# Build MRData from a GitHub PR dict (from API or webhook payload)
mr_data = github_pr_to_mrdata(
    pr=pr_dict,
    files=files_list,       # optional: /pulls/:number/files
    commits=commits_list,   # optional: /pulls/:number/commits
    reviews=reviews_list,   # optional: /pulls/:number/reviews
)

calculator = MRComplexityCalculator()
result = calculator.calculate(mr_data)

print(result.complexity_tier)   # "complex"
print(result.total_score)       # 74.2
print(result.human_summary)     # "High mental load: ..."

Enrichment — What's Worth Fetching

| Extra API call | Unlocks | Worth it? |
|---|---|---|
| /pulls/:n/files | File path cognitive analysis | Yes, always |
| /pulls/:n/reviews | Accurate reviewer count + iterations | Yes, for the review dimension |
| /pulls/:n/commits | Breaking-change detection | Nice to have |
| /pulls/:n/comments | Inline discussion count | Optional |

Without enrichment, the scorer still works — it uses changed_files, review_comments, and requested_reviewers from the base PR object. Enriched data improves accuracy.
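
The base-object fallback can be illustrated like this (these are real GitHub REST API field names; the scorer's actual extraction lives in adapters/github_adapter.py, and this helper is only a sketch):

```python
# Signals available from a bare GitHub PR object, with no extra API calls.
def base_signals(pr: dict) -> dict:
    return {
        "files_changed": pr.get("changed_files", 0),
        "discussions": pr.get("review_comments", 0),
        "reviewers": len(pr.get("requested_reviewers") or []),
    }

base_signals({"changed_files": 14, "review_comments": 9,
              "requested_reviewers": [{"login": "alice"}, {"login": "bob"}]})
# {'files_changed': 14, 'discussions': 9, 'reviewers': 2}
```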

Extending to Other Providers

Implement a thin adapter that maps your provider's MR/PR dict to MRData:

from mr_complexity_service import MRData

def linear_issue_to_mrdata(issue: dict) -> MRData:
    return MRData(
        iid=issue["number"],
        title=issue["title"],
        # ... map your fields
    )

Works with: GitLab, GitHub, Gitea, Bitbucket, Azure DevOps — anything with MR/PR metadata.

Adjusting Weights

from mr_complexity_service import MRComplexityCalculator, ComplexityConfig

config = ComplexityConfig(
    weight_size=0.15,
    weight_cognitive=0.35,
    weight_review=0.30,
    weight_risk=0.20,
)
calculator = MRComplexityCalculator(config=config)
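
Under these weights, the four dimension scores presumably combine as a weighted sum into the total. A sketch of that arithmetic (the engine's internals may normalise differently):

```python
# Combine four 0-100 dimension scores into a total, using the default
# weights from the dimensions table above.
def weighted_total(size, cognitive, review, risk,
                   w_size=0.20, w_cognitive=0.30, w_review=0.30, w_risk=0.20):
    return (size * w_size + cognitive * w_cognitive
            + review * w_review + risk * w_risk)

weighted_total(100, 100, 100, 100)  # 100.0
```

Raising `w_cognitive` while lowering `w_size`, as in the config above, makes sprawling cross-module changes outrank merely large ones.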

