github-issue-resolver

Autonomous GitHub Issue Resolver Agent with guardrails. Use when the user wants to discover, analyze, and fix open issues in GitHub repositories. Triggers on requests like "fix GitHub issues", "resolve issues in repo", "work on GitHub bugs", or when the user provides a GitHub repository URL and asks for issue resolution. Supports the full workflow from issue discovery to PR submission with safety guardrails preventing scope creep, unauthorized access, and dangerous operations.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "github-issue-resolver" with this command: npx skills add Ashwinhegde19/github-issue-resolver

GitHub Issue Resolver

Autonomous agent for discovering, analyzing, and fixing open GitHub issues — with a 5-layer guardrail system.

⚠️ GUARDRAILS — Read First

Every action goes through guardrails. Before any operation:

  1. Load guardrails.json config
  2. Validate scope (repo, branch, path)
  3. Check action gate (auto/notify/approve)
  4. Validate command against allowlist
  5. Log to audit trail

For guardrail details, see references/guardrails-guide.md.
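As an illustration, steps 3 and 4 of the pipeline (gate lookup and command allowlisting) might be sketched as below. The config shape, gate names, and glob patterns are hypothetical; the real schema lives in guardrails.json.

```python
import fnmatch

# Hypothetical config shape; the real schema lives in guardrails.json.
GUARDRAILS = {
    "gates": {
        "clone": "notify",
        "create_branch": "auto",
        "write_code": "approve",
        "push": "approve",
    },
    "command_allowlist": ["git *", "gh pr create *", "npm test", "pytest*"],
}

def gate_for(action: str) -> str:
    """Step 3: look up the gate level (auto / notify / approve).
    Unknown actions default to the strictest gate."""
    return GUARDRAILS["gates"].get(action, "approve")

def command_allowed(cmd: str) -> bool:
    """Step 4: match a shell command against the glob allowlist."""
    return any(fnmatch.fnmatch(cmd, pat) for pat in GUARDRAILS["command_allowlist"])
```

Defaulting unknown actions to "approve" keeps the gate fail-closed: anything the config does not explicitly relax requires user sign-off.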

Key Rules (Non-Negotiable)

  • Never touch protected branches (main, master, production)
  • Never modify .env, secrets, CI configs, credentials
  • Never force push
  • Never modify dependency files without explicit approval
  • Never modify own skill/plugin files
  • One issue at a time — finish or abandon the current issue before starting a new one
  • All dangerous actions require user approval (write code, commit, push, PR)
  • Everything is logged to audit/ directory
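A minimal sketch of how the branch and path rules above could be enforced. The protected-path patterns here are illustrative; the real list is read from guardrails.json.

```python
from pathlib import PurePosixPath

PROTECTED_BRANCHES = {"main", "master", "production"}
# Illustrative patterns; the real list comes from guardrails.json.
PROTECTED_PATHS = [".env", ".github/workflows", "package-lock.json"]

def branch_allowed(branch: str) -> bool:
    """Never touch protected branches."""
    return branch not in PROTECTED_BRANCHES

def path_allowed(path: str) -> bool:
    """Never modify secrets, CI configs, or lockfiles, including
    anything nested under a protected directory."""
    p = PurePosixPath(path)
    for protected in PROTECTED_PATHS:
        prot = PurePosixPath(protected)
        if p == prot or prot in p.parents:
            return False
    return True
```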

Workflow

Phase 1 — Issue Discovery

Trigger: User provides a GitHub repository (owner/repo).

Steps:

  1. Validate repo against guardrails:

    python3 scripts/guardrails.py repo <owner> <repo>
    

    If blocked, tell the user and stop.

  2. Fetch, score, and present issues using the recommendation engine:

    python3 scripts/recommend.py <owner> <repo>
    

    This automatically fetches open issues, filters out PRs, scores them by severity/impact/effort/freshness, and presents a formatted recommendation.

    Always use recommend.py — never manually format issue output. The script ensures consistent presentation every time.

    For raw JSON (e.g., for further processing):

    python3 scripts/recommend.py <owner> <repo> --json
    

⏹️ STOP. Wait for user to select an issue.


Phase 2 — Fixing

Trigger: User selects an issue.

Steps:

  1. Lock the issue (one-at-a-time enforcement):

    python3 scripts/guardrails.py issue_lock <owner> <repo> <issue_number>
    
  2. Read full issue thread including comments.

  3. Clone the repo (Gate: notify):

    python3 scripts/sandbox.py run git clone https://github.com/<owner>/<repo>.git /tmp/openclaw-work/<repo>
    
  4. Create a safe branch (Gate: auto):

    python3 scripts/sandbox.py run git checkout -b fix-issue-<number>
    
  5. Explore codebase — read relevant files. For each file:

    python3 scripts/guardrails.py path <file_path>
    
  6. Plan the fix — explain approach to user:

    ## Proposed Fix
    - Problem: [root cause]
    - Solution: [what changes]
    - Files: [list of files and what changes in each]
    - Estimated diff size: [lines]
    

⏹️ STOP. Wait for user to approve the plan before implementing.

  7. Implement the fix (Gate: approve):
    • Apply changes
    • Check diff size: python3 scripts/guardrails.py diff <line_count>
    • Log: python3 scripts/audit.py log_action write_code success
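The one-at-a-time enforcement from step 1 can be sketched as a simple lock file. The lock location and JSON shape are assumptions; guardrails.py issue_lock may store state differently.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical lock location; guardrails.py may store state elsewhere.
LOCK_FILE = Path(tempfile.gettempdir()) / "openclaw-issue.lock"

def lock_issue(owner: str, repo: str, number: int) -> bool:
    """Acquire the single-issue lock; refuse if another issue is active."""
    if LOCK_FILE.exists():
        return False
    LOCK_FILE.write_text(json.dumps({"owner": owner, "repo": repo, "issue": number}))
    return True

def unlock_issue() -> None:
    """Release the lock once the issue is finished or abandoned."""
    LOCK_FILE.unlink(missing_ok=True)
```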

Phase 3 — Testing

After implementing:

  1. Find and run tests (Gate: notify):

    python3 scripts/sandbox.py run npm test   # or pytest, cargo test, etc.
    
  2. If tests fail AND autoRollbackOnTestFail is true:

    • Revert all changes
    • Notify user
    • Suggest alternative approach
  3. If no tests exist, write basic tests covering the fix.

  4. Report results to user.
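The test-and-rollback behavior above can be sketched like this. The function name and the git-based revert are illustrative, not the skill's actual implementation.

```python
import subprocess

def run_tests_with_rollback(repo_dir, test_cmd, auto_rollback=True):
    """Run the project's test command; on failure, optionally revert
    uncommitted changes (mirrors the autoRollbackOnTestFail setting)."""
    result = subprocess.run(test_cmd, cwd=repo_dir)
    if result.returncode == 0:
        return True
    if auto_rollback:
        # Discard uncommitted edits and untracked files on the work branch.
        subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir, check=True)
        subprocess.run(["git", "clean", "-fd"], cwd=repo_dir, check=True)
    return False
```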


Phase 4 — Draft PR for Review (Approval REQUIRED)

⚠️ NEVER create PR automatically. Always ask first.

Do NOT dump full diffs in chat. For any non-trivial project, push the branch and let the user review on GitHub where they get syntax highlighting, file-by-file navigation, and inline comments.

  1. Commit changes (Gate: approve):

    python3 scripts/sandbox.py run git add .
    python3 scripts/sandbox.py run git commit -m "Fix #<number>: <title>"
    
  2. Show a change summary (NOT the raw diff) — keep it concise:

    ## Changes
    - **src/models.py** — Added field validation (title length, enum checks)
    - **app.py** — Added validation to POST endpoint, 400 error responses
    - **tests/test_app.py** — 22 new tests covering validation rules
    - 4 files changed, ~100 lines of source + ~150 lines of tests
    - All tests passing ✅
    
  3. Ask explicitly: "Ready to push and create a draft PR?"

  4. Only after user says "yes" (Gate: approve):

    python3 scripts/sandbox.py run git push -u origin fix-issue-<number>
    python3 scripts/sandbox.py run gh pr create --draft --title "..." --body "..."
    

    Note: PRs are always created as draft by default. The PR body should include a detailed description of all changes, test results, and link to the issue (Closes #N).

  5. Share the PR link — user reviews on GitHub.

  6. Unlock the issue:

    python3 scripts/guardrails.py issue_unlock
    

Scripts Reference

| Script | Purpose |
| --- | --- |
| scripts/recommend.py | Primary entry point — fetch, score, and present issues |
| scripts/fetch_issues.py | Raw issue fetcher (used internally by recommend.py) |
| scripts/analyze_issue.py | Deep analysis of a single issue |
| scripts/create_pr.py | PR creation wrapper |
| scripts/guardrails.py | Guardrail enforcement engine |
| scripts/sandbox.py | Safe command execution wrapper |
| scripts/audit.py | Action logger |
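For illustration, audit.py's log_action could be as simple as appending JSON lines. The path and entry schema here are assumptions; the real skill writes to the audit/ directory.

```python
import json
import tempfile
import time
from pathlib import Path

# The real skill writes to ./audit/; a temp dir keeps this sketch side-effect free.
AUDIT_DIR = Path(tempfile.gettempdir()) / "openclaw-audit"

def log_action(action, status, **details):
    """Append one JSON line per action to the audit trail."""
    entry = {"ts": time.time(), "action": action, "status": status, **details}
    AUDIT_DIR.mkdir(exist_ok=True)
    with open(AUDIT_DIR / "actions.jsonl", "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```

Append-only JSON lines keep the trail tamper-evident and easy to grep or parse later.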

References

  • references/guardrails-guide.md: detailed guardrail documentation
