evez-debate-framework

Framework where multiple AI agents debate to reach better conclusions through adversarial reasoning. Use when building multi-agent systems, consensus-seeking AI, red-teaming with structured opposition, or improving LLM output quality through debate. Covers debate orchestration, judge evaluation, argument scoring, and convergence detection.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant so it can learn this skill

Install skill "evez-debate-framework" with this command: npx skills add evezart/evez-debate-framework

Multi-Agent Debate Framework

Multiple agents debate topics to reach stronger conclusions through adversarial reasoning.

Quick Start

from debate import Debate, Agent, Judge

debate = Debate("Should we use microservices?")
debate.add_agent(Agent("pro", stance="support", model="gpt-4o"))
debate.add_agent(Agent("con", stance="oppose", model="claude-sonnet-4"))
debate.add_agent(Agent("synth", stance="synthesize", model="gemini-2.5-pro"))
debate.set_judge(Judge(criteria=["evidence", "logic", "completeness"]))

result = debate.run(rounds=3)
# Returns: winner, consensus, key_arguments, confidence

How It Works

  1. Proposition: Pro agent argues for the topic
  2. Opposition: Con agent argues against
  3. Rebuttal: Each agent responds to the other's points
  4. Synthesis: Synth agent finds common ground
  5. Judgment: Judge evaluates all arguments and declares outcome
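The five steps above can be sketched as a minimal orchestration loop. This is illustrative only: the `ask` function is a stub standing in for a real model call, and names like `Turn` and `run_debate` are not part of the skill's API.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    agent: str
    phase: str
    text: str

def ask(agent: str, phase: str, topic: str, history: list) -> str:
    # Stub for a real LLM call; returns a placeholder argument.
    return f"[{agent}/{phase}] position on: {topic}"

def run_debate(topic: str, rounds: int = 1) -> list:
    history: list[Turn] = []
    for _ in range(rounds):
        # 1. Proposition: pro agent argues for the topic
        history.append(Turn("pro", "proposition", ask("pro", "proposition", topic, history)))
        # 2. Opposition: con agent argues against
        history.append(Turn("con", "opposition", ask("con", "opposition", topic, history)))
        # 3. Rebuttal: each agent responds to the other's points
        for agent in ("pro", "con"):
            history.append(Turn(agent, "rebuttal", ask(agent, "rebuttal", topic, history)))
        # 4. Synthesis: synth agent finds common ground
        history.append(Turn("synth", "synthesis", ask("synth", "synthesis", topic, history)))
    # 5. Judgment happens after the loop, over the full transcript (omitted here)
    return history

transcript = run_debate("Should we use microservices?", rounds=1)
```

With one round, the transcript holds five turns in the order listed above; the real framework presumably passes the growing history back into each model call.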

Debate Config

Debate(
    topic="Your question",
    rounds=3,                    # Number of debate rounds
    max_words_per_turn=500,      # Limit argument length
    convergence_threshold=0.7,   # Stop early once the consensus score reaches this value
    judge_criteria=["evidence", "logic", "novelty", "completeness"],
)
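One way convergence detection could work, sketched under assumptions: treat each side's key points as a set and use Jaccard overlap as the consensus score. The real framework's scoring method is not documented, so `consensus_score` and `should_stop` are hypothetical helpers.

```python
def consensus_score(pro_points: set, con_points: set) -> float:
    # Jaccard overlap of the two sides' key points; 1.0 means full agreement.
    if not pro_points and not con_points:
        return 1.0
    return len(pro_points & con_points) / len(pro_points | con_points)

def should_stop(score: float, threshold: float = 0.7) -> bool:
    # Mirrors convergence_threshold: end the debate once consensus is high enough.
    return score >= threshold

score = consensus_score({"scaling", "cost", "complexity"},
                        {"scaling", "cost", "hiring"})
# 2 shared points out of 4 distinct ones -> 0.5, below the 0.7 threshold
```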

Agent Stances

  • support — Argues in favor
  • oppose — Argues against
  • synthesize — Finds middle ground
  • devil_advocate — Deliberately attacks the weakest points of each argument so they can be strengthened
  • expert — Provides domain-specific knowledge
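A stance likely maps to a system prompt that conditions the agent's model. The mapping below is a guess at how that could look; both the `STANCE_PROMPTS` table and its wording are illustrative, not the skill's actual prompts.

```python
# Hypothetical stance-to-prompt table; wording is illustrative.
STANCE_PROMPTS = {
    "support": "Argue in favor of the topic with evidence.",
    "oppose": "Argue against the topic and challenge every claim.",
    "synthesize": "Identify common ground and propose a middle position.",
    "devil_advocate": "Attack the weakest points of each argument so they can be strengthened.",
    "expert": "Contribute domain-specific facts and context, not advocacy.",
}

def system_prompt(stance: str, topic: str) -> str:
    # Fail fast on stances the framework does not define.
    if stance not in STANCE_PROMPTS:
        raise ValueError(f"unknown stance: {stance}")
    return f"Topic: {topic}\nRole: {STANCE_PROMPTS[stance]}"

prompt = system_prompt("support", "Should we use microservices?")
```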

Judge

Evaluates arguments on configurable criteria and produces:

  • Winner per round
  • Overall winner
  • Consensus points
  • Unresolved issues
  • Confidence score
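A verdict along these lines could be assembled from per-criterion scores. In practice the scores would come from a judge model; here they are hard-coded, and the `judge` helper and its confidence heuristic (margin between the top two averages) are assumptions, not the skill's documented behavior.

```python
def judge(scores: dict) -> dict:
    # scores: {agent: {criterion: value in [0, 1]}}
    totals = {agent: sum(c.values()) / len(c) for agent, c in scores.items()}
    winner = max(totals, key=totals.get)
    ranked = sorted(totals.values(), reverse=True)
    # Confidence heuristic: gap between the top two average scores.
    confidence = min(1.0, ranked[0] - ranked[1]) if len(ranked) > 1 else 1.0
    return {"winner": winner, "totals": totals, "confidence": confidence}

verdict = judge({
    "pro": {"evidence": 0.8, "logic": 0.7, "completeness": 0.6},
    "con": {"evidence": 0.6, "logic": 0.8, "completeness": 0.5},
})
# pro averages 0.70 vs con's ~0.63, so pro wins with a small confidence margin
```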

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

V19 Trust Manifesto (Automation)

Agent Community cognitive-governance protocol, public trust declaration v1.5.0. V8.6 Agent OS's three core protocols (ITE Intent Transaction Engine three-stage translation / ASM environment-state monitoring with EventBus auto-response / Dual-Track Consensus with ConflictSet-driven evolution) plus alignment with 6 academic frameworks (MIA/AIGA/GCL/SCF...

NEXO Brain (Automation)

Cognitive memory system for AI agents — Atkinson-Shiffrin memory model, semantic RAG, trust scoring, and metacognitive error prevention. Gives your agent per...

Growth Engineer (Automation)

Growth Engineer for mobile apps and agent runtimes including OpenClaw and Hermes. Correlate analytics, crashes, billing, feedback, store signals, and repo co...

Agent News (Automation)

Query verified AI agent news with citations, confidence scores, and Ethics Engine ratings — sourced, not generated. Use instead of generic web search for any...