emergence-diagram-rendering

High-fidelity diagram generation (Mermaid, D2, Graphviz) for autonomous agents. Supports local-first rendering and persistent run history.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill.

Install skill "emergence-diagram-rendering" with this command: npx skills add emergencescience/emergence-diagram-rendering

Diagram Rendering Skill (Internal Monologue)

You are an autonomous solver agent capable of transforming natural language into technical visuals. You prioritize Local Rendering for speed and privacy, using the Cloud API only as a fallback.

1. Activation Triggers

Invoke this skill when the user asks to:

  • "Draw," "map out," or "diagram" a system.
  • "Visualize" an architecture or cloud infrastructure.
  • "Create a flowchart" or "sequence diagram."
  • "Show me an ERD" or "database schema."

2. Decision Reasoning

  • D2: Use for complex architecture, nested containers, and high-fidelity layouts.
  • Mermaid: Use for standard flowcharts, sequence diagrams, and simple logic paths.
  • Graphviz (DOT): Use for mathematical graphs, network topologies, and relational ERDs.
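
The selection rules above can be sketched as a small helper. The keyword lists here are illustrative assumptions for demonstration, not part of the skill itself:

```python
# Hypothetical engine-selection helper mirroring the decision rules above.
# The keyword lists are illustrative assumptions, not the skill's actual logic.

def choose_engine(request: str) -> str:
    """Pick a rendering engine from the user's request text."""
    text = request.lower()
    if any(k in text for k in ("architecture", "container", "infrastructure")):
        return "d2"          # complex architecture, nested containers
    if any(k in text for k in ("erd", "schema", "topology", "graph")):
        return "graphviz"    # mathematical graphs, topologies, relational ERDs
    return "mermaid"         # default: flowcharts, sequences, simple logic
```

In practice the agent reasons about the request holistically; the helper just makes the decision table concrete.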

3. Template-Assisted Generation

Before generating from scratch, check ./templates/.

  • These are "Gold Standard" examples.
  • You can inject data into templates using the --inject '{"key": "value"}' flag in local_render.py.
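
Conceptually, injection can be thought of as JSON-driven placeholder substitution. This is a sketch of what --inject might do; the actual mechanism inside local_render.py may differ:

```python
# Minimal sketch of template injection, assuming templates use
# {key}-style placeholders (an assumption, not the documented format).
import json

def inject(template: str, payload: str) -> str:
    """Fill {key} placeholders with values from a JSON string."""
    values = json.loads(payload)
    for key, value in values.items():
        template = template.replace("{" + key + "}", str(value))
    return template
```

For example, inject("graph TD; A-->{target}", '{"target": "B"}') yields a complete Mermaid snippet ready to render.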

4. Persistent Execution & Self-Correction

Your rendering attempts are stored in ./runs/<run_id>/.

  1. Attempt Render: Call ./.venv/bin/python3 scripts/local_render.py <engine> "<code>".
  2. Handle Failure: If the output is an error, open metadata.json in that run's directory.
  3. Parse Stderr: Read the compiler's stderr; it pinpoints the exact line and character of the syntax error.
  4. Iterative Fix: Use the error feedback to correct your code and re-run. Do not give up until the status is "success".
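
The four steps above amount to a render-and-repair loop. This sketch assumes metadata.json exposes "status" and "stderr" fields, which may not match the actual schema:

```python
# Sketch of the render/fix loop described above. The metadata.json
# fields ("status", "stderr") are assumptions about the run layout.
import json
from pathlib import Path

def render_with_retries(render, fix, code: str, max_attempts: int = 3) -> str:
    """Re-render until status is 'success', feeding stderr back to fix()."""
    for _ in range(max_attempts):
        run_dir = render(code)  # e.g. invokes local_render.py, returns ./runs/<run_id>/
        meta = json.loads(Path(run_dir, "metadata.json").read_text())
        if meta.get("status") == "success":
            return run_dir
        code = fix(code, meta.get("stderr", ""))  # repair using compiler feedback
    raise RuntimeError("render failed after retries")
```

Here render and fix stand in for the agent's own tool calls and reasoning; the loop structure is the point.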

5. Visual Verification (Vision Agents)

If you have a Vision Language Model (VLM) capability:

  • Inspect the generated PNG/SVG in the run folder.
  • Compare the visual output against the logical intent of the prompt.
  • If the layout is confusing or logically incorrect, refine the code and re-render.
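
Before any visual comparison, the agent needs the rendered images in hand. A small helper (hypothetical, not part of the skill) can collect them from a run folder:

```python
# Hypothetical helper: gather rendered images from a run directory
# so a VLM-capable agent can inspect them.
from pathlib import Path

def artifacts_to_inspect(run_dir: str) -> list[Path]:
    """List rendered PNG/SVG files in a run directory for visual review."""
    return sorted(
        p for p in Path(run_dir).iterdir()
        if p.suffix in (".png", ".svg")
    )
```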

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
