ollama-result-handling

Internal guidance for presenting ollama-companion output back to the user

Safety Notice

This listing is imported from SkillsMP metadata and should be treated as untrusted until upstream source review is completed.

Installation

Install the "ollama-result-handling" skill with:

npx skills add 941design/skillsmp-941design-941design-ollama-result-handling

No markdown body

This source entry includes only metadata; the full SKILL.md markdown content is not available here.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

adversarial-review (Coding)

Run an Ollama-backed adversarial review that challenges the implementation approach and design choices, not just defects. Mirrors /codex:adversarial-review but routes through the user's local LLM.

Source: Repository · Trust: Needs Review

ollama-cli-runtime (Coding)

Internal helper contract for calling the ollama-companion runtime from Claude Code.

Source: Repository · Trust: Needs Review

review (Coding)

Run a code review on local git state using a local Ollama-served model via the Claude Code headless harness. Mirrors /codex:review but routes through the user's local LLM.

Source: Repository · Trust: Needs Review

setup (Coding)

Verify that the Ollama daemon is reachable, the configured review model is pulled, the Anthropic-compatibility endpoint responds, and the Claude CLI is wired up for local-LLM review. Run this before /ollama:review or /ollama:adversarial-review.

Source: Repository · Trust: Needs Review
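The pre-flight checks the setup skill describes can be sketched as a small shell script. This is a hedged illustration, not the skill's actual implementation: the endpoint URL and model name are illustrative defaults, and only documented interfaces (Ollama's GET /api/tags listing of pulled models, the `ollama list` command, and a `claude` binary on PATH) are assumed.

```shell
# Illustrative pre-flight checks; OLLAMA_URL and REVIEW_MODEL are assumptions,
# override them via the environment to match your configuration.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"
REVIEW_MODEL="${REVIEW_MODEL:-llama3}"

# Daemon reachable? Ollama serves the pulled-model list at GET /api/tags.
daemon_up() {
  curl -sf --max-time 2 "$1/api/tags" >/dev/null
}

# Model pulled? `ollama list` prints one pulled model per line.
model_pulled() {
  ollama list 2>/dev/null | grep -q "^$1"
}

if daemon_up "$OLLAMA_URL"; then
  echo "daemon: reachable at $OLLAMA_URL"
  if model_pulled "$REVIEW_MODEL"; then
    echo "model: $REVIEW_MODEL pulled"
  else
    echo "model: $REVIEW_MODEL missing (try: ollama pull $REVIEW_MODEL)"
  fi
else
  echo "daemon: unreachable at $OLLAMA_URL"
fi

# Claude CLI wired up?
command -v claude >/dev/null && echo "claude: on PATH" || echo "claude: not found"
```

Each check degrades gracefully, so the script reports every problem it can see rather than stopping at the first failure.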