thought-based-reasoning

Use when tackling complex reasoning tasks that require step-by-step logic: multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails. Provides a comprehensive guide to Chain-of-Thought and related prompting techniques.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the skill by running (or sending to your AI assistant) this command:

npx skills add zpankz/mcp-skillset/zpankz-mcp-skillset-thought-based-reasoning

Thought-Based Reasoning Techniques for LLMs

Overview

Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
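As a minimal sketch of the idea, zero-shot CoT differs from direct prompting only in a trailing reasoning trigger. Only the prompt construction is shown here; the model call itself depends on your client library, so none is assumed:

```python
# Minimal sketch: direct prompting vs. zero-shot Chain-of-Thought.
# The trigger phrase nudges the model to emit intermediate reasoning
# steps before the final answer, instead of answering in one jump.

COT_TRIGGER = "Let's think step by step."

def direct_prompt(question: str) -> str:
    """Plain prompt: the model tends to answer immediately."""
    return f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: append the reasoning trigger after the question."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")
print(zero_shot_cot_prompt(question))
```

The completion that follows the trigger then contains the reasoning trace, from which the final answer is extracted.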

Quick Reference

Technique         When to Use                                   Complexity  Accuracy Gain
Zero-shot CoT     Quick reasoning, no examples available        Low         +20-60%
Few-shot CoT      Have good examples, consistent format needed  Medium      +30-70%
Self-Consistency  High-stakes decisions, need confidence        Medium      +10-20% over CoT
Tree of Thoughts  Complex problems requiring exploration        High        +50-70% on hard tasks
Least-to-Most     Multi-step problems with subproblems          Medium      +30-80%
ReAct             Tasks requiring external information          Medium      +15-35%
PAL               Mathematical/computational problems           Medium      +10-15%
Reflexion         Iterative improvement, learning from errors   High        +10-20%

When to Use Thought-Based Reasoning

Use CoT techniques for:

  • Multi-step arithmetic or math word problems
  • Commonsense reasoning requiring logical deduction
  • Symbolic reasoning tasks
  • Complex problems where simple prompting fails

Start with:

  • Zero-shot CoT for quick prototyping ("Let's think step by step")
  • Few-shot CoT when you have good examples
  • Self-Consistency for high-stakes decisions
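When you do have good examples, few-shot CoT prepends worked exemplars whose answers spell out the reasoning. A hedged sketch, using the well-known tennis-ball exemplar for illustration:

```python
# Sketch of few-shot CoT: exemplars demonstrate the reasoning format
# the model should imitate, ending each answer with a final-answer line.

EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
        "How many tennis balls does he have now?",
        "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.",
    ),
]

def few_shot_cot_prompt(question: str) -> str:
    """Worked Q/A pairs first, then the new question with an open 'A:'."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXEMPLARS]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

print(few_shot_cot_prompt(
    "If there are 3 cars and each car has 4 wheels, "
    "how many wheels are there in total?"))
```

Keeping exemplar answers in one consistent format also makes the final answer easy to extract downstream.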

Progressive Loading

L2 Content (loaded when core techniques needed):

  • See: references/core-techniques.md
    • Chain-of-Thought (CoT) Prompting
    • Zero-shot Chain-of-Thought
    • Self-Consistency Decoding
    • Tree of Thoughts (ToT)
    • Least-to-Most Prompting
    • ReAct (Reasoning + Acting)
    • PAL (Program-Aided Language Models)
    • Reflexion

L3 Content (loaded when decision guidance and best practices needed):

  • See: references/guidance.md
    • Decision Matrix: Which Technique to Use
    • Best Practices
    • Common Mistakes
    • References

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. No summaries were provided by the upstream source.

  • network-meta-analysis-appraisal (Research)
  • software-architecture (General)
  • cursor-skills (General)
  • api design (General)