cognitive-flexibility

Cognitive Flexibility Skill - AI cognitive flexibility with four modes. Supports automatic mode switching and metacognitive monitoring. Use when:

  • Complex reasoning and multi-step thinking is needed
  • Self-assessment and reflection are required
  • Knowledge must transfer across scenarios
  • Creative problem solving is called for
  • Task complexity is above medium (estimated >2 hours)

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "cognitive-flexibility" with this command: npx skills add DaoShi (optimizer)/cognitive-flexibility

Cognitive Flexibility Skill

Overview

This Skill implements four cognitive modes based on human cognitive science:

| Mode | Name            | Driver             | Scenario           | Core Ability          |
|------|-----------------|--------------------|--------------------|-----------------------|
| OOA  | Experience Mode | Memory-driven      | Familiar scenarios | Pattern matching      |
| OODA | Reasoning Mode  | Knowledge-driven   | Complex problems   | Chain reasoning       |
| OOCA | Creative Mode   | Association-driven | Innovation needs   | Analogy generation    |
| OOHA | Discovery Mode  | Hypothesis-driven  | Exploration        | Hypothesis generation |
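As a rough illustration of the driver column above, mode selection could be dispatched on task characteristics like this. This is a hypothetical sketch, not the Skill's actual controller logic; `select_mode` and its flags are illustrative names only.

```python
# Hypothetical sketch: map task characteristics to one of the four
# cognitive modes. The real CognitiveController may decide differently.
def select_mode(task: str, familiar: bool, needs_novelty: bool,
                exploratory: bool) -> str:
    """Pick a cognitive mode from coarse task characteristics."""
    if familiar:
        return "OOA"    # memory-driven pattern matching
    if exploratory:
        return "OOHA"   # hypothesis-driven discovery
    if needs_novelty:
        return "OOCA"   # association-driven analogy generation
    return "OODA"       # knowledge-driven chain reasoning

print(select_mode("debug a known error", familiar=True,
                  needs_novelty=False, exploratory=False))  # OOA
```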

Quick Start

Basic Usage

import asyncio

from scripts.cognitive_controller import CognitiveController

# Create controller
controller = CognitiveController(confidence_threshold=0.7)

# Execute task (auto mode selection); process() is a coroutine, so run
# it in an event loop. `tools` is your own tool list, defined elsewhere.
task = "Analyze user feedback data"
result = asyncio.run(controller.process(task, tools=tools))

# View result
print(f"Mode: {result['mode']}")
print(f"Answer: {result['answer']}")
print(f"Confidence: {result['assessment']['overall_score']:.2f}")

Manual Mode Selection

# Note: the calls below are coroutines; run them inside an async
# context (e.g. via asyncio.run)

# OODA reasoning mode
from scripts.chain_reasoner import OODAReasoner
reasoner = OODAReasoner()
result = await reasoner.process(task, tools=tools)

# OOA experience mode
from scripts.pattern_matcher import PatternMatcher
matcher = PatternMatcher()
result = await matcher.match(task, tools=tools)

# OOCA creative mode
from scripts.creative_explorer import CreativeExplorer
explorer = CreativeExplorer()
result = await explorer.explore(task)

# OOHA discovery mode
from scripts.hypothesis_generator import HypothesisGenerator
generator = HypothesisGenerator()
result = await generator.discover(task)

Features

  • 4 Cognitive Modes: OOA/OODA/OOCA/OOHA
  • Auto Mode Switching: Cognitive Controller selects best mode
  • Metacognitive Monitoring: Self-assessment and confidence scoring
  • Usage Tracking: Complete usage logs and statistics
  • Fully Tested: all 6 bundled tests passing (see Testing below)
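The auto mode switching and metacognitive monitoring features above could plausibly combine as confidence-gated fallback: try a fast mode, self-assess, and escalate if the score falls below the threshold. The sketch below assumes this design; `run_mode`, `process_with_fallback`, and the scores are invented for illustration and are not the Skill's API.

```python
import asyncio

# Hypothetical confidence threshold, mirroring the Quick Start value
CONFIDENCE_THRESHOLD = 0.7

async def run_mode(mode: str, task: str) -> dict:
    # Stand-in for a real mode's processing; returns a canned score
    scores = {"OOA": 0.6, "OODA": 0.85}
    await asyncio.sleep(0)  # placeholder for async work
    return {"mode": mode, "confidence": scores.get(mode, 0.5)}

async def process_with_fallback(task: str) -> dict:
    # Try fast experience mode first; escalate to reasoning mode
    # when self-assessed confidence is below the threshold
    result = await run_mode("OOA", task)
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        result = await run_mode("OODA", task)
    return result

result = asyncio.run(process_with_fallback("Analyze user feedback data"))
print(result["mode"])  # OODA (OOA scored 0.6 < 0.7)
```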

File Structure

cognitive-flexibility/
├── scripts/
│   ├── __init__.py
│   ├── chain_reasoner.py       # OODA reasoning
│   ├── pattern_matcher.py      # OOA pattern matching
│   ├── self_assessor.py        # Metacognitive monitoring
│   ├── cognitive_controller.py # Mode switching
│   ├── creative_explorer.py    # OOCA creative mode
│   ├── hypothesis_generator.py # OOHA discovery mode
│   └── usage_monitor.py        # Usage tracking
├── references/
│   └── ooda-guide.md
├── tests/
│   └── test_cognitive_skills.py
├── SKILL.md
├── README.md
└── MONITORING-GUIDE.md

Testing

# Run tests
python tests/test_cognitive_skills.py

# Expected output: 6/6 tests passed (100%)

Monitoring

from scripts.usage_monitor import UsageMonitor

monitor = UsageMonitor()

# Get usage stats
stats = monitor.get_stats(days=7)

# Generate report
report = monitor.generate_report(days=7)
print(report)
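To make the stats and report calls above concrete, here is a minimal sketch of what usage tracking could look like internally. `MiniUsageLog` is a made-up stand-in; the Skill's actual UsageMonitor internals are not shown in this listing.

```python
import time
from collections import Counter

class MiniUsageLog:
    """Toy usage log: record mode invocations, summarize them."""

    def __init__(self):
        self.events = []

    def record(self, mode: str, success: bool):
        # One event per processed task
        self.events.append({"ts": time.time(), "mode": mode,
                            "success": success})

    def stats(self) -> dict:
        # Aggregate counts per mode and overall success rate
        by_mode = Counter(e["mode"] for e in self.events)
        ok = sum(e["success"] for e in self.events)
        rate = ok / len(self.events) if self.events else 0.0
        return {"total": len(self.events), "by_mode": dict(by_mode),
                "success_rate": rate}

log = MiniUsageLog()
log.record("OODA", True)
log.record("OOA", False)
print(log.stats())
```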

Requirements

  • Python >= 3.8
  • OpenClaw >= 2026.3.28
  • No external dependencies

License

MIT License

Support

  • Documentation: See README.md and MONITORING-GUIDE.md
  • Issues: GitHub Issues
  • Community: Discord #skills-feedback

DaoShi · Cognitive Flexibility Skill v2.1.0

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Automation

MARL — Multi-stage Reasoning Middleware

Multi-stage multi-agent reasoning middleware that reduces LLM hallucination by 70%+. 9 specialized emergence engines for invention, creative, pharma, genomic...

Automation

AI Benchmark — Measure How Your Agent Thinks

Experiential benchmark for AI reasoning — measures calibration, epistemic flexibility, risk assessment, and metacognition through interactive concert experie...

Automation

Self-Evolution Cognition

Core self-evolving cognition framework: an agent self-evolution operating system based on the SOUL Five Laws

Research

Thinking Framework

Loads any thinker's, leader's, philosopher's, or organization's complete mental operating system directly into the AI — so the AI reasons FROM inside that co...
