RAG Pipeline Builder

Build retrieval-augmented generation systems that ground LLM responses in your data

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Copy the following and send it to your AI assistant to install the skill:

Install skill "RAG Pipeline Builder" with this command: npx skills add jmsktm/claude-settings/jmsktm-claude-settings-rag-pipeline-builder

RAG Pipeline Builder

The RAG Pipeline Builder skill guides you through designing and implementing Retrieval-Augmented Generation systems that enhance LLM responses with relevant context from your own data. RAG combines the power of large language models with the precision of information retrieval, reducing hallucinations and enabling AI to work with private, current, or domain-specific knowledge.

This skill covers the complete RAG stack: document ingestion, chunking strategies, embedding generation, vector storage, retrieval optimization, context injection, and response generation. It helps you make informed decisions at each stage based on your specific requirements for accuracy, latency, cost, and scale.

Whether you are building a documentation Q&A bot, a customer support system, or an enterprise knowledge assistant, this skill ensures your RAG implementation follows production best practices.

Core Workflows

Workflow 1: Design RAG Architecture

  1. Define requirements:
    • Data sources and formats
    • Query types and patterns
    • Accuracy requirements
    • Latency budget
    • Scale expectations
  2. Choose components:
    • Document loaders
    • Chunking strategy
    • Embedding model
    • Vector database
    • LLM for generation
    • Reranking layer (optional)
  3. Design the data flow (a code sketch follows this list):
    Ingestion:  Documents → Loader → Chunker → Embedder → Vector DB
    Retrieval:  Query → Embedder → Vector Search → Reranker → Context
    Generation: Context + Query → LLM → Response
    
  4. Document architecture decisions
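
A minimal sketch of the data flow from step 3; every name here (load_documents, smart_chunk, embed, vector_db, rerank, llm) is a hypothetical stand-in for whatever concrete component you pick in step 2:

def ingest(paths):
    # Documents → Loader → Chunker → Embedder → Vector DB
    for doc in load_documents(paths):
        for chunk in smart_chunk(doc):
            vector_db.upsert(vector=embed(chunk.text), metadata=chunk.metadata)

def answer(query, k=20, top_n=5):
    # Query → Embedder → Vector Search → Reranker → Context → LLM → Response
    hits = vector_db.search(embed(query), k=k)
    context = rerank(query, hits)[:top_n]
    return llm.generate(query=query, context=context)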

Workflow 2: Implement Ingestion Pipeline

  1. Set up document loaders:
    • PDF, Markdown, HTML parsers
    • API connectors for live sources
    • Incremental update handling
  2. Implement chunking:
    def smart_chunk(doc, chunk_size=500, overlap=50):
        # extract_sections, sliding_window, and add_metadata are
        # application-specific helpers: structural splitting, fixed-size
        # windowing with overlap, and metadata tagging, respectively.
        sections = extract_sections(doc)  # respect document structure
        chunks = []
        for section in sections:
            if len(section) > chunk_size:
                # Oversized section: fall back to overlapping windows
                chunks.extend(sliding_window(section, chunk_size, overlap))
            else:
                # Small enough to keep as one semantic unit
                chunks.append(section)
        return add_metadata(chunks, doc)  # attach source, section, position
    
  3. Generate embeddings with batching (sketch after this list)
  4. Store in vector database with metadata
  5. Verify ingestion quality
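
Step 3 calls for batched embedding generation. Here is a minimal sketch using sentence-transformers as one example backend (the model name is just a common default; any batch-capable embedding API follows the same pattern):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

def embed_chunks(chunks, batch_size=64):
    texts = [chunk["text"] for chunk in chunks]
    # encode() batches internally; batch_size trades throughput for memory
    return model.encode(texts, batch_size=batch_size, show_progress_bar=True)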

Workflow 3: Optimize Retrieval Quality

  1. Measure baseline retrieval performance (metric sketch after this list):
    • Recall@k for known queries
    • Mean Reciprocal Rank (MRR)
    • Relevance scoring
  2. Apply optimization techniques:
    • Query expansion/rewriting
    • Hybrid search (semantic + keyword)
    • Reranking with cross-encoders
    • Metadata filtering
  3. Tune retrieval parameters:
    • Number of chunks to retrieve (k)
    • Similarity threshold
    • Diversity/MMR settings
  4. Validate improvements with test set
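
A plain-Python sketch of the step 1 baseline metrics, assuming a labeled test set of (query, relevant_ids) pairs and a hypothetical retrieve(query, k) that returns ranked chunk IDs:

def recall_at_k(retrieved, relevant, k):
    # Fraction of the known-relevant chunks found in the top k results
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def reciprocal_rank(retrieved, relevant):
    # 1/rank of the first relevant hit, 0.0 if none was retrieved
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def evaluate_baseline(test_set, k=10):
    recalls, rrs = [], []
    for query, relevant in test_set:
        retrieved = retrieve(query, k=50)  # hypothetical retriever call
        recalls.append(recall_at_k(retrieved, relevant, k))
        rrs.append(reciprocal_rank(retrieved, relevant))
    return {"recall@k": sum(recalls) / len(recalls),
            "mrr": sum(rrs) / len(rrs)}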

Quick Reference

Action                   Command/Trigger
Design RAG system        "Help me design a RAG pipeline for [use case]"
Choose vector DB         "Which vector database for RAG"
Optimize chunking        "Best chunking strategy for [content type]"
Improve retrieval        "My RAG has poor retrieval quality"
Reduce hallucinations    "RAG still hallucinating, help fix"
Scale pipeline           "Scale RAG to [X] documents"

Best Practices

  • Chunk at Semantic Boundaries: Preserve meaning in chunks

    • Good: Split at paragraphs, sections, or topic boundaries
    • Bad: Fixed-size splits that cut sentences mid-thought
    • Include section headers as context in chunks
  • Include Rich Metadata: Enable filtering and context

    • Source document, section, page number
    • Timestamps for temporal relevance
    • Categories, tags, or topics
    • Use metadata filters before semantic search
  • Use Hybrid Search: Combine semantic and keyword search (fused in the first sketch after this list)

    • Semantic: Captures meaning and synonyms
    • Keyword (BM25): Catches exact terms, names, codes
    • Weight combination based on query type
  • Rerank for Quality: Two-stage retrieval improves precision

    • Stage 1: Fast vector search (retrieve 20-50)
    • Stage 2: Cross-encoder reranking (keep top 5-10)
    • Reranking is slower but much more accurate
  • Show Your Work: Include citations and sources

    • Return source chunks with responses
    • Enable users to verify and explore
    • Build trust through transparency
  • Handle Edge Cases: What happens when retrieval fails? (fallback sketch after this list)

    • No relevant results found
    • Conflicting information in sources
    • Query outside knowledge base scope
    • Implement graceful fallbacks
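
The metadata, hybrid-search, and reranking practices above combine into a two-stage retriever. In this sketch, CrossEncoder comes from sentence-transformers (the model name is one common choice); vector_search, bm25_search, and get_chunk are hypothetical helpers; and Reciprocal Rank Fusion is used as one standard way to merge rankings whose scores live on different scales:

from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # example model

def rrf_fuse(rankings, c=60):
    # Reciprocal Rank Fusion: merges ranked ID lists without requiring
    # BM25 and cosine scores to share a scale
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (c + rank)
    return sorted(scores, key=scores.get, reverse=True)

def two_stage_search(query, filters=None, fetch_k=50, top_n=5):
    # Stage 1: fast candidate generation, metadata filters applied first
    semantic = vector_search(query, k=fetch_k, filters=filters)
    keyword = bm25_search(query, k=fetch_k, filters=filters)
    candidates = rrf_fuse([semantic, keyword])[:fetch_k]

    # Stage 2: slow but precise cross-encoder rescoring of the candidates
    docs = [get_chunk(doc_id) for doc_id in candidates]
    scores = reranker.predict([(query, doc.text) for doc in docs])
    ranked = sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:top_n]]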
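And a minimal fallback wrapper for the edge cases in the last bullet; retrieve returning scored hits, llm.generate, and the 0.5 threshold are all hypothetical placeholders to tune:

MIN_SCORE = 0.5  # placeholder threshold; tune against a test set

def answer_with_fallback(query):
    hits = retrieve(query)  # hypothetical retriever returning scored hits
    if not hits or max(hit.score for hit in hits) < MIN_SCORE:
        # Out-of-scope or no-result queries: admit it rather than
        # letting the LLM guess
        return "I couldn't find an answer to that in the knowledge base."
    return llm.generate(query=query, context=hits)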

Advanced Techniques

Multi-Index Strategy

Use different indexes for different content types:

Index 1: FAQs (short, self-contained)
Index 2: Documentation (long-form, structured)
Index 3: Conversations (temporal, contextual)

Route queries to appropriate index based on intent
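
A sketch of that routing step, where classify_intent is a hypothetical helper (a small classifier or an LLM call) and each index has its own retriever object:

INDEXES = {
    "faq": faq_retriever,              # short, self-contained answers
    "docs": docs_retriever,            # long-form, structured documentation
    "conversation": convo_retriever,   # temporal, contextual history
}

def route_and_search(query, k=10):
    intent = classify_intent(query)                  # e.g. "faq", "docs", ...
    retriever = INDEXES.get(intent, docs_retriever)  # sensible default
    return retriever.search(query, k=k)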

Query Transformation Pipeline

Improve retrieval with query processing:

def transform_query(query):
    # classify_query, extract_entities, and the generate_* helpers are
    # application-specific (an LLM call or a small classifier both work).

    # Step 1: Classify the query type
    query_type = classify_query(query)

    # Step 2: Extract entities (names, IDs, codes) for keyword search
    entities = extract_entities(query)

    # Step 3: Generate search queries suited to the query type
    if query_type == "factual":
        return generate_keyword_queries(query, entities)
    elif query_type == "conceptual":
        return generate_semantic_queries(query)
    else:
        return [query]  # Use the raw query as-is

Contextual Compression

Reduce noise in retrieved context:

Retrieved chunks (verbose) → LLM compressor → Relevant excerpts only
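
One way to implement the compressor, assuming a generic llm.complete(prompt) interface (hypothetical):

COMPRESS_PROMPT = """Extract only the sentences from the passage that are
relevant to the question. If nothing is relevant, reply with NONE.

Question: {query}

Passage:
{chunk}"""

def compress_context(query, chunks):
    excerpts = []
    for chunk in chunks:
        out = llm.complete(COMPRESS_PROMPT.format(query=query, chunk=chunk.text))
        if out.strip() != "NONE":
            excerpts.append(out.strip())
    return "\n\n".join(excerpts)  # compact context for the generation prompt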

Agentic RAG

Let the LLM control retrieval:

def agentic_rag(query):
    # llm and retriever are illustrative interfaces: the LLM plans the
    # searches (query text plus metadata filters), then writes the answer.

    # LLM decides what to search for
    search_plan = llm.plan_searches(query)

    # Execute each planned search
    results = []
    for search in search_plan:
        results.extend(retriever.search(search.query, filters=search.filters))

    # LLM synthesizes an answer grounded in the combined results
    return llm.synthesize(query, results)

Evaluation Framework

Continuously measure RAG quality:

Metrics:
- Retrieval: Precision@k, Recall@k, MRR
- Generation: Faithfulness, Answer Relevance, Context Utilization
- End-to-end: Task Success Rate, User Satisfaction

Tools: Ragas, TruLens, LangSmith
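
For example, a minimal Ragas run looks roughly like this (shown with the 0.1-style API; the library evolves, so check its current docs):

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

eval_set = Dataset.from_dict({
    "question": ["How do I rotate an API key?"],           # sample data
    "answer":   ["Rotate keys from Settings → API Keys."],
    "contexts": [["API keys can be rotated under Settings → API Keys ..."]],
})

scores = evaluate(eval_set, metrics=[faithfulness, answer_relevancy])
print(scores)  # track per-metric scores across pipeline changes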

Common Pitfalls to Avoid

  • Chunking too large (loses specificity) or too small (loses context)
  • Not preserving document structure and hierarchy in chunks
  • Ignoring keyword search when exact matches matter
  • Retrieving too few chunks (missing information) or too many (context dilution)
  • Not handling conflicting information across sources
  • Assuming LLM will always use retrieved context correctly
  • Skipping evaluation and monitoring in production
  • Not updating embeddings when source documents change

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

• MemSync Dual Memory System (Automation): Logs all OpenClaw conversations and events with role tags, saving to JSONL and Memvid for full context search and monthly sharded or single-file storage.

• RAG Engineering (General): Expert guidance to build, optimize, and debug production-ready Retrieval-Augmented Generation (RAG) systems using a complete methodology from architecture to...

• RAG System Builder (General): Build and deploy local RAG (Retrieval-Augmented Generation) systems with offline document processing, embedding models, and vector storage.
