vector-database-engineer

Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applications, recommendation systems, and similarity search.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

npx skills add sickn33/antigravity-awesome-skills/sickn33-antigravity-awesome-skills-vector-database-engineer

Vector Database Engineer

Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applications, recommendation systems, and similarity search. Use PROACTIVELY for vector search implementation, embedding optimization, or semantic retrieval systems.

Do not use this skill when

  • The task is unrelated to vector databases, embeddings, or semantic search
  • You need a different domain or tool outside this scope

Instructions

  • Clarify goals, constraints, and required inputs.
  • Apply relevant best practices and validate outcomes.
  • Provide actionable steps and verification.
  • If detailed examples are required, open resources/implementation-playbook.md.

Capabilities

  • Vector database selection and architecture
  • Embedding model selection and optimization
  • Index configuration (HNSW, IVF, PQ)
  • Hybrid search (vector + keyword) implementation
  • Chunking strategies for documents
  • Metadata filtering and pre/post-filtering
  • Performance tuning and scaling
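To make the metadata-filtering capability concrete, here is a minimal pure-Python sketch of pre-filtering: candidates are restricted by metadata before similarity scoring, which shrinks the set a real index would have to scan. The brute-force cosine search, the record layout, and the `lang` field are illustrative assumptions, not any particular database's API.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, records, top_k=2, metadata_filter=None):
    # Pre-filter: drop non-matching records BEFORE scoring, so the
    # similarity computation only runs over the reduced candidate set.
    candidates = [r for r in records
                  if metadata_filter is None or metadata_filter(r["meta"])]
    scored = sorted(candidates,
                    key=lambda r: cosine(query, r["vec"]),
                    reverse=True)
    return scored[:top_k]

records = [
    {"id": 1, "vec": [1.0, 0.0], "meta": {"lang": "en"}},
    {"id": 2, "vec": [0.9, 0.1], "meta": {"lang": "de"}},
    {"id": 3, "vec": [0.0, 1.0], "meta": {"lang": "en"}},
]

hits = search([1.0, 0.0], records, top_k=1,
              metadata_filter=lambda m: m["lang"] == "en")
```

Post-filtering (scoring first, filtering afterwards) is the alternative; it keeps the index query simple but can return fewer than `top_k` results after the filter is applied.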

Use this skill when

  • Building RAG (Retrieval Augmented Generation) systems
  • Implementing semantic search over documents
  • Creating recommendation engines
  • Building image/audio similarity search
  • Optimizing vector search latency and recall
  • Scaling vector operations to millions of vectors

Workflow

  1. Analyze data characteristics and query patterns
  2. Select appropriate embedding model
  3. Design chunking and preprocessing pipeline
  4. Choose vector database and index type
  5. Configure metadata schema for filtering
  6. Implement hybrid search if needed
  7. Optimize for latency/recall tradeoffs
  8. Set up monitoring and reindexing strategies
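The workflow steps above can be sketched end to end. This is a toy pipeline under loud assumptions: `embed` is a deterministic hash-based stand-in for a real embedding model, the list `index` stands in for a real vector index (HNSW/IVF), and the chunk sizes are arbitrary.

```python
def embed(text, dim=8):
    # Step 2 stand-in: deterministic hash-style vector, NOT a real
    # embedding model. Swap in an actual model in practice.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = sum(v * v for v in vec) ** 0.5
    return [v / norm for v in vec] if norm else vec

def chunk(text, size=40, overlap=10):
    # Step 3: overlapping character chunks preserve context at boundaries.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

index = []  # Step 4 stand-in for a real vector index (HNSW/IVF).
doc = "Vector databases store embeddings and answer nearest-neighbour queries."
for c in chunk(doc):
    index.append({"text": c, "vec": embed(c), "meta": {"source": "doc1"}})

def query(q, top_k=2):
    # Steps 5-7 collapse to a dot-product scan over the toy index.
    qv = embed(q)
    scored = sorted(index,
                    key=lambda r: sum(a * b for a, b in zip(qv, r["vec"])),
                    reverse=True)
    return [r["text"] for r in scored[:top_k]]
```

Monitoring and reindexing (step 8) have no analogue here; in production you would track recall against a labeled query set and rebuild the index when the embedding model or corpus changes.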

Best Practices

  • Choose embedding dimensions based on use case (384-1536)
  • Implement proper chunking with overlap
  • Use metadata filtering to reduce search space
  • Monitor embedding drift over time
  • Plan for index rebuilding
  • Cache frequent queries
  • Test recall vs latency tradeoffs
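Two of the practices above, chunking with overlap and caching frequent queries, can be sketched briefly. The overlap values are illustrative, and `real_search` is a hypothetical backend call, not a real client API.

```python
from functools import lru_cache

def chunk_with_overlap(tokens, size=5, overlap=2):
    # Overlap keeps content spanning a chunk boundary retrievable
    # from either neighbouring chunk.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, len(tokens), step)
            if tokens[i:i + size]]

def real_search(query_text):
    # Hypothetical stand-in for the actual vector-database lookup.
    return [f"result for {query_text}"]

@lru_cache(maxsize=1024)
def cached_search(query_text):
    # Identical query strings skip the vector lookup entirely.
    # Results are returned as a tuple so they are hashable/cacheable.
    return tuple(real_search(query_text))
```

A cache like this helps most when query traffic is skewed toward a small set of popular queries; remember to invalidate it when the index is rebuilt.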

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

  • docker-expert — No summary provided by upstream source.
  • nextjs-supabase-auth — No summary provided by upstream source.
  • nextjs-best-practices — No summary provided by upstream source.