spice-setup

Get started with Spice.ai: install the runtime, initialize a project, start the runtime, and use the CLI. Use this skill when setting up a new Spice project, installing Spice, running spice run, looking up CLI commands, API endpoints, or deployment models, or creating a spicepod.yaml.


Getting Started with Spice

Spice is an open-source SQL query, search, and LLM-inference engine written in Rust. It federates queries across 30+ data sources, accelerates data locally, and integrates search and AI — all configured declaratively in YAML.

Spice is not a replacement for PostgreSQL/MySQL (use those for transactional workloads) or a data warehouse (use Snowflake/Databricks for centralized analytics). Think of it as the operational data & AI layer between your applications and your data infrastructure.

Install

macOS / Linux / WSL

curl https://install.spiceai.org | /bin/bash

Homebrew

brew install spiceai/spiceai/spice

Windows (PowerShell)

iex ((New-Object System.Net.WebClient).DownloadString("https://install.spiceai.org/Install.ps1"))

Verify & Upgrade

spice version
spice upgrade

If the command is not found, add Spice to your PATH: export PATH="$PATH:$HOME/.spice/bin"

Quick Start

spice init my_app
cd my_app
spice run

In another terminal:

spice sql
sql> show tables;
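The spice init step scaffolds a minimal spicepod.yaml in the project directory. For my_app it looks roughly like the following sketch, matching the manifest skeleton below (the generated file may differ slightly by version):

```yaml
version: v1
kind: Spicepod
name: my_app
```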

Spicepod Configuration (spicepod.yaml)

The Spicepod manifest defines all components for a Spice application:

version: v1
kind: Spicepod
name: my_app

secrets:
  - from: env
    name: env

datasets:
  - from: <connector>:<path>
    name: <dataset_name>

models:
  - from: <provider>:<model>
    name: <model_name>

embeddings:
  - from: <provider>:<model>
    name: <embedding_name>

All Sections

| Section | Purpose | Skill |
|---|---|---|
| datasets | Data sources for SQL queries | spice-connect-data |
| models | LLM/ML models for inference | spice-ai |
| embeddings | Embedding models for vector search | spice-search |
| secrets | Secure credential management | spice-secrets |
| catalogs | External data catalog connections | spice-connect-data |
| views | Virtual tables from SQL queries | spice-connect-data |
| tools | LLM function calling capabilities | spice-ai |
| workers | Model load balancing and routing | spice-ai |
| runtime | Server ports, caching, telemetry | spice-caching |
| snapshots | Acceleration snapshot management | spice-acceleration |
| evals | Model evaluation definitions | spice-ai |
| dependencies | Dependent Spicepods | (below) |

Dependencies

dependencies:
  - lukekim/demo
  - spiceai/quickstart

CLI Commands

| Command | Description |
|---|---|
| spice init <name> | Initialize a new Spicepod |
| spice run | Start the Spice runtime |
| spice sql | Start interactive SQL REPL |
| spice chat | Start chat REPL (requires model) |
| spice search | Perform embeddings-based search |
| spice add <spicepod> | Add a Spicepod dependency |
| spice datasets | List loaded datasets |
| spice models | List loaded models |
| spice catalogs | List loaded catalogs |
| spice status | Show runtime status |
| spice refresh <dataset> | Refresh an accelerated dataset |
| spice login | Log in to the Spice.ai Platform |
| spice version | Show CLI and runtime version |
| spice upgrade | Upgrade CLI to latest version |

Runtime Endpoints

| Service | Default Address | Protocol |
|---|---|---|
| HTTP API | http://127.0.0.1:8090 | REST, OpenAI-compatible |
| Arrow Flight | 127.0.0.1:50051 | Arrow Flight / Flight SQL |
| Metrics | 127.0.0.1:9090 | Prometheus |
| OpenTelemetry | 127.0.0.1:50052 | OTLP gRPC |

HTTP API Paths

| Path | Description |
|---|---|
| POST /v1/sql | Execute SQL query |
| POST /v1/search | Embeddings-based search |
| POST /v1/nsql | Natural language to SQL |
| POST /v1/chat/completions | OpenAI-compatible chat |
| POST /v1/embeddings | Generate embeddings |
| GET /v1/datasets | List datasets |
| GET /v1/models | List models |
| GET /health | Health check |
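As a sketch of calling the HTTP API, the snippet below builds (but does not send) a POST /v1/sql request against the default address. It assumes the endpoint accepts raw SQL text as the request body; check the runtime's API docs for the exact content type.

```python
import urllib.request

BASE = "http://127.0.0.1:8090"  # default HTTP API address from the table above

def sql_request(query: str) -> urllib.request.Request:
    # Build a POST /v1/sql request; assumption: the body is the raw SQL text.
    return urllib.request.Request(
        f"{BASE}/v1/sql",
        data=query.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

req = sql_request("SELECT 1")
print(req.full_url)  # -> http://127.0.0.1:8090/v1/sql
# urllib.request.urlopen(req) would execute it against a running runtime.
```

With the runtime started via spice run, passing the built request to urllib.request.urlopen returns the query result.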

Deployment Models

Spice ships as a single ~140MB binary with no external dependencies.

| Model | Best For |
|---|---|
| Standalone | Development, edge devices, simple workloads |
| Sidecar | Low-latency access, microservices |
| Microservice | Heavy or varying traffic behind a load balancer |
| Cluster | Large-scale data, horizontal scaling |
| Cloud | Auto-scaling, built-in observability (Spice.ai Cloud) |

Use Cases

| Use Case | How Spice Helps |
|---|---|
| Operational Data Lakehouse | Serve real-time workloads from Iceberg/Delta/Parquet with sub-second latency |
| Data Lake Accelerator | Accelerate queries from seconds to milliseconds locally |
| Enterprise Search | Combine semantic and full-text search across data |
| RAG Pipelines | Federated data + vector search + LLMs |
| Agentic AI | Tool-augmented LLMs with fast data access |
| Real-Time Analytics | Stream from Kafka/DynamoDB with sub-second latency |

Full Example

version: v1
kind: Spicepod
name: ai_app

secrets:
  - from: env
    name: env

embeddings:
  - from: openai:text-embedding-3-small
    name: embed
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }

datasets:
  - from: postgres:public.users
    name: users
    params:
      pg_host: localhost
      pg_user: ${ env:PG_USER }
      pg_pass: ${ env:PG_PASS }
    acceleration:
      enabled: true
      engine: duckdb
      refresh_check_interval: 5m

  - from: memory:store
    name: llm_memory
    access: read_write

models:
  - from: openai:gpt-4o
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      tools: auto, memory, search
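Once this Spicepod is running, the assistant model is reachable through the runtime's OpenAI-compatible chat endpoint. The sketch below builds such a request; the model name matches the spicepod above, the question text is illustrative, and the default HTTP port (8090) is assumed.

```python
import json
import urllib.request

# Hypothetical question; "assistant" is the model name from the spicepod above.
payload = {
    "model": "assistant",
    "messages": [{"role": "user", "content": "How many users signed up this week?"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:8090/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the runtime started via spice run (and OPENAI_API_KEY set):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library should work the same way by pointing its base URL at the runtime's HTTP address.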
