spice-accelerators
Choose and configure the right acceleration engine — Arrow, DuckDB, SQLite, Cayenne, PostgreSQL, or Turso. Use this skill whenever the user needs to pick an accelerator engine, compare engines (e.g. "should I use DuckDB or Cayenne?"), configure engine-specific parameters (duckdb_file, sqlite_file), tune memory vs file mode, or understand engine capabilities and limitations. This skill is the engine selection and tuning guide. For the broader acceleration feature (refresh modes, retention, snapshots, indexes), see spice-acceleration.
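As a sketch, engine selection lives in a dataset's `acceleration:` block; the source, name, and file path below are illustrative:

```yaml
datasets:
  - from: postgres:public.orders   # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: duckdb        # arrow (default), duckdb, sqlite, postgres, ...
      mode: file            # memory (default) or file
      params:
        duckdb_file: /data/orders.db   # engine-specific parameter
```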
spice-data-connector
Configure individual data source connectors in Spice — PostgreSQL, MySQL, S3, Databricks, Snowflake, DuckDB, GitHub, Kafka, and 25+ more. Use this skill whenever the user wants to add a dataset, connect to a specific database or data source, load data from S3 or files, configure connector-specific parameters, understand file formats (Parquet, CSV, PDF, DOCX), or set up hive partitioning. This skill is the reference for the `from:` and `params:` fields in dataset configuration. For cross-source federation, views, and catalogs, see spice-connect-data.
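A minimal sketch of the `from:`/`params:` shape for a file-based connector; the bucket path is illustrative and exact parameter names should be confirmed in the skill:

```yaml
datasets:
  - from: s3://my-bucket/events/   # illustrative bucket and prefix
    name: events
    params:
      file_format: parquet
      hive_partitioning_enabled: true   # assumed param name; verify in the skill
```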
spice-acceleration
Accelerate data locally for sub-second query performance — the feature and its configuration. Use this skill whenever the user asks about data acceleration concepts, enabling acceleration on a dataset, choosing refresh modes (full, append, changes, caching), configuring retention policies, setting up snapshots for cold-start, adding indexes and constraints, or understanding the difference between federated and accelerated queries. This skill covers the "what and why" of acceleration. For choosing which acceleration engine to use (Arrow vs DuckDB vs SQLite vs Cayenne), see spice-accelerators.
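As a sketch of the feature-level configuration this skill covers (refresh mode and interval on an accelerated dataset; source and interval values are illustrative):

```yaml
datasets:
  - from: postgres:public.events   # illustrative source
    name: events
    acceleration:
      enabled: true
      refresh_mode: append           # full | append | changes
      refresh_check_interval: 10s    # illustrative interval
```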
spice-ai
Add AI and LLM capabilities to Spice — tools, NSQL (text-to-SQL), memory, model routing/workers, and evals. Use this skill whenever the user wants to enable LLM tools (SQL, search, memory, MCP, web search), set up text-to-SQL via /v1/nsql, add persistent conversational memory, configure model routing with workers (load balancing, fallback, weighted distribution), set up evals, or use the OpenAI-compatible chat API. This skill covers AI features and orchestration. For configuring individual model providers (OpenAI, Anthropic, etc.), see spice-models.
spice-caching
Configure Spice.ai in-memory result caching for SQL queries, search results, and embeddings. Use this skill whenever the user asks about caching configuration, tuning cache TTL or max size, choosing eviction policies (LRU vs TinyLFU), enabling stale-while-revalidate, setting up cache-control headers, using custom cache keys (Spice-Cache-Key), monitoring cache metrics, choosing between plan vs SQL cache key types, or enabling zstd compression for cached results. Also use when the user asks why they're getting MISS/STALE responses or wants to optimize cache hit rates.
spice-cloud-management
Manage Spice.ai Cloud resources via the Management API — apps, deployments, secrets, API keys, and org members. Use this skill whenever the user wants to create or manage a Spice.ai Cloud app, trigger a deployment, manage cloud secrets or API keys, list regions or runtime versions, add/remove org members, or automate any Spice.ai Cloud operation. Also use when the user mentions "spice.ai cloud", "deploy to spice", "cloud API", or wants to use the Spice.ai hosted platform. For infrastructure-as-code with Terraform, see spice-terraform.
spice-connect-data
Connect Spice to data sources and query across them with federated SQL — including datasets, catalogs, views, and writes. Use this skill whenever the user wants to set up federated queries across multiple sources, create views, configure catalogs (Unity Catalog, Databricks, Iceberg), write data with INSERT INTO, or understand how Spice's query federation works. This skill focuses on the federation layer — cross-source joins, views, catalogs, and data writes. For configuring individual data source connectors (PostgreSQL params, S3 file formats, etc.), see spice-data-connector.
spice-models
Configure AI/LLM model providers and connections in Spice — OpenAI, Anthropic, Azure, Google, xAI, Bedrock, Perplexity, Databricks, HuggingFace, and local GGUF models. Use this skill whenever the user wants to add a model, configure a specific LLM provider, set up an OpenAI-compatible endpoint (e.g. Groq, Ollama), serve a local model, configure system prompts, set parameter overrides (temperature, response format), or understand which providers are available. This skill is the model connector reference. For AI features like tools, memory, workers, and NSQL, see spice-ai.
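A minimal sketch of a model connector entry, assuming the common `provider:model_id` form for `from:`; the model name and secret key are illustrative:

```yaml
models:
  - from: openai:gpt-4o-mini       # provider:model_id (illustrative model)
    name: assistant
    params:
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }  # illustrative key name
```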
spice-search
Search data using vector similarity, full-text keywords, or hybrid methods with Reciprocal Rank Fusion (RRF). Use this skill whenever the user wants to set up semantic search, full-text search, or hybrid search in Spice — including configuring embedding models and providers, enabling full_text_search on columns, writing vector_search/text_search/rrf SQL queries, using the /v1/search HTTP API, configuring vector engines (S3 Vectors), tuning RRF parameters (rank_weight, recency_decay), or setting up chunking for long documents. Also use when the user asks about search relevance, BM25 scoring, or embedding configuration.
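As a rough sketch of the query surface named above (table, columns, and exact function signatures are illustrative; the skill documents the real ones):

```sql
-- illustrative only: consult the skill for exact signatures and result columns
SELECT id, chunk, score
FROM vector_search(docs, 'how do I enable result caching?')
LIMIT 10;
```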
spice-secrets
Configure secret stores in Spice — environment variables, Kubernetes, AWS Secrets Manager, and OS keyring. Use this skill whenever the user needs to manage credentials, API keys, passwords, or tokens in Spice, reference secrets in spicepod.yaml params with ${ store:KEY } syntax, set up .env files, configure secret store precedence, or understand how the `secrets:` section works. Also use when the user asks how to pass database passwords or API keys securely to Spice datasets or models.
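A minimal sketch of the pattern this skill covers — declaring a store and referencing a secret from dataset params (store and key names are illustrative):

```yaml
secrets:
  - from: env        # load secrets from environment variables / .env
    name: env
datasets:
  - from: postgres:public.orders   # illustrative dataset
    name: orders
    params:
      pg_pass: ${ env:PG_PASS }    # secret reference in params
```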
spice-setup
Get started with Spice.ai — install the runtime, initialize a project, run the runtime, and use the CLI. Use this skill whenever the user mentions installing Spice, setting up a new Spice project, running `spice run`, or looking up CLI commands, API endpoints, or deployment models. Also use when the user asks "how do I install Spice", "how do I start Spice", "what CLI commands does Spice have", or any question about Spice runtime setup and configuration basics.

spice-terraform
Manage Spice.ai Cloud infrastructure as code with Terraform or OpenTofu using the spiceai/spiceai provider. Use this skill whenever the user wants to write Terraform/OpenTofu configs for Spice apps, deployments, secrets, or org members, import existing Spice.ai resources into Terraform state, set up OAuth authentication for the provider, or use Terraform data sources for regions and container images. Also use when the user mentions "terraform" and "spice" together, or wants IaC for their Spice.ai Cloud infrastructure. For direct API management without Terraform, see spice-cloud-management.
spice-text-to-sql
Generate accurate SQL for Spice.ai's Apache DataFusion engine (PostgreSQL dialect), and build text-to-SQL workflows. Use this skill whenever the user wants to write SQL queries against Spice datasets, convert natural language to SQL, debug SQL errors, understand Spice/DataFusion data types and type casting, use Spice-specific functions (ai, embed, vector_search, text_search, rrf, JSON operators), build a text-to-SQL pipeline with schema introspection, or construct prompts for LLM-based SQL generation. Also use when the user hits SQL errors like "table not found", "cannot cast", or asks about DataFusion SQL dialect differences from PostgreSQL/MySQL.
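As a sketch of the dialect this skill targets, a DataFusion query using PostgreSQL-style casts and interval arithmetic (the table name is illustrative):

```sql
-- illustrative table; DataFusion accepts PostgreSQL-style :: casts
SELECT created_at::date AS day, COUNT(*) AS n
FROM orders
WHERE created_at > now() - INTERVAL '7 days'
GROUP BY 1
ORDER BY 1;
```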
spicepod-config
Create and configure Spicepod manifests (spicepod.yaml) — the central configuration file for Spice applications. Use this skill whenever the user wants to create a new spicepod.yaml from scratch, understand the overall spicepod structure and available sections, configure runtime settings (ports, caching, telemetry/observability), set up a complete Spice application combining datasets + models + search, or understand deployment models and use cases. This is the "glue" skill that shows how all Spice components fit together in one manifest. For details on specific sections (datasets, models, search, etc.), see the dedicated skills.