
# Prompt Library

The prompt library provides a versioned, searchable catalog of reusable prompt templates. Agents can discover and use prompts from the library, and operators can seed, query, and manage prompts via CLI tools. Prompt quality is tracked through a feedback mechanism.

Source: `internal/promptlib/*`

## Overview

The prompt library provides a versioned, searchable catalog of prompt templates that agents can discover and reuse instead of constructing prompts from scratch. It uses content-hashed immutable versioning (mirroring the tool registry pattern), embedding-based semantic search via the existing vector infrastructure, and a salience-inspired ranking formula to surface the most effective prompts.

When enabled (`CRUVERO_PROMPTLIB_ENABLED=true`), agents gain two tools, `prompt_search` and `prompt_create`, which appear in the normal tool palette alongside the memory and registry tools.

## Prompt Types

| Type | Constant | Description |
| --- | --- | --- |
| System | `system` | System-level instructions for the LLM |
| User | `user` | User-facing prompt templates |
| Task | `task` | Task decomposition and planning prompts |
| Repair | `repair` | Error recovery and repair prompts |
| Routing | `routing` | Agent routing and delegation prompts |
| Tool Description | `tool_description` | Tool usage instruction prompts |
| Chain of Thought | `chain_of_thought` | Reasoning chain prompts |
| Custom | `custom` | User-defined prompt types |

## Versioning and Immutability

Every prompt version is identified by a SHA-256 content hash computed from `(id, version, content, type)`. The store uses `INSERT ... ON CONFLICT DO NOTHING` with post-insert hash verification, the same pattern as the tool registry (`internal/registry/store.go`). A sketch of the hash computation follows the list below.

- Immutable content: once a prompt version is stored, its content cannot change.
- Hash collision detection: if the same `(id, version)` is stored with different content, the store returns an error.
- Version incrementing: new prompt content requires a new version number.
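
A minimal sketch of the hash computation, assuming a NUL-separated serialization of the four fields; the function name and exact encoding are illustrative, not the actual code in `internal/promptlib`:

```go
package promptlib

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// VersionHash derives the immutable identifier for a prompt version.
// Any change to the content yields a different hash, so a stored
// (id, version) pair can never silently change meaning.
func VersionHash(id string, version int, content, promptType string) string {
	// NUL separators avoid ambiguous concatenations such as
	// ("ab", "c") vs ("a", "bc").
	payload := fmt.Sprintf("%s\x00%d\x00%s\x00%s", id, version, content, promptType)
	sum := sha256.Sum256([]byte(payload))
	return hex.EncodeToString(sum[:])
}
```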

## Search Pipeline

Semantic prompt discovery runs as a three-stage pipeline:

### Stage 1: Vector Retrieval

- Embed the query text using the configured embedding provider
- Search the `prompt_library` vector store collection for the top-K candidates (default K=20)
- Apply the tenant isolation filter
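
As a concrete (hypothetical) shape of this stage, the sketch below defines stand-in `Embedder` and `VectorStore` interfaces; they approximate, but are not, Cruvero's actual embedding and vector APIs:

```go
package promptlib

import "context"

// Embedder and VectorStore are illustrative stand-ins for the configured
// embedding provider and the existing vector infrastructure.
type Embedder interface {
	Embed(ctx context.Context, text string) ([]float32, error)
}

type Candidate struct {
	Hash       string  // content hash of the prompt version
	Similarity float64 // cosine similarity returned by the store
}

type VectorStore interface {
	// Search returns the top-k nearest candidates in a collection,
	// restricted to a single tenant.
	Search(ctx context.Context, collection, tenant string, vec []float32, k int) ([]Candidate, error)
}

// retrieve embeds the query and fetches top-K candidates (default K=20)
// from the prompt_library collection with tenant isolation applied.
func retrieve(ctx context.Context, e Embedder, vs VectorStore, tenant, query string, k int) ([]Candidate, error) {
	vec, err := e.Embed(ctx, query)
	if err != nil {
		return nil, err
	}
	return vs.Search(ctx, "prompt_library", tenant, vec, k)
}
```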

### Stage 2: Re-Ranking

Each candidate is scored with a weighted formula; a Go sketch of the computation follows the table.

```
score = W_sim * similarity + W_qual * quality + W_rec * recency + W_use * usage
```

| Weight | Default | Source |
| --- | --- | --- |
| Similarity | 0.4 | Vector cosine similarity from Stage 1 |
| Quality | 0.3 | `success_rate * avg_llm_rating` from `prompt_metrics` |
| Recency | 0.2 | Exponential decay from `created_at` (configurable half-life) |
| Usage | 0.1 | Log-scaled usage frequency |
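
A sketch of the composite score in Go; the exponential-decay and log-scaling functions are assumptions consistent with the table above, not necessarily the exact curves Cruvero uses:

```go
package promptlib

import (
	"math"
	"time"
)

// Weights mirrors the CRUVERO_PROMPTLIB_W_* settings plus the
// CRUVERO_PROMPTLIB_HALF_LIFE recency half-life.
type Weights struct {
	Similarity, Quality, Recency, Usage float64
	HalfLife                            time.Duration
}

func compositeScore(w Weights, similarity, quality float64, age time.Duration, usageCount int64) float64 {
	recency := math.Exp2(-age.Hours() / w.HalfLife.Hours()) // halves every HalfLife
	usage := math.Log1p(float64(usageCount))                // log-scaled usage frequency
	return w.Similarity*similarity + w.Quality*quality + w.Recency*recency + w.Usage*usage
}
```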

### Stage 3: Result Assembly

- Sort by composite score and truncate to the requested limit (default 5)
- Optionally render templates with the provided parameters
- Return scored results with per-component score breakdowns

## Template Rendering

Prompts use Go `text/template` syntax for parameterized content:

```
You are a {{.Role}} agent. Your task is to {{.Task}}.
Available tools: {{range .Tools}}{{.Name}}, {{end}}
```

Parameter definitions (`ParamDef`) specify a name, type, required flag, and default value. The renderer validates parameters before execution and applies defaults for missing optional params.

Supported parameter types: `string`, `int`, `float`, `bool`, `[]string`.
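
Because templates are plain Go `text/template`, rendering can be reproduced with the standard library alone. A runnable sketch using the example template above; `missingkey=error` approximates the required-parameter check (the real renderer's validation is richer):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	const src = "You are a {{.Role}} agent. Your task is to {{.Task}}.\n" +
		"Available tools: {{range .Tools}}{{.Name}}, {{end}}\n"

	type tool struct{ Name string }
	params := map[string]any{
		"Role":  "routing",
		"Task":  "delegate incoming requests to the best sub-agent",
		"Tools": []tool{{Name: "prompt_search"}, {Name: "prompt_create"}},
	}

	// missingkey=error makes execution fail on unresolved map keys,
	// approximating validation of required parameters.
	tmpl := template.Must(template.New("prompt").Option("missingkey=error").Parse(src))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```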

## Feedback System

### LLM Auto-Feedback

After each decision-prompt persistence event, Cruvero can run a feature-gated, low-context background LLM evaluator to score prompt usefulness. Each evaluation records:

- Usage count increment
- Success/failure outcome
- LLM rating (0.0-1.0) written to `prompt_metrics.total_rating` and `prompt_metrics.rating_count`

Evaluator controls:

- `CRUVERO_PROMPT_QUALITY_ENABLED`
- `CRUVERO_PROMPT_QUALITY_TIMEOUT`
- `CRUVERO_PROMPT_QUALITY_MAX_INPUT_BYTES`
- `CRUVERO_PROMPT_QUALITY_MODEL`

### User Feedback

Non-blocking user feedback is accepted via the `prompt-feedback` CLI or the API. Each submission records a rating and an optional comment, contributing to the prompt's running quality average.

Both feedback channels update the `prompt_metrics` table without modifying immutable prompt content.
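
For illustration, the Stage 2 quality component could be derived from these counters as below; the column-to-argument mapping is an assumption based on the fields named in this section:

```go
package promptlib

// qualityScore combines the success rate with the average LLM rating,
// matching the `success_rate * avg_llm_rating` definition from Stage 2.
func qualityScore(successes, uses, ratingCount int64, totalRating float64) float64 {
	if uses == 0 || ratingCount == 0 {
		return 0 // no signal yet; ranking leans on similarity and recency
	}
	successRate := float64(successes) / float64(uses)
	avgLLMRating := totalRating / float64(ratingCount) // individual ratings are 0.0-1.0
	return successRate * avgLLMRating                  // stays in [0, 1]
}
```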

## Agent Tools

### prompt_search

Read-only tool that searches the library by semantic similarity.

Input:

- `query` (required): natural-language search text
- `type`: filter by prompt type
- `tags`: filter by tags
- `params`: template parameters for rendering
- `limit`: max results (default 5)

### prompt_create

Write tool that creates a new prompt in the library.

Input:

- `name` (required): human-readable name
- `type` (required): prompt type category
- `content` (required): template content
- `description`: what the prompt does
- `parameters`: template parameter definitions
- `tags`: tags for search and filtering
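
For reference, the two input schemas map naturally onto Go structs. The JSON tags and the `ParamDef` field names below are assumptions about the wire format, not the library's actual definitions:

```go
package promptlib

// ParamDef mirrors the template parameter definitions from
// "Template Rendering": name, type, required flag, and default.
type ParamDef struct {
	Name     string `json:"name"`
	Type     string `json:"type"` // string, int, float, bool, []string
	Required bool   `json:"required"`
	Default  any    `json:"default,omitempty"`
}

type PromptSearchInput struct {
	Query  string         `json:"query"`            // required
	Type   string         `json:"type,omitempty"`   // filter by prompt type
	Tags   []string       `json:"tags,omitempty"`   // filter by tags
	Params map[string]any `json:"params,omitempty"` // template parameters for rendering
	Limit  int            `json:"limit,omitempty"`  // max results (default 5)
}

type PromptCreateInput struct {
	Name        string     `json:"name"`    // required, human-readable
	Type        string     `json:"type"`    // required prompt type category
	Content     string     `json:"content"` // required template content
	Description string     `json:"description,omitempty"`
	Parameters  []ParamDef `json:"parameters,omitempty"`
	Tags        []string   `json:"tags,omitempty"`
}
```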

## Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `CRUVERO_PROMPTLIB_ENABLED` | `true` | Enable/disable the prompt library |
| `CRUVERO_PROMPTLIB_COLLECTION` | `prompt_library` | Vector store collection name |
| `CRUVERO_PROMPTLIB_SEARCH_K` | `20` | Vector retrieval candidate count |
| `CRUVERO_PROMPTLIB_RESULT_LIMIT` | `5` | Max results returned to the agent |
| `CRUVERO_PROMPTLIB_W_SIMILARITY` | `0.4` | Weight: vector similarity |
| `CRUVERO_PROMPTLIB_W_QUALITY` | `0.3` | Weight: quality score |
| `CRUVERO_PROMPTLIB_W_RECENCY` | `0.2` | Weight: recency decay |
| `CRUVERO_PROMPTLIB_W_USAGE` | `0.1` | Weight: usage frequency |
| `CRUVERO_PROMPTLIB_HALF_LIFE` | `168h` | Recency decay half-life |
| `CRUVERO_PROMPTLIB_FEEDBACK_ENABLED` | `true` | Enable user feedback |
| `CRUVERO_PROMPTLIB_AUTO_FEEDBACK` | `true` | Enable LLM self-assessment |

## CLI Tools

### prompt-seed

Load prompts from YAML/JSON files into the library.

```
prompt-seed --file prompts.yaml --tenant my_tenant
prompt-seed --dir ./prompts/ --dry-run
```

### prompt-query

Search the prompt library from the command line.

```
prompt-query --query "system routing prompt" --type system --limit 3
prompt-query --query "task decomposition" --json
```

### prompt-feedback

Submit user feedback for a prompt.

```
prompt-feedback --hash a3f2b1... --rating 0.85 --comment "Very effective"
```