Source: `docs/manual/prompt-library.md`. This page is generated by `site/scripts/sync-manual-docs.mjs`.
# Prompt Library
The prompt library provides a versioned, searchable catalog of reusable prompt templates. Agents can discover and use prompts from the library, and operators can seed, query, and manage prompts via CLI tools. Prompt quality is tracked through a feedback mechanism.
Source: `internal/promptlib/*`
## Overview
The prompt library provides a versioned, searchable catalog of prompt templates that agents can discover and reuse instead of constructing prompts from scratch. It uses content-hashed immutable versioning (mirroring the tool registry pattern), embedding-based semantic search via the existing vector infrastructure, and a salience-inspired ranking formula to surface the most effective prompts.
When enabled (`CRUVERO_PROMPTLIB_ENABLED=true`), agents gain two tools — `prompt_search` and `prompt_create` — that appear in the normal tool palette alongside memory and registry tools.
## Prompt Types
| Type | Constant | Description |
|---|---|---|
| System | `system` | System-level instructions for the LLM |
| User | `user` | User-facing prompt templates |
| Task | `task` | Task decomposition and planning prompts |
| Repair | `repair` | Error recovery and repair prompts |
| Routing | `routing` | Agent routing and delegation prompts |
| Tool Description | `tool_description` | Tool usage instruction prompts |
| Chain of Thought | `chain_of_thought` | Reasoning chain prompts |
| Custom | `custom` | User-defined prompt types |
## Versioning and Immutability
Every prompt version is identified by a SHA256 content hash computed from `(id, version, content, type)`. The store uses `INSERT ... ON CONFLICT DO NOTHING` with post-insert hash verification — the same pattern as the tool registry (`internal/registry/store.go`).
- Immutable content: once a prompt version is stored, its content cannot change.
- Hash collision detection: if the same `(id, version)` is stored with different content, the store returns an error.
- Version incrementing: new prompt content requires a new version number.
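The hashing scheme above can be sketched as follows; the exact field encoding and separator used by `internal/promptlib` are assumptions here, not the library's actual wire format:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// promptHash computes a SHA256 content hash over the identity fields.
// The field order follows the docs; the "|" separator is an assumption.
func promptHash(id string, version int, content, promptType string) string {
	h := sha256.New()
	fmt.Fprintf(h, "%s|%d|%s|%s", id, version, content, promptType)
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := promptHash("greeting", 1, "Hello, {{.Name}}!", "user")
	b := promptHash("greeting", 1, "Hello there, {{.Name}}!", "user")
	fmt.Println(a != b) // different content for the same (id, version) yields a different hash
}
```

Because the hash covers the content, the post-insert verification step can detect a collision simply by comparing the stored hash with the recomputed one.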
## Search Pipeline
Three-stage pipeline for semantic prompt discovery:
### Stage 1: Vector Retrieval
- Embed the query text using the configured embedding provider
- Search the `prompt_library` vector store collection for top-K candidates (default K=20)
- Apply tenant isolation filter
### Stage 2: Re-Ranking
Score each candidate using a weighted formula:
```
score = W_sim * similarity + W_qual * quality + W_rec * recency + W_use * usage
```
| Weight | Default | Source |
|---|---|---|
| Similarity | 0.4 | Vector cosine similarity from Stage 1 |
| Quality | 0.3 | `success_rate * avg_llm_rating` from `prompt_metrics` |
| Recency | 0.2 | Exponential decay from `created_at` (half-life configurable) |
| Usage | 0.1 | Log-scaled usage frequency |
### Stage 3: Result Assembly
- Sort by composite score, truncate to requested limit (default 5)
- Optionally render templates with provided parameters
- Return scored results with component breakdowns
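The assembly step amounts to a sort-and-truncate over the scored candidates. A sketch, where the `scored` type and its fields are illustrative rather than the library's actual API:

```go
package main

import (
	"fmt"
	"sort"
)

type scored struct {
	Hash  string
	Score float64
}

// assemble sorts candidates by descending composite score and truncates
// to the requested result limit (default 5).
func assemble(candidates []scored, limit int) []scored {
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Score > candidates[j].Score
	})
	if len(candidates) > limit {
		candidates = candidates[:limit]
	}
	return candidates
}

func main() {
	out := assemble([]scored{{"a", 0.4}, {"b", 0.9}, {"c", 0.7}}, 2)
	fmt.Println(out[0].Hash, out[1].Hash) // b c
}
```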
## Template Rendering
Prompts use Go `text/template` syntax for parameterized content:
```
You are a {{.Role}} agent. Your task is to {{.Task}}.
Available tools: {{range .Tools}}{{.Name}}, {{end}}
```
Parameter definitions (`ParamDef`) specify name, type, required flag, and defaults. The renderer validates parameters before execution and applies defaults for missing optional params.
Supported parameter types: `string`, `int`, `float`, `bool`, `[]string`.
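Rendering with validation and defaults might look like the following sketch; the `ParamDef` struct layout and the `render` helper are assumptions for illustration, not the library's actual API:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// ParamDef mirrors the documented parameter definition fields.
type ParamDef struct {
	Name     string
	Required bool
	Default  any
}

// render validates required params, applies defaults for missing optional
// ones, and executes the Go text/template.
func render(tmpl string, defs []ParamDef, params map[string]any) (string, error) {
	for _, d := range defs {
		if _, ok := params[d.Name]; !ok {
			if d.Required {
				return "", fmt.Errorf("missing required parameter %q", d.Name)
			}
			params[d.Name] = d.Default
		}
	}
	t, err := template.New("prompt").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, params); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	defs := []ParamDef{
		{Name: "Role", Required: true},
		{Name: "Task", Default: "answer questions"},
	}
	out, err := render("You are a {{.Role}} agent. Your task is to {{.Task}}.",
		defs, map[string]any{"Role": "routing"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // You are a routing agent. Your task is to answer questions.
}
```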
## Feedback System
### LLM Auto-Feedback
After each decision prompt persistence event, Cruvero can run a background low-context LLM evaluator (feature-gated) to score prompt usefulness. This records:
- Usage count increment
- Success/failure outcome
- LLM rating (0.0-1.0) written to `prompt_metrics.total_rating` and `prompt_metrics.rating_count`
Evaluator controls:
- `CRUVERO_PROMPT_QUALITY_ENABLED`
- `CRUVERO_PROMPT_QUALITY_TIMEOUT`
- `CRUVERO_PROMPT_QUALITY_MAX_INPUT_BYTES`
- `CRUVERO_PROMPT_QUALITY_MODEL`
### User Feedback
Non-blocking user feedback via the `prompt-feedback` CLI or API. Records a rating and optional comment, contributing to the prompt's running quality average.
Both feedback channels update the `prompt_metrics` table without modifying immutable prompt content.
## Agent Tools

### `prompt_search`
Read-only tool that searches the library by semantic similarity.
Input:
- `query` (required): natural language search text
- `type`: filter by prompt type
- `tags`: filter by tags
- `params`: template parameters for rendering
- `limit`: max results (default 5)
### `prompt_create`
Write tool that creates a new prompt in the library.
Input:
- `name` (required): human-readable name
- `type` (required): prompt type category
- `content` (required): template content
- `description`: what the prompt does
- `parameters`: template parameter definitions
- `tags`: tags for search and filtering
## Configuration
| Variable | Default | Description |
|---|---|---|
| `CRUVERO_PROMPTLIB_ENABLED` | true | Enable/disable prompt library |
| `CRUVERO_PROMPTLIB_COLLECTION` | prompt_library | Vector store collection name |
| `CRUVERO_PROMPTLIB_SEARCH_K` | 20 | Vector retrieval candidate count |
| `CRUVERO_PROMPTLIB_RESULT_LIMIT` | 5 | Max results returned to agent |
| `CRUVERO_PROMPTLIB_W_SIMILARITY` | 0.4 | Weight: vector similarity |
| `CRUVERO_PROMPTLIB_W_QUALITY` | 0.3 | Weight: quality score |
| `CRUVERO_PROMPTLIB_W_RECENCY` | 0.2 | Weight: recency decay |
| `CRUVERO_PROMPTLIB_W_USAGE` | 0.1 | Weight: usage frequency |
| `CRUVERO_PROMPTLIB_HALF_LIFE` | 168h | Recency decay half-life |
| `CRUVERO_PROMPTLIB_FEEDBACK_ENABLED` | true | Enable user feedback |
| `CRUVERO_PROMPTLIB_AUTO_FEEDBACK` | true | Enable LLM self-assessment |
## CLI Tools

### `prompt-seed`
Load prompts from YAML/JSON files into the library.
```shell
prompt-seed --file prompts.yaml --tenant my_tenant
prompt-seed --dir ./prompts/ --dry-run
```
### `prompt-query`
Search the prompt library from the command line.
```shell
prompt-query --query "system routing prompt" --type system --limit 3
prompt-query --query "task decomposition" --json
```
### `prompt-feedback`
Submit user feedback for a prompt.
```shell
prompt-feedback --hash a3f2b1... --rating 0.85 --comment "Very effective"
```