# lexicon chat

## Description

Starts an interactive design session where an AI agent guides you through building a complete set of architecture artifacts — contracts, gates, conformance tests, behavior scenarios — then compiles everything into a constraint-aware implementation prompt.
`chat` is the primary interface for lexicon. Running `lexicon` with no subcommand also starts a chat session. It is a multi-turn conversation where the AI proactively:
- Decomposes your idea into concrete artifacts
- Suggests missing constraints, edge cases, and forbidden behaviors
- Challenges vague specifications and pushes for precision
- Tracks completeness across contracts, gates, conformance, and scoring
- Generates implementation prompts on request, or proactively when specs are reasonably complete
Requires authentication with an AI provider. Run `lexicon auth login` first.
## Session continuity

Sessions are automatically saved after every turn. When you start a new chat, lexicon checks for a previous session and offers to resume it:

```text
Resume previous session (5 turns, 2h ago)? [Y/n]
```

Pressing Enter (or `y`) restores the full conversation history with a context-rich recap:
```text
Lexicon Design Session (resumed)
───────────────────────────────────────────────────────────
5 previous turns restored.
Topic: "I need an async key-value store with TTL"
Last: "Here's the updated contract with TTL fields added..."
1 artifact(s) in session.

Type 'exit' to end the session.
```

- Topic shows the first substantive message that started the session (not short replies like “yes”).
- Last shows the most recent AI response so you know where you left off.
Type `n` to start fresh. Sessions are stored in `.lexicon/conversations/` as JSON files.
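The docs don't specify the on-disk session schema, so the sketch below invents a plausible one (the `topic`, `turns`, and `artifacts` field names are assumptions, not lexicon's actual format). It shows how "resume the previous session" can amount to loading the most recently modified JSON file:

```python
import json
import pathlib
import time

# Hypothetical session layout -- the real schema used by lexicon may differ.
session = {
    "topic": "I need an async key-value store with TTL",
    "started_at": time.time(),
    "turns": [
        {"role": "user", "content": "I need an async key-value store with TTL"},
        {"role": "assistant", "content": "Here's the updated contract with TTL fields added..."},
    ],
    "artifacts": ["specs/contracts/kv-store.toml"],
}

store = pathlib.Path(".lexicon/conversations")
store.mkdir(parents=True, exist_ok=True)
(store / "session-001.json").write_text(json.dumps(session, indent=2))

# Resuming = load the most recently modified session file.
latest = max(store.glob("*.json"), key=lambda p: p.stat().st_mtime)
resumed = json.loads(latest.read_text())
print(resumed["topic"])                  # feeds the "Topic:" recap line
print(resumed["turns"][-1]["content"])   # feeds the "Last:" recap line
```

Storing one JSON file per session keeps resumption cheap: no database, and the recap lines shown above fall directly out of the loaded structure.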
## How it works

- Start — describe what you want to build in plain language (or resume a previous session)
- Design loop — the AI suggests artifacts and extensions; you confirm, adjust, or redirect
- Artifact creation — the AI creates contracts, gates, and tests automatically as you refine
- Prompt compilation — ask the AI to generate an implementation prompt, or it will suggest one when specs are reasonably complete
All artifacts are written to disk as they’re created.
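The loop described above can be sketched as follows. Everything here is illustrative: the model call is stubbed out, and lexicon's real implementation (in whatever language it is written) will differ:

```python
def fake_model(history):
    # Stand-in for the real AI provider call.
    return "I've drafted an initial contract. Which edge cases should we cover?"

def design_loop(first_message, turns=1):
    """One pass of the chat loop: user message in, AI reply appended."""
    history = [{"role": "user", "content": first_message}]
    for _ in range(turns):
        reply = fake_model(history)
        history.append({"role": "assistant", "content": reply})
        # ...in the real tool, any directives in `reply` would be executed
        # here and artifacts written to disk as they are created...
    return history

h = design_loop("I need an async key-value store with TTL")
print(len(h))  # 2
```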
## Example

```text
lexicon chat

Lexicon Design Session
────────────────────────────────────────────────────────────

Describe what you want to build. I'll help you design the
contracts, gates, and constraints, then generate an
implementation prompt. Type 'exit' to end the session.

you> I need an async key-value store with TTL

⠋ Thinking...
▸ Creating contract...
✓ Created contract: kv-store (specs/contracts/kv-store.toml)

Let me break down the key-value store requirements. I've created
an initial contract with the core invariants. Here's what we have
so far:

**Contract: kv-store**
- 3 invariants: get-after-set, TTL expiry, absent-key-returns-none
- 2 required semantics: set must be durable, get must be consistent

I notice a few gaps we should address:

1. **No forbidden behaviors defined.** Should stale data ever be
   returned after TTL expiry? Should silent write failures be allowed?
2. **No edge cases.** What about: cache full + new write? TTL=0?
   Concurrent reads during eviction?
3. **No gates defined.** I'd recommend fmt, clippy, and unit test
   gates at minimum.

Which would you like to tackle first?
```

## AI action directives

During the session, the AI uses structured directives to create artifacts:
| Directive | Effect |
|---|---|
| CREATE_CONTRACT | Creates a new contract TOML file |
| UPDATE_CONTRACT | Updates an existing contract |
| CREATE_GATE | Adds a verification gate |
| CREATE_CONFORMANCE | Generates conformance tests for a contract |
| CREATE_BEHAVIOR | Generates behavior scenarios for a contract |
| CREATE_PROPERTY_TESTS | Generates property-based tests (proptest) for a contract |
| CREATE_FUZZ_TARGET | Generates a fuzz test harness (libfuzzer) for a contract |
| CREATE_EDGE_CASES | Generates targeted edge case tests for a contract |
| INFER_CONTRACT | Infers a contract from the public API source code |
| COVERAGE_REPORT | Shows contract test coverage gaps |
| API_SCAN | Scans the public API and shows drift from the baseline |
| API_BASELINE | Saves the current API as the baseline for drift detection |
| GENERATE_PROMPT | Compiles implementation prompts from session or on-disk contracts |
| PROMPT_LIST | Lists all generated implementation prompts |
| PROMPT_STATUS | Checks which prompts are stale and need regeneration |
| REGENERATE_PROMPTS | Regenerates all stale implementation prompts |
| SYNC_CLAUDE | Syncs CLAUDE.md with the current repo state |
| DOCTOR | Checks repo health (manifest, gates, scoring, CLAUDE.md, API baseline) |
| RUN_VERIFY | Runs verification to check current state |
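The table doesn't show the wire format of these directives. One common pattern for structured directives in model output, sketched below, is to embed them as tagged tokens and strip them before rendering; the `[[NAME: arg]]` syntax is purely an assumption for illustration, not lexicon's actual format:

```python
import re

# Hypothetical directive syntax: [[DIRECTIVE: optional-arg]] embedded in a reply.
DIRECTIVE_RE = re.compile(r"\[\[(?P<name>[A-Z_]+)(?::\s*(?P<arg>[^\]]+))?\]\]")

# Subset of the directive table above, used to reject unknown tags.
KNOWN = {"CREATE_CONTRACT", "UPDATE_CONTRACT", "CREATE_GATE",
         "GENERATE_PROMPT", "RUN_VERIFY"}

def extract_directives(reply: str):
    """Return (clean_text, directives) parsed out of a model reply."""
    directives = [
        (m.group("name"), m.group("arg"))
        for m in DIRECTIVE_RE.finditer(reply)
        if m.group("name") in KNOWN
    ]
    clean = DIRECTIVE_RE.sub("", reply).strip()
    return clean, directives

text, actions = extract_directives(
    "I'll set up the contract now. [[CREATE_CONTRACT: kv-store]] Done."
)
print(actions)  # [('CREATE_CONTRACT', 'kv-store')]
```

Separating "what the user sees" from "what the tool executes" this way keeps the chat transcript readable while still letting every directive drive a concrete file operation.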
## Error recovery

When an action fails (e.g., generating tests for a contract that doesn’t exist), the error is fed back to the AI automatically. Instead of silently failing, the AI will:
- Diagnose what went wrong (e.g., “Contract not found: `content-addressing`. Available contracts: `kv-store`, `auth-service`.”)
- Suggest corrections (e.g., the right contract name, or creating the missing contract first)
This means you don’t need to manually debug action failures — the AI sees the errors and adjusts.
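The feedback loop can be sketched as: run the directive, and on failure append the error text to the conversation as a message the model sees on its next turn. This is a minimal sketch with a stubbed executor, not lexicon's internals:

```python
def run_action(name, arg, contracts):
    # Stand-in for lexicon's real action executor.
    if name == "CREATE_CONFORMANCE" and arg not in contracts:
        raise ValueError(
            f"Contract not found: {arg}. "
            f"Available contracts: {', '.join(sorted(contracts))}."
        )
    return f"ok: {name}({arg})"

def execute_with_feedback(name, arg, contracts, history):
    """On failure, feed the error back into the chat history instead of crashing."""
    try:
        result = run_action(name, arg, contracts)
        history.append({"role": "system", "content": result})
    except ValueError as e:
        # The AI reads this message on its next turn and can self-correct.
        history.append({"role": "system", "content": f"ACTION FAILED: {e}"})

history = []
execute_with_feedback("CREATE_CONFORMANCE", "content-addressing",
                      {"kv-store", "auth-service"}, history)
print(history[-1]["content"])
# ACTION FAILED: Contract not found: content-addressing. Available contracts: auth-service, kv-store.
```

Because the error message lists the available contracts, the model has everything it needs to suggest the right name or propose creating the missing contract first.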
## Implementation prompts

The GENERATE_PROMPT action compiles an implementation prompt from your contracts. It works with:

- Contracts created during the current session, or
- All contracts already on disk in `specs/contracts/`

Prompts are written to `specs/prompts/<number>-<slug>.md`. You don’t need to create contracts in the same session — if you have existing contracts, just ask the AI to generate implementation prompts and it will find them.
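The `<number>-<slug>.md` layout can be made concrete with a small slug helper. This is illustrative only: the slugification rules and the zero-padding of the number are assumptions, not lexicon's documented behavior:

```python
import pathlib
import re

def prompt_path(number: int, title: str, root: str = "specs/prompts") -> pathlib.Path:
    """Build a specs/prompts/<number>-<slug>.md path from a prompt title."""
    # Slugify: lowercase, runs of non-alphanumerics collapsed to single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return pathlib.Path(root) / f"{number:03d}-{slug}.md"

p = prompt_path(1, "Async KV Store with TTL")
print(p)  # specs/prompts/001-async-kv-store-with-ttl.md
```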
## Completeness tracking

The AI tracks completeness across these dimensions:
- Contract — invariants, required semantics, forbidden semantics, edge cases, examples
- Gates — verification gates defined and passing
- Conformance — tests for each invariant
- Scoring — quality model defined
- Architecture — dependency and layering constraints captured
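A minimal sketch of tracking across these dimensions follows. The dimension names come from the list above, but the data model (a dict of artifact lists per dimension) is invented for illustration:

```python
DIMENSIONS = ("contract", "gates", "conformance", "scoring", "architecture")

def completeness(state: dict) -> dict:
    """Map each dimension to covered/uncovered, plus an overall ratio."""
    report = {d: bool(state.get(d)) for d in DIMENSIONS}
    report["ratio"] = sum(report[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return report

# Two of five dimensions have artifacts so far.
r = completeness({"contract": ["kv-store"], "gates": ["fmt", "clippy"]})
print(r["ratio"])  # 0.4
```

A report like this is what lets the AI decide when specs are "reasonably complete" and proactively offer to generate an implementation prompt.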
## Audit trail

Every artifact created during the session is recorded in the audit log with the appropriate action type (e.g., `ContractCreate`, `GateCreate`) and actor `Ai`.
## See also

- `lexicon verify` — run verification gates and scoring
- `lexicon auth` — manage AI provider authentication