
lexicon chat

```sh
lexicon chat
```

Starts an interactive design session where an AI agent guides you through building a complete set of architecture artifacts — contracts, gates, conformance tests, behavior scenarios — then compiles everything into a constraint-aware implementation prompt.

`chat` is the primary interface for lexicon. Running `lexicon` with no subcommand also starts a chat session. It is a multi-turn conversation where the AI proactively:

  • Decomposes your idea into concrete artifacts
  • Suggests missing constraints, edge cases, and forbidden behaviors
  • Challenges vague specifications and pushes for precision
  • Tracks completeness across contracts, gates, conformance, and scoring
  • Generates implementation prompts on request, or proactively when specs are reasonably complete

Requires authentication with an AI provider. Run `lexicon auth login` first.

Sessions are automatically saved after every turn. When you start a new chat, lexicon checks for a previous session and offers to resume it:

```
Resume previous session (5 turns, 2h ago)? [Y/n]
```

Pressing Enter (or y) restores the full conversation history with a context-rich recap:

```
Lexicon Design Session (resumed)
───────────────────────────────────────────────────────────
5 previous turns restored.
Topic: "I need an async key-value store with TTL"
Last: "Here's the updated contract with TTL fields added..."
1 artifact(s) in session.
Type 'exit' to end the session.
```
  • Topic shows the first substantive message that started the session (not short replies like “yes”).
  • Last shows the most recent AI response so you know where you left off.

Type `n` to start fresh. Sessions are stored in `.lexicon/conversations/` as JSON files.
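Because sessions are plain JSON, you can inspect them with any tooling. The on-disk schema is not documented, so the `topic` and `turns` fields below are a hypothetical shape used only for illustration:

```python
import json
from pathlib import Path

# Hypothetical session file shape; lexicon's real field names may differ.
session = {
    "topic": "I need an async key-value store with TTL",
    "turns": [
        {"role": "user", "content": "I need an async key-value store with TTL"},
        {"role": "assistant", "content": "Here's the updated contract..."},
    ],
}

path = Path(".lexicon/conversations")
path.mkdir(parents=True, exist_ok=True)
(path / "session.json").write_text(json.dumps(session, indent=2))

# Read it back and print a recap, similar to what the resume banner shows.
data = json.loads((path / "session.json").read_text())
print(f"Topic: {data['topic']!r}")
print(f"{len(data['turns'])} previous turns restored.")
```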

  1. Start — describe what you want to build in plain language (or resume a previous session)
  2. Design loop — the AI suggests artifacts and extensions; you confirm, adjust, or redirect
  3. Artifact creation — the AI creates contracts, gates, and tests automatically as you refine
  4. Prompt compilation — ask the AI to generate an implementation prompt, or it will suggest one when specs are reasonably complete

All artifacts are written to disk as they’re created.

```
lexicon chat

Lexicon Design Session
────────────────────────────────────────────────────────────
Describe what you want to build. I'll help you design the
contracts, gates, and constraints, then generate an
implementation prompt.

Type 'exit' to end the session.

you> I need an async key-value store with TTL

⠋ Thinking...
▸ Creating contract...
✓ Created contract: kv-store (specs/contracts/kv-store.toml)

Let me break down the key-value store requirements. I've created
an initial contract with the core invariants. Here's what we have
so far:

**Contract: kv-store**
- 3 invariants: get-after-set, TTL expiry, absent-key-returns-none
- 2 required semantics: set must be durable, get must be consistent

I notice a few gaps we should address:

1. **No forbidden behaviors defined.** Should stale data ever be
   returned after TTL expiry? Should silent write failures be
   allowed?
2. **No edge cases.** What about: cache full + new write? TTL=0?
   Concurrent reads during eviction?
3. **No gates defined.** I'd recommend fmt, clippy, and unit test
   gates at minimum.

Which would you like to tackle first?
```

During the session, the AI uses structured directives to create artifacts:

| Directive | Effect |
| --- | --- |
| `CREATE_CONTRACT` | Creates a new contract TOML file |
| `UPDATE_CONTRACT` | Updates an existing contract |
| `CREATE_GATE` | Adds a verification gate |
| `CREATE_CONFORMANCE` | Generates conformance tests for a contract |
| `CREATE_BEHAVIOR` | Generates behavior scenarios for a contract |
| `CREATE_PROPERTY_TESTS` | Generates property-based tests (proptest) for a contract |
| `CREATE_FUZZ_TARGET` | Generates a fuzz test harness (libfuzzer) for a contract |
| `CREATE_EDGE_CASES` | Generates targeted edge case tests for a contract |
| `INFER_CONTRACT` | Infers a contract from the public API source code |
| `COVERAGE_REPORT` | Shows contract test coverage gaps |
| `API_SCAN` | Scans public API and shows drift from baseline |
| `API_BASELINE` | Saves current API as the baseline for drift detection |
| `GENERATE_PROMPT` | Compiles implementation prompts from session or on-disk contracts |
| `PROMPT_LIST` | Lists all generated implementation prompts |
| `PROMPT_STATUS` | Checks which prompts are stale and need regeneration |
| `REGENERATE_PROMPTS` | Regenerates all stale implementation prompts |
| `SYNC_CLAUDE` | Syncs CLAUDE.md with the current repo state |
| `DOCTOR` | Checks repo health (manifest, gates, scoring, CLAUDE.md, API baseline) |
| `RUN_VERIFY` | Runs verification to check current state |
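Conceptually, each directive is a structured marker the CLI extracts from the model's reply and maps to an action. Lexicon's actual wire format is not documented; the sketch below assumes a simple `DIRECTIVE: argument` line syntax purely for illustration:

```python
import re

# A subset of the directives from the table above.
KNOWN_DIRECTIVES = {
    "CREATE_CONTRACT", "UPDATE_CONTRACT", "CREATE_GATE",
    "CREATE_CONFORMANCE", "GENERATE_PROMPT", "RUN_VERIFY",
}

def extract_directives(reply: str) -> list[tuple[str, str]]:
    """Return (directive, argument) pairs found in an AI reply.

    Assumes an illustrative 'DIRECTIVE: argument' line format,
    not lexicon's actual protocol.
    """
    pairs = []
    for line in reply.splitlines():
        m = re.match(r"^([A-Z_]+):\s*(.*)$", line.strip())
        if m and m.group(1) in KNOWN_DIRECTIVES:
            pairs.append((m.group(1), m.group(2)))
    return pairs

reply = "Let me set that up.\nCREATE_CONTRACT: kv-store\nCREATE_GATE: clippy"
print(extract_directives(reply))
```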

When an action fails (e.g., generating tests for a contract that doesn’t exist), the error is fed back to the AI automatically. Instead of silently failing, the AI will:

  • Diagnose what went wrong (e.g., “Contract not found: content-addressing. Available contracts: kv-store, auth-service.”)
  • Suggest corrections (e.g., the right contract name, or creating the missing contract first)

This means you don’t need to manually debug action failures — the AI sees the errors and adjusts.
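The feedback loop described above amounts to catching a failed action's error and appending it to the conversation before the next model call. A minimal sketch of that pattern (the function, message shape, and error text are illustrative, not lexicon's actual internals):

```python
def run_action(name: str, contract: str, contracts: set[str]) -> str:
    # Hypothetical action runner: fails when the target contract is missing.
    if contract not in contracts:
        available = ", ".join(sorted(contracts))
        raise ValueError(f"Contract not found: {contract}. Available contracts: {available}.")
    return f"ok: {name}({contract})"

messages = []
contracts = {"kv-store", "auth-service"}
try:
    run_action("CREATE_CONFORMANCE", "content-addressing", contracts)
except ValueError as e:
    # Feed the error back as a conversation message so the AI can
    # diagnose the failure and suggest a correction on its next turn.
    messages.append({"role": "system", "content": f"Action failed: {e}"})

print(messages[0]["content"])
```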

The `GENERATE_PROMPT` action compiles an implementation prompt from your contracts. It works with:

  • Contracts created during the current session, or
  • All contracts already on disk in `specs/contracts/`

Prompts are written to `specs/prompts/<number>-<slug>.md`. You don't need to create contracts in the same session — if you have existing contracts, just ask the AI to generate implementation prompts and it will find them.
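Since prompt filenames follow the `<number>-<slug>.md` pattern, listing them in generation order is straightforward. A short sketch (the filenames below are made-up examples, not real output):

```python
import re
from pathlib import Path

prompts = Path("specs/prompts")
prompts.mkdir(parents=True, exist_ok=True)

# Create two example prompt files (illustrative names only).
for name in ["001-kv-store.md", "002-auth-service.md"]:
    (prompts / name).touch()

def number(p: Path) -> int:
    """Extract the numeric prefix from a '<number>-<slug>.md' filename."""
    m = re.match(r"(\d+)-", p.name)
    return int(m.group(1)) if m else 0

for p in sorted(prompts.glob("*.md"), key=number):
    print(p.name)
```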

The AI tracks completeness across these dimensions:

  • Contract — invariants, required semantics, forbidden semantics, edge cases, examples
  • Gates — verification gates defined and passing
  • Conformance — tests for each invariant
  • Scoring — quality model defined
  • Architecture — dependency and layering constraints captured

Every artifact created during the session is recorded in the audit log with the appropriate action type (e.g., `ContractCreate`, `GateCreate`) and actor `Ai`.
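The audit log's on-disk format is not specified here; as a rough illustration, an entry could be modeled as one JSON object per line. The `.lexicon/audit.log` path and the field names below are assumptions, not documented behavior:

```python
import json
from pathlib import Path

log = Path(".lexicon/audit.log")  # hypothetical location
log.parent.mkdir(parents=True, exist_ok=True)

# Record one audit entry: action type plus the actor that performed it.
entry = {
    "action": "ContractCreate",
    "actor": "Ai",
    "artifact": "specs/contracts/kv-store.toml",
}
with log.open("a") as f:
    f.write(json.dumps(entry) + "\n")

# Read back the most recent entry.
last = json.loads(log.read_text().splitlines()[-1])
print(last["action"], last["actor"])
```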