
Lexicon

Define the behavioral law of your software. Verify it automatically. Keep AI agents aligned with your specifications.

Lexicon is a system for defining and enforcing the behavioral law of software systems. It lets you declare what your code must do, must not do, and what is out of scope — then continuously verifies that your implementation matches.

Lexicon provides a language for describing the meaning of a codebase. This language is the Lexicon.

[Diagram: The Lexicon Model. Core specification — Contracts (the behavioral law), Conformance (proof of behavior), Behavior (scenario testing), API Surface (public interface), Coverage (clause verification). Enforcement — Gates (pass/fail enforcement), Scoring (health metrics), Verification Result (pass/warn/fail), AI Context (agent boundaries). Enforcement reads from the core specification.]

Tests alone do not define system law. Architecture conventions drift. Contracts are usually implicit — scattered across comments, wikis, and team memory. When AI agents enter the picture, the problem gets worse: tools that can change code need explicit boundaries, not tribal knowledge.

Lexicon makes these boundaries concrete:

  • Contracts make behavior explicit — not implied by tests or convention
  • Conformance suites make proof reusable — not scattered across ad hoc test files
  • Gates make enforcement deterministic — not dependent on review diligence
  • Scoring makes health measurable — not a subjective judgment call
  • Architecture rules make structure enforceable — not just a diagram in a wiki
  • AI context makes boundaries legible to agents — not hidden in human assumptions

Lexicon combines eight interlocking concepts into a governed verification system.

Contracts

Declare what your system must do, must not do, and what is out of scope. Contracts are the source of truth for behavioral law.
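As a conceptual sketch only — Lexicon's actual contract format is not shown in this document, so the structure and field names below are assumptions — a contract separates required behavior, forbidden behavior, and what is deliberately left unspecified:

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    """Hypothetical contract model: behavioral law as explicit clauses."""
    name: str
    must: list[str] = field(default_factory=list)          # required behavior
    must_not: list[str] = field(default_factory=list)      # forbidden behavior
    out_of_scope: list[str] = field(default_factory=list)  # explicitly unspecified

# An illustrative contract for a rate limiter:
rate_limiter = Contract(
    name="rate-limiter",
    must=[
        "reject requests above the configured limit",
        "reset counters at the window boundary",
    ],
    must_not=["drop requests below the limit"],
    out_of_scope=["persistence of counters across restarts"],
)
```

Making the out-of-scope list explicit matters as much as the must/must-not lists: it tells both humans and agents which behaviors are intentionally unspecified rather than forgotten.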

Conformance

Generate reusable test harnesses from contracts. Multiple implementations can be verified against the same specification.
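The idea can be illustrated with a minimal hand-written harness (a sketch of the pattern, not Lexicon's generated output): one suite encodes the contract's clauses, and any implementation is passed in and verified against the same suite.

```python
def conformance_suite(make_stack):
    """Hypothetical conformance harness: any implementation supplied as a
    factory is verified against the same behavioral clauses."""
    s = make_stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2, "MUST: pop returns the most recently pushed item"
    assert s.pop() == 1
    assert s.is_empty(), "MUST: popping all items leaves the stack empty"

class ListStack:
    """One implementation among potentially many."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

conformance_suite(ListStack)  # a second implementation would reuse the same suite
```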

Coverage

Measure which contract clauses are actually tested. Know whether your tests prove behavior, not just exercise code.
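A toy version of the measurement (the clause names and the test-to-clause mapping mechanism are illustrative assumptions): coverage is the fraction of contract clauses that at least one test actually proves.

```python
# All clauses declared by the contract (hypothetical identifiers):
clauses = {
    "must-reject-over-limit",
    "must-reset-window",
    "must-not-drop-under-limit",
}

# Clauses that some test explicitly proves:
proved_by_tests = {"must-reject-over-limit", "must-reset-window"}

covered = clauses & proved_by_tests
coverage = len(covered) / len(clauses)
assert round(coverage, 2) == 0.67  # one clause is exercised by no test
```

The gap between clause coverage and ordinary line coverage is the point: a test can execute every line yet prove none of the contract's clauses.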

Gates

Enforce hard pass/fail verification checks. Gates prevent regressions from entering the system — by humans or AI.

Scoring

Measure system health across weighted dimensions. Deterministic, explainable, and resistant to gaming.
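A weighted-dimension score can be sketched as follows (the dimension names and weights are illustrative, not Lexicon's actual defaults): each dimension contributes a fixed share, so the same inputs always produce the same score and every point is attributable to a dimension.

```python
# Hypothetical weights; in a real setup these would come from configuration.
WEIGHTS = {"contracts": 0.4, "coverage": 0.3, "architecture": 0.2, "docs": 0.1}

def health_score(dimensions: dict[str, float]) -> float:
    """Each dimension scores 0.0-1.0; the result is a weighted average."""
    return round(sum(WEIGHTS[k] * dimensions[k] for k in WEIGHTS), 3)

score = health_score(
    {"contracts": 1.0, "coverage": 0.8, "architecture": 1.0, "docs": 0.5}
)
# 0.4*1.0 + 0.3*0.8 + 0.2*1.0 + 0.1*0.5 = 0.89
```

Because the formula is a fixed linear combination, a score change can always be traced to a specific dimension, which is what makes it explainable and hard to game.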

Architecture

Define structural rules: crate roles, dependency directions, layer boundaries. Detect drift before it becomes debt.
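Dependency-direction rules reduce to a simple check, sketched here with an assumed three-layer ordering (the layer names and edge list are illustrative): any dependency pointing from a lower layer to a higher one is drift.

```python
# Hypothetical layer ordering, lowest first; dependencies must point downward.
LAYERS = ["core", "domain", "api"]

def drift(edges: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return dependencies that point from a lower layer to a higher one."""
    rank = {layer: i for i, layer in enumerate(LAYERS)}
    return [(a, b) for a, b in edges if rank[a] < rank[b]]

edges = [("api", "domain"), ("domain", "core"), ("core", "api")]
assert drift(edges) == [("core", "api")]  # the upward edge is the violation
```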

Ecosystem

Govern multiple repositories with shared contracts, role-based policies, and cross-repo compatibility checks.

AI Context

Generate structured context for AI agents. Agents understand system law and operate within defined boundaries.

Every lexicon verify run executes a deterministic pipeline. Contracts are checked against conformance tests, coverage is measured, API drift is validated, gates are enforced, and a weighted score produces a clear verdict.
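The verdict logic can be sketched like this (the stage names follow the pipeline described above; the aggregation rule — worst stage result wins — is an assumption about how such a verdict would typically be computed):

```python
# Hypothetical verdict aggregation: the overall verdict is the worst
# result produced by any pipeline stage.
SEVERITY = {"pass": 0, "warn": 1, "fail": 2}

def verdict(stage_results: dict[str, str]) -> str:
    return max(stage_results.values(), key=lambda r: SEVERITY[r])

run = {
    "conformance": "pass",
    "coverage": "warn",       # some clauses untested
    "api_validation": "pass",
    "gates": "pass",
}
assert verdict(run) == "warn"
```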

[Diagram: Verification pipeline for lexicon verify — Contracts (define) → Conformance (prove) → Coverage (measure) → API Validation (drift check) → Gates (enforce) → Scoring (evaluate) → Verdict: Pass / Warn / Fail, plus an audit record.]

Every run writes an audit record. Every score is explainable. Every gate result is attributable.

Lexicon works at three levels. Start with what you need. Grow into what you want.

A single-library user gets contracts, tests, and verification in minutes. A systems architect gets architecture governance across an entire platform. The same tool serves both without forcing enterprise complexity on small projects.

Repo Mode

For single libraries

  • Contracts
  • Conformance tests
  • Coverage analysis
  • Verification gates
  • Scoring
  • AI context sync
Workspace Mode

For multi-crate workspaces

  • Crate roles
  • Dependency rules
  • Architecture graph
  • Local shared contracts
  • Workspace verify
Ecosystem Mode

For multi-repo platforms

  • Repository roles
  • Shared contracts
  • Dependency law
  • Impact analysis
  • Ecosystem governance

Lexicon generates structured context so AI agents understand system law. When used with Claude Code, it syncs contracts, gates, scoring, and edit policies into CLAUDE.md — so the AI always operates within defined boundaries.

AI can propose improvements, but every patch goes through verification. Patches that violate required gates are rejected. Every AI-driven change is recorded in the audit trail.
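That enforcement loop can be sketched as follows (the gate names, patch identifiers, and audit-record shape are illustrative assumptions, not Lexicon's actual data model): a patch is applied only if every required gate passes, and every decision — acceptance or rejection — is appended to the audit trail.

```python
# Hypothetical gate-enforced patch loop with an audit trail.
audit_log: list[dict] = []

def apply_patch(patch_id: str, gate_results: dict[str, bool]) -> bool:
    """Accept a patch only if all required gates pass; record the decision."""
    accepted = all(gate_results.values())
    audit_log.append(
        {"patch": patch_id, "gates": gate_results, "accepted": accepted}
    )
    return accepted

assert apply_patch("patch-1", {"contracts": True, "coverage": True}) is True
assert apply_patch("patch-2", {"contracts": True, "coverage": False}) is False
assert len(audit_log) == 2  # rejected patches are recorded too
```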

The architecture assumes AI can make locally clever but globally bad choices — and defends against that.