Lexicon
Define and enforce system behavior
Lexicon is a system for defining and enforcing the behavioral law of software systems. It lets you declare what your code must do, must not do, and what is out of scope — then continuously verifies that your implementation matches.
Lexicon provides a language for describing the meaning of a codebase. That language is the Lexicon itself.
Why Lexicon exists
Tests alone do not define system law. Architecture conventions drift. Contracts are usually implicit — scattered across comments, wikis, and team memory. When AI agents enter the picture, the problem gets worse: tools that can change code need explicit boundaries, not tribal knowledge.
Lexicon makes these boundaries concrete:
- Contracts make behavior explicit — not implied by tests or convention
- Conformance suites make proof reusable — not scattered across ad hoc test files
- Gates make enforcement deterministic — not dependent on review diligence
- Scoring makes health measurable — not a subjective judgment call
- Architecture rules make structure enforceable — not just a diagram in a wiki
- AI context makes boundaries legible to agents — not hidden in human assumptions
The Lexicon Model
Lexicon combines eight interlocking concepts into a governed verification system.
Contracts
Declare what your system must do, must not do, and what is out of scope. Contracts are the source of truth for behavioral law.
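To make the three clause sets concrete, here is a minimal sketch in Python. This is not Lexicon's actual contract syntax — the `Contract` type and the example clauses are hypothetical, purely to illustrate the shape of a contract:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    name: str
    must: tuple[str, ...]          # behavior the system is required to exhibit
    must_not: tuple[str, ...]      # behavior the system is forbidden to exhibit
    out_of_scope: tuple[str, ...]  # behavior the contract explicitly does not govern

# A hypothetical contract for a task queue
queue = Contract(
    name="task-queue",
    must=("deliver each task at least once", "preserve FIFO order per key"),
    must_not=("drop a task silently",),
    out_of_scope=("cross-region replication",),
)
```

The point of the three-way split is that silence is never ambiguous: a behavior is either required, forbidden, or declared out of scope.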
Conformance
Generate reusable test harnesses from contracts. Multiple implementations can be verified against the same specification.
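The reuse idea can be sketched as follows — one conformance check, run unchanged against two independent implementations. The stack classes and the clause being checked are illustrative inventions, not Lexicon output:

```python
from collections import deque

def conforms(stack_factory) -> bool:
    """Contract clause: pop must return the most recently pushed value."""
    s = stack_factory()
    s.push(1)
    s.push(2)
    return s.pop() == 2

class ListStack:
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def pop(self): return self._items.pop()

class DequeStack:
    def __init__(self): self._items = deque()
    def push(self, x): self._items.append(x)
    def pop(self): return self._items.pop()

# The same specification verifies both implementations
print(all(conforms(f) for f in (ListStack, DequeStack)))  # True
```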
Coverage
Measure which contract clauses are actually tested. Know whether your tests prove behavior, not just exercise code.
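Clause coverage is a different measurement from line coverage. A minimal sketch of the idea, with hypothetical clause names:

```python
# Every clause declared by the contract
clauses = {"fifo-order", "at-least-once", "no-silent-drop"}

# Clauses that at least one conformance test actually exercises
tested = {"fifo-order", "at-least-once"}

# Clause coverage: the fraction of contract law that is proven, not assumed
coverage = len(clauses & tested) / len(clauses)
print(f"{coverage:.0%}")  # 67%
```

A suite can reach 100% line coverage while leaving a clause like "no-silent-drop" entirely unproven; clause coverage surfaces exactly that gap.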
Gates
Enforce hard pass/fail verification checks. Gates prevent regressions from entering the system — by humans or AI.
Scoring
Measure system health across weighted dimensions. Deterministic, explainable, and resistant to gaming.
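A deterministic weighted score can be sketched in a few lines. The dimension names and weights below are invented for illustration, not Lexicon's actual scoring model:

```python
WEIGHTS = {"contracts": 0.4, "coverage": 0.3, "architecture": 0.3}

def score(dimensions: dict[str, float]) -> float:
    # Each dimension is a value in [0, 1]; the result is a reproducible
    # weighted sum, so the same inputs always yield the same score.
    return round(sum(WEIGHTS[k] * v for k, v in dimensions.items()), 3)

s = score({"contracts": 1.0, "coverage": 0.8, "architecture": 0.5})
# 0.4*1.0 + 0.3*0.8 + 0.3*0.5 = 0.79
print(s)  # 0.79
```

Because the formula is a fixed function of declared inputs, every score can be decomposed into its per-dimension contributions — which is what makes it explainable and hard to game.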
Architecture
Define structural rules: crate roles, dependency directions, layer boundaries. Detect drift before it becomes debt.
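A dependency-direction rule can be sketched as a simple graph check. The layer names and the rule "dependencies must point strictly downward" are illustrative assumptions, not Lexicon's rule syntax:

```python
# Higher number = higher layer; a hypothetical three-layer workspace
LAYERS = {"api": 2, "domain": 1, "storage": 0}

def violations(edges):
    # An edge (a, b) means crate a depends on crate b.
    # Legal only when a sits in a strictly higher layer than b.
    return [(a, b) for a, b in edges if LAYERS[a] <= LAYERS[b]]

print(violations([("api", "domain"), ("storage", "domain")]))
# [('storage', 'domain')] — storage reaching up into domain is drift
```

Running a check like this on every commit is what turns an architecture diagram from documentation into an enforced invariant.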
Ecosystem
Govern multiple repositories with shared contracts, role-based policies, and cross-repo compatibility checks.
AI Context
Generate structured context for AI agents. Agents understand system law and operate within defined boundaries.
How verification works
Every lexicon verify run executes a deterministic pipeline. Contracts are checked against conformance tests, coverage is measured, API drift is validated, gates are enforced, and a weighted score produces a clear verdict.
Every run writes an audit record. Every score is explainable. Every gate result is attributable.
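The pipeline shape described above can be sketched as a sequence of named checks whose results form the audit record. The stage names are illustrative, and real checks would inspect the codebase rather than return constants:

```python
def verify(checks):
    # Run every stage; the full per-stage result map IS the audit record,
    # so every verdict is attributable to the stage that produced it.
    results = {name: check() for name, check in checks.items()}
    verdict = "pass" if all(results.values()) else "fail"
    return {"results": results, "verdict": verdict}

run = verify({
    "contracts":  lambda: True,
    "coverage":   lambda: True,
    "api_drift":  lambda: True,
    "gates":      lambda: False,  # a single failed gate fails the whole run
})
print(run["verdict"])  # fail
```

Determinism here means the verdict is a pure function of the stage results: rerunning the same pipeline on the same code always yields the same record.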
Progressive scope
Lexicon works at three levels. Start with what you need. Grow into what you want.
A single-library user gets contracts, tests, and verification in minutes. A systems architect gets architecture governance across an entire platform. The same tool serves both without forcing enterprise complexity on small projects.
For single libraries
- Contracts
- Conformance tests
- Coverage analysis
- Verification gates
- Scoring
- AI context sync
For multi-crate workspaces
- Crate roles
- Dependency rules
- Architecture graph
- Local shared contracts
- Workspace verify
For multi-repo platforms
- Repository roles
- Shared contracts
- Dependency law
- Impact analysis
- Ecosystem governance
Safe AI-assisted development
Lexicon generates structured context so AI agents understand system law. When used with Claude Code, it syncs contracts, gates, scoring, and edit policies into CLAUDE.md — so the AI always operates within defined boundaries.
AI can propose improvements, but every patch goes through verification. Patches that violate required gates are rejected. Every AI-driven change is recorded in the audit trail.
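The accept/reject flow for AI-proposed patches can be sketched like this. The gate names and audit structure are hypothetical, chosen only to show that rejected patches are recorded just like accepted ones:

```python
audit_trail = []

def apply_patch(patch_id, gate_results):
    # A patch is accepted only if every required gate passes.
    verdict = "accepted" if all(gate_results.values()) else "rejected"
    # Rejections are not silent: every attempt lands in the audit trail.
    audit_trail.append({"patch": patch_id, "gates": gate_results, "verdict": verdict})
    return verdict

print(apply_patch("ai-001", {"no_api_drift": True, "coverage_floor": False}))
# rejected
```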
The architecture assumes AI can make locally clever but globally bad choices — and defends against that.