About Veritas

Architecture, design decisions, and how the dual verification layer works.

No legal conclusion without visible support

Veritas generates structured legal analysis in which every claim is traceable to verified authority. It was built to demonstrate trustworthy legal AI: a system that mitigates hallucination risk and makes AI-generated legal work reviewable by attorneys.

Pipeline Architecture

Pipeline diagram: core steps, processing, and agentic (additive) stages.

Agentic features (quality assessment, search reformulation, self-critique) are additive — if any step fails, the system falls back to linear mode.
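The additive-fallback pattern can be sketched in a few lines. This is a hypothetical illustration (the function names and result shape are assumptions, not Veritas's actual code): each agentic enhancement runs inside a guard, and any failure simply returns the linear result unchanged.

```python
# Sketch of the additive-fallback pattern: an agentic enhancement that
# fails never breaks the pipeline; the linear result is kept instead.
def with_fallback(agentic_step, linear_result):
    """Run an agentic enhancement; fall back to the linear result on any error."""
    try:
        return agentic_step(linear_result)
    except Exception:
        return linear_result  # graceful degradation to linear mode

# Usage: a failing quality-assessment step degrades gracefully.
def flaky_quality_assessment(result):
    raise RuntimeError("assessment service unavailable")

baseline = {"citations": ["410 U.S. 113"], "assessed": False}
outcome = with_fallback(flaky_quality_assessment, baseline)  # → baseline, unmodified
```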

How It Works

STEP 1

Issue Extraction

GPT-4o identifies distinct legal issues from your question or uploaded document, along with jurisdiction assumptions and search queries.
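The extraction step can be pictured as structured JSON coming back from the model. The field names below are illustrative assumptions, not Veritas's actual schema:

```python
import json

# Hypothetical shape of the issue-extraction output: one entry per
# distinct legal issue, with a jurisdiction assumption and search queries.
raw = json.dumps({
    "issues": [
        {
            "issue": "Whether the non-compete clause is enforceable",
            "jurisdiction": "California (assumed from query)",
            "search_queries": [
                "non-compete enforceability California",
                "Business and Professions Code 16600 restraint of trade",
            ],
        }
    ]
})

extraction = json.loads(raw)
queries = extraction["issues"][0]["search_queries"]  # fed to Step 2
```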

STEP 2

Authority Retrieval

CourtListener's semantic search finds relevant case law across millions of opinions. In agentic mode, retrieval quality is assessed and queries are reformulated when results are weak.
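The assess-and-reformulate loop can be sketched as below. The `search`, `assess_quality`, and `reformulate` callables stand in for CourtListener search, an LLM quality grader, and an LLM query rewriter; the threshold and retry cap are illustrative assumptions.

```python
# Sketch of the agentic retrieval loop: assess result quality, reformulate
# the query when results are weak, and cap the number of retries.
def retrieve_with_reformulation(query, search, assess_quality, reformulate,
                                max_attempts=3):
    results = search(query)
    for _ in range(max_attempts - 1):
        if assess_quality(results) >= 0.5:  # threshold is illustrative
            break
        query = reformulate(query)
        results = search(query)
    return query, results

# Toy stubs: the first query returns nothing; the reformulated one succeeds.
corpus = {"breach of contract damages": ["Hadley v. Baxendale"]}
final_query, cases = retrieve_with_reformulation(
    "contract broken money owed",
    search=lambda q: corpus.get(q, []),
    assess_quality=lambda r: 1.0 if r else 0.0,
    reformulate=lambda q: "breach of contract damages",
)
```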

STEP 3

IRAC Generation

GPT-4o produces structured analysis (Issue, Rule, Application, Counterarguments, Conclusion) grounded strictly in the retrieved authorities.
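One way to picture the structured output is a simple container with one field per IRAC component plus the authorities it cites. This is an illustrative sketch; Veritas's internal representation may differ.

```python
from dataclasses import dataclass, field

# Illustrative container for one issue's IRAC analysis.
@dataclass
class IracAnalysis:
    issue: str
    rule: str
    application: str
    counterarguments: str
    conclusion: str
    cited_authorities: list = field(default_factory=list)  # inputs to Step 5

analysis = IracAnalysis(
    issue="Enforceability of a liquidated-damages clause",
    rule="Enforceable if damages were hard to estimate and the sum is reasonable.",
    application="Here, the stipulated sum tracks the anticipated losses.",
    counterarguments="Defendant may argue the sum operates as a penalty.",
    conclusion="The clause is likely enforceable.",
    cited_authorities=["Wassenaar v. Panos, 331 N.W.2d 357 (Wis. 1983)"],
)
```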

STEP 4

Self-Critique

A separate GPT-4o call reviews the analysis for logical gaps and unsupported claims, triggering revision if needed — the system criticizes its own work.
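The critique-then-revise loop looks roughly like this. `critique` and `revise` stand in for the two separate GPT-4o calls; the verdict shape and round cap are assumptions made for the sketch.

```python
# Sketch of the self-critique loop: review the draft, revise if problems
# are found, and stop once the critique passes or rounds run out.
def critique_and_revise(draft, critique, revise, max_rounds=2):
    for _ in range(max_rounds):
        verdict = critique(draft)
        if verdict["ok"]:
            break
        draft = revise(draft, verdict["problems"])
    return draft

# Toy stubs: the first critique finds an unsupported claim; revision fixes it.
def toy_critique(d):
    ok = "unsupported" not in d
    return {"ok": ok, "problems": [] if ok else ["unsupported claim"]}

revised = critique_and_revise(
    "analysis with an unsupported claim",
    critique=toy_critique,
    revise=lambda d, problems: d.replace("an unsupported", "a cited"),
)  # → "analysis with a cited claim"
```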

STEP 5

Dual Verification

Every citation is checked in two ways: was it in the retrieval set (grounding check), and does it exist in CourtListener's database of 18M+ real citations (existence check)?

STEP 6

Confidence Scoring

Each citation receives a confidence rating based on verification results, and sections with weak support are flagged for human review.

Dual Verification Strategy

The technical centerpiece of the system.

Grounding Check

Every authority cited in the IRAC output must trace back to a case actually retrieved from CourtListener search. If the LLM invents a citation not in the retrieval set, it gets flagged immediately as potentially hallucinated.

Signal

"Is this citation in our retrieval set?"

Citation Existence Check

The entire generated analysis text is sent to CourtListener's Citation Lookup API, which parses every citation using Eyecite (developed with the Harvard Library Innovation Lab) and verifies each against CourtListener's database of 18M+ real legal citations.

Signal

"Does this citation actually exist in real law?"

Confidence Ratings

Level         What It Means
STRONG        Highest confidence — citation independently confirmed
MODERATE      Exists but wasn't found during retrieval
WEAK          Found during search but couldn't be independently verified
UNVERIFIED    Likely hallucinated — flag for human review
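Combining the two boolean signals into the four ratings could look like this; the exact mapping logic is an illustrative assumption:

```python
# Map the (grounding, existence) signals to a confidence rating.
def confidence(grounded, exists):
    if grounded and exists:
        return "STRONG"      # independently confirmed
    if exists:
        return "MODERATE"    # real case, but not in the retrieval set
    if grounded:
        return "WEAK"        # retrieved, but existence unconfirmed
    return "UNVERIFIED"      # likely hallucinated; flag for review

ratings = [confidence(g, e) for g, e in
           [(True, True), (False, True), (True, False), (False, False)]]
# → ["STRONG", "MODERATE", "WEAK", "UNVERIFIED"]
```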

Evaluation Framework

28 test cases · 10+ practice areas · 13+ metrics · 2 pipeline modes

Includes LLM-as-judge rubrics, precision@K, MRR, citation accuracy, grounding rate, doctrine accuracy, cost/latency tracking, and head-to-head pipeline benchmarking between linear and agentic modes.
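Two of the retrieval metrics named above have standard definitions and can be computed directly. A worked example with toy relevance labels:

```python
# Precision@K: fraction of the top-K results that are relevant.
def precision_at_k(ranked, relevant, k):
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

# MRR: mean reciprocal rank of the first relevant result, over queries.
def mrr(queries):
    total = 0.0
    for ranked, relevant in queries:
        for rank, doc in enumerate(ranked, start=1):
            if doc in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)

ranked = ["case_a", "case_b", "case_c", "case_d"]
relevant = {"case_b", "case_d"}
p_at_3 = precision_at_k(ranked, relevant, 3)  # 1 relevant in top 3 → 1/3
score = mrr([(ranked, relevant), (["case_x"], {"case_x"})])  # (1/2 + 1) / 2 = 0.75
```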

Limitations & Responsible Use

Important Considerations

Veritas is a research tool — not a replacement for professional legal judgment

Not Legal Advice

Veritas generates analysis that must be reviewed and validated by a qualified attorney. It does not provide legal advice and should not be relied upon as such.

Dataset Limitations

Coverage is limited to CourtListener’s database, which primarily includes U.S. federal and state case law. International, tribal, and some specialized court decisions may not be represented.

Jurisdiction Assumptions

Jurisdiction detection is inferred from the query text and may be imperfect for ambiguous or multi-jurisdictional questions. Always verify jurisdiction applicability.

Verification ≠ Correctness

Citation verification confirms that a case exists in the database — it does not confirm that the case supports the legal proposition for which it is cited. Reasoning quality must be independently assessed.

Probabilistic Outputs

LLM outputs are inherently probabilistic. Even when all citations are verified, the legal analysis, application of rules, and conclusions may contain errors or omissions.

Built as a demonstration of verifiable legal AI — the kind of system where no legal conclusion exists without visible support.