About

Simon Paris

I design AI systems that don't break.

Most LLM systems running in production today are clever prototypes duct-taped into place. They work in the demo. They fail in ways you can't reproduce. Post-mortems end with "the model did something weird."

I came up through backend systems — the kind where failure has consequences. When teams started pulling me into GenAI work, I kept seeing the same gap: engineers treating LLMs like deterministic functions, then being surprised when they weren't. The STATE framework came out of that gap.

The Thesis

State Beats Intelligence.

A mid-tier model with proper state management outperforms a frontier model running stateless — every single time. The model is not your reliability problem. The architecture around it is.

This is not a philosophy. It's a constraint set — for AI systems that have to work in regulated, revenue-critical, and user-facing contexts where "it usually works" is not a delivery standard.

The STATE Framework

Five pillars. Zero ambiguity.

S
Structured

Every operation initializes a typed state object. Stage always reflects current execution position.
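A minimal sketch of what a typed state object can look like, in Python. The stage names and fields are illustrative, not prescribed by the framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    INIT = "init"
    EXTRACT = "extract"
    VALIDATE = "validate"
    DONE = "done"

@dataclass
class RunState:
    run_id: str
    stage: Stage = Stage.INIT      # always reflects current execution position
    attempts: int = 0
    outputs: dict = field(default_factory=dict)

    def advance(self, next_stage: Stage) -> None:
        # The only way the stage moves: an explicit, typed transition.
        self.stage = next_stage

state = RunState(run_id="run-001")
state.advance(Stage.EXTRACT)
```

The point is not the dataclass; it's that no operation runs without a state object that says where it is.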

T
Traceable

Every LLM call, API call, and stage transition is logged with all required fields. No black boxes.
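One way to enforce "all required fields, every time" is to make the logger itself reject incomplete records. A hedged sketch; the field names are examples, not a canonical schema:

```python
import time

REQUIRED_FIELDS = {"run_id", "stage", "event", "ts"}

def log_event(log: list, run_id: str, stage: str, event: str, **extra) -> dict:
    record = {"run_id": run_id, "stage": stage, "event": event,
              "ts": time.time(), **extra}
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        # An untraceable event is a bug, not a warning.
        raise ValueError(f"trace record missing fields: {missing}")
    log.append(record)
    return record

trace = []
log_event(trace, run_id="run-001", stage="extract", event="llm_call",
          model="example-model", latency_ms=812)
```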

A
Auditable

Any automated decision affecting an individual has a decision record. Law 25, OSFI, and EU AI Act compliance by construction.
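What a decision record can look like in code, assuming a frozen record with the subject, the decision, its basis, and the model version. The shape is a sketch, not a compliance template:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)          # immutable once written
class DecisionRecord:
    subject_id: str              # the individual the decision affects
    decision: str                # what was decided
    basis: str                   # inputs and rationale, for later review
    model_version: str           # which model produced it
    decided_at: str              # UTC timestamp

audit_log = []

def record_decision(subject_id: str, decision: str,
                    basis: str, model_version: str) -> DecisionRecord:
    rec = DecisionRecord(subject_id, decision, basis, model_version,
                         datetime.now(timezone.utc).isoformat())
    audit_log.append(asdict(rec))
    return rec

record_decision("user-42", "loan_declined",
                "debt ratio above threshold", "v3.1")
```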

T
Tolerant

Workflow resumes from step 6 after a crash at step 6. Not from step 1. Idempotency is not optional.
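The resume behavior can be sketched with a checkpoint that records completed steps, so a re-run skips them instead of redoing work. Step names here are hypothetical:

```python
def run_workflow(steps, checkpoint: dict) -> None:
    # checkpoint["done"] records completed step names; a re-run skips them.
    done = checkpoint.setdefault("done", [])
    for name, fn in steps:
        if name in done:
            continue  # idempotent resume: never redo completed work
        fn()
        done.append(name)

calls = []
steps = [("fetch", lambda: calls.append("fetch")),
         ("transform", lambda: calls.append("transform"))]

# Simulate a crash after "fetch" completed: the checkpoint already lists it.
cp = {"done": ["fetch"]}
run_workflow(steps, cp)
# Only "transform" executes on the resumed run.
```

In a real system the checkpoint lives in durable storage, not a dict; the skip-completed-work contract is the part that matters.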

E
Explicit

Every LLM output passes a validation gate before any write. Invalid output goes to the error path. Never silent continue.
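A validation gate in miniature: parse, check, and raise on failure so invalid output can never reach a write. A sketch, assuming JSON output with a known set of required keys:

```python
import json

class ValidationError(Exception):
    pass

def gate(raw_output: str, required_keys: set) -> dict:
    # Every LLM output passes through this gate before any write.
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        # Invalid output goes to the error path. Never silent continue.
        raise ValidationError(f"output is not valid JSON: {exc}") from exc
    missing = required_keys - data.keys()
    if missing:
        raise ValidationError(f"output missing keys: {missing}")
    return data

valid = gate('{"amount": 5, "currency": "CAD"}', {"amount", "currency"})
```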

What I Write About

Five lenses. One spine.

State Beats Intelligence.
Production Failure Taxonomy

Naming and classifying LLM failure modes with precision. These are always state failures in disguise.

STATE Framework Applied

Demonstrations of STATE pillars in real architecture decisions. Before/after comparisons.

Defensive Architecture

Design patterns that make AI systems tolerant by construction. Validation gates, locks, idempotency.

The Meta Layer

How I use AI to do the work most people do manually — including figuring out what to ask.

Regulated AI & Law 25

Quebec Law 25, OSFI, EU AI Act as architecture requirements, not compliance checkboxes.

How to Work With Me

Four ways in.

Free · Blog

Notes from production.

Failure taxonomies, defensive patterns, and architecture decisions for LLM systems that have to work. No tutorial content. Practitioner-only.

Read the blog →
Free · Diagnostic

STATE Readiness Score.

Score your LLM system across the five STATE pillars. Takes 8 minutes. Tells you exactly where your architecture is exposed.

Score your system →
Free · Workshop

No Stack Trace.

A live session on why LLM systems fail in production, and how to build the observability layer that pinpoints the cause. No slides-only theory.

Register for free →
Cohort · Program

Production-Grade LLM Architecture.

A hands-on cohort program. You build a stateful, observable, auditable LLM system from scratch. Designed for teams already in production.

Apply to the cohort →

Start with the diagnostic.

If you're running LLM pipelines in production and something feels off — it probably is. The STATE Score tells you what to fix first.