Summary
Fowler observes that traditional software abstractions (functions, classes, interfaces) are deterministic — the same input always produces the same output. LLMs introduce probabilistic abstraction: you describe intent in natural language and get a likely-correct result. This changes the economics of code generation but introduces a verification problem. Fowler argues the response isn’t to avoid LLMs but to pair them with stronger verification — tests, types, and formal methods.
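Fowler's pairing of probabilistic generation with deterministic verification can be sketched in a few lines. This is a toy illustration, not a real model call: `llm_generate` is a hypothetical stand-in that sometimes returns a subtly wrong candidate, and the property check is what makes the output trustworthy.

```python
import random

# Deterministic abstraction: the same input always yields the same output.
def dedupe(xs):
    return sorted(set(xs))

# Probabilistic abstraction (stubbed): a hypothetical LLM-style generator
# that returns a likely-correct candidate function.
def llm_generate(prompt, seed):
    rng = random.Random(seed)
    if rng.random() < 0.8:
        return lambda xs: sorted(set(xs))                 # correct candidate
    return lambda xs: sorted(set(xs), reverse=True)       # subtly wrong one

# Deterministic verification closes the gap: accept a candidate only if it
# agrees with the specification on sample inputs.
def verify(candidate):
    samples = [[3, 1, 2, 1], [], [5, 5, 5]]
    return all(candidate(s) == dedupe(s) for s in samples)

def generate_until_verified(prompt, max_tries=10):
    for seed in range(max_tries):
        candidate = llm_generate(prompt, seed)
        if verify(candidate):
            return candidate
    raise RuntimeError("no candidate passed verification")

f = generate_until_verified("dedupe and sort a list")
print(f([3, 1, 2, 1]))  # [1, 2, 3]
```

The point of the sketch is the shape of the loop: the generator is allowed to be wrong, because a deterministic check decides whether its output is accepted.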
What it means for our work
Fowler’s framing validates our architecture: probabilistic generation (the AI writes code) paired with deterministic verification (fuzz type-checks specs, ProB model-checks them, Refactory checks preconditions). The “new nature of abstraction” is precisely why we don’t take LLM output at face value: every generated artifact passes through a formal checking layer. The AI handles intent; the tools handle proof.
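The checking layer amounts to a gate of independent deterministic checkers that a generated artifact must pass in full. A minimal sketch of that gate, where `type_check`, `model_check`, and `precondition_check` are hypothetical stand-ins with toy rules, not real invocations of fuzz, ProB, or Refactory:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    checker: str
    ok: bool

# Hypothetical stand-ins for the real tools: each takes a generated spec
# and returns pass/fail deterministically. The string rules are toys.
def type_check(spec: str) -> CheckResult:
    return CheckResult("type-check", "::" in spec)          # must declare a type

def model_check(spec: str) -> CheckResult:
    return CheckResult("model-check", "invariant" in spec)  # must state an invariant

def precondition_check(spec: str) -> CheckResult:
    return CheckResult("precondition-check", "pre:" in spec)  # must declare preconditions

PIPELINE = [type_check, model_check, precondition_check]

def accept(spec: str) -> bool:
    """An artifact is trusted only if every checker in the gate passes."""
    return all(check(spec).ok for check in PIPELINE)

good = "counter :: INT\ninvariant counter >= 0\npre: counter < 100"
bad = "counter :: INT"  # no invariant, no precondition
print(accept(good), accept(bad))  # True False
```

The design choice the sketch captures: the checkers are independent and all-or-nothing, so a single failing layer is enough to reject an artifact regardless of how plausible the generation looked.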