Comment by chrisjj
20 days ago
> LLMs are probabilistic and non-deterministic
This is a polite way of saying unreliable and untrustworthy.
The problem facing enterprise is best understood by viewing LLMs as any other unreliable program.
> We’ve found that treating LLMs as suggestion engines rather than decision makers changes the architecture completely.
Figures. Look at the disruption LLM "suggestions" are inflicting on scientific journals, court cases and open source projects worldwide.
I don’t disagree with the underlying concern. In practice, “probabilistic” often does translate to unreliable when you put these systems in environments that expect reproducibility and accountability.
Where I think the framing matters is in how we respond architecturally. Treating LLMs as “just another unreliable program” is reasonable — but enterprises already have patterns for dealing with unreliable components: isolation, validation, gates, and clear ownership of side effects.
The problem we’re seeing is that LLMs are often dropped past those boundaries — allowed to directly author decisions or actions — which is why the downstream damage you mention (journals, courts, OSS) feels so chaotic.
The “suggestion engine” framing isn’t meant to excuse that behavior; it’s meant to reassert a familiar control model. Suggestions are cheap. Execution and publication are not. Once you draw that line explicitly, you can start asking the same questions enterprises always ask: who approves, what’s logged, and what happens when this is wrong?
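To make that line concrete, here is a minimal sketch of the control model described above, where the LLM only proposes, a separate party approves, and side effects are gated behind approval with an audit trail. All the names (`propose`, `approve`, `execute`, `audit_log`) are illustrative, not any particular framework's API:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    approved: bool = False

# Every transition is logged, so "what's logged?" has an answer.
audit_log = []

def propose(llm_output: str) -> Suggestion:
    # The LLM's output is captured as a suggestion; nothing executes yet.
    s = Suggestion(text=llm_output)
    audit_log.append(("proposed", s.text))
    return s

def approve(s: Suggestion, reviewer: str) -> Suggestion:
    # A human (or a deterministic validator) owns the decision,
    # so "who approves?" has an answer.
    s.approved = True
    audit_log.append(("approved", s.text, reviewer))
    return s

def execute(s: Suggestion) -> str:
    # Side effects happen only past the approval gate; an unapproved
    # suggestion fails loudly, so "what happens when it's wrong?" has an answer.
    if not s.approved:
        audit_log.append(("rejected_execution", s.text))
        raise PermissionError("suggestion not approved")
    audit_log.append(("executed", s.text))
    return f"ran: {s.text}"
```

The point is not the code itself but where the line is drawn: the model can write to `Suggestion.text` freely, but nothing it emits reaches `execute` without passing the gate.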
Without that separation, I agree — you’re effectively wiring an unreliable component straight into systems that assume trust, and the failure modes shouldn’t surprise anyone.