Comment by prateekdalal

18 days ago

I doubt ISO-9000 gets “replaced” so much as interpreted more strictly in the presence of LLMs. ISO-9000 isn’t about how work is done — it’s about whether processes are defined, repeatable, auditable, and improvable.

From that lens, LLMs actually create tension rather than an escape hatch. A system whose outputs can’t be reproduced, explained, or bounded makes it harder to demonstrate compliance, not easier. Saying “we use AI” doesn’t satisfy requirements around traceability, corrective action, or process control.

My guess is that ISO-style frameworks will push organizations toward explicitly classifying where LLMs are allowed to operate: as advisory inputs, as drafting aids, or as automation under defined controls — with clear ownership and validation steps around them.
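That kind of register is easy to imagine concretely. A minimal sketch, with entirely hypothetical names (`LlmRole`, `LlmProcessEntry`, and the example register entries are illustrations, not anything from an actual standard):

```python
from dataclasses import dataclass, field
from enum import Enum

class LlmRole(Enum):
    """Hypothetical tiers for where an LLM is allowed to operate."""
    ADVISORY = "advisory"    # informs a human decision, never acts
    DRAFTING = "drafting"    # produces drafts a named owner must approve
    AUTOMATED = "automated"  # acts directly, only under defined controls

@dataclass
class LlmProcessEntry:
    """One register entry: a process step, its LLM role, owner, and controls."""
    process: str
    role: LlmRole
    owner: str  # the accountable person for this step
    validation_steps: list = field(default_factory=list)

    def is_compliant(self) -> bool:
        # Automation with no validation step defined should fail review.
        return self.role != LlmRole.AUTOMATED or bool(self.validation_steps)

# Illustrative register entries
register = [
    LlmProcessEntry("triage incoming tickets", LlmRole.ADVISORY, "qa-lead"),
    LlmProcessEntry("draft release notes", LlmRole.DRAFTING, "docs-owner",
                    ["human sign-off before publishing"]),
]
```

The point of the structure isn't the code, it's that each LLM touchpoint becomes an auditable record with an owner, rather than an undifferentiated "we use AI."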

In other words, the pressure probably won’t be to loosen standards, but to reassert them: define where probabilistic components sit, what checks exist before outputs become authoritative, and how failures are detected and corrected. Without that structure, it’s hard to see how certification survives unchanged.
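The "checks before outputs become authoritative" part can be sketched as a simple gate: run every defined check, and record which ones failed so there's something for a corrective-action process to act on. The check names and functions here are made-up examples:

```python
def gate(output: str, checks) -> tuple[bool, list]:
    """Run each (name, check_fn) pair against an LLM output.

    Returns (passed, failed_check_names). Failures are returned by name
    so they can be logged and fed into a corrective-action process.
    """
    failures = [name for name, check in checks if not check(output)]
    return (not failures, failures)

# Illustrative checks; a real process would define its own.
checks = [
    ("non-empty", lambda s: bool(s.strip())),
    ("within length bound", lambda s: len(s) <= 500),
]

ok, failed = gate("Draft summary of the incident report.", checks)
```

Trivial as it is, this is the shape auditors can work with: a defined control, a detectable failure, and a record of what went wrong.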