Comment by prateekdalal
19 days ago
This resonates a lot, and I think your example actually captures the core failure mode really well.
What your PM asked for isn’t an “agentic pipeline” problem; it’s an organizational knowledge and accountability problem. LLMs are being used as a substitute for missing context, missing ownership, and missing validation paths.
In a system like that (30+ years, COBOL, interdependent routines), the hardest parts are not parsing code — they are understanding why things exist, which constraints were intentional, and which tradeoffs are still valid. None of that lives in the code, and no model can infer it reliably without human anchors.
This is where I have seen LLMs work better as assistive tools rather than autonomous agents: helping summarize, cluster, or surface patterns — but not being expected to produce “the” design document, especially when there is no stakeholder capable of validating it.
Without determinism around inputs, review, and ownership, the output might look impressive but it’s effectively unverifiable. That’s a risky place to be, especially for early-career engineers being asked to carry responsibility without authority.
I don’t think the problem is that LLMs are not powerful enough — it is that they are often being dropped into systems where the surrounding structure (governance, validation, incentives) simply isn’t there.