Comment by rustyhancock
23 days ago
The problem is that LLMs are exceptionally good at producing output that appears good.
That's what they've ultimately been tuned to do.
The way I see this play out is output that satisfies me but that I would not have produced myself.
Over a large project that adds up, and it's typically glaringly obvious to everyone but the person who was using the LLM.
My only guess as to why is that we're not conscious of most of what we do or why we do it. The threshold at which we'd intervene is higher than the effort it would have taken to do the right thing in the first place.
If these things don't apply to you, then I think you're coming up on a golden era.