Comment by tyingq
8 hours ago
My experience so far has been that they are somewhat good at troubleshooting code, patterns, etc. that exist in the publicly viewable sphere of material they were trained on, where common error messages and pitfalls are "google-able".
They are much worse at code, patterns, and APIs that were created locally, including things created by the same LLM that is now trying to fix a problem.
I think LLMs are also causing a decline in the amount of good troubleshooting information being published on the internet, so there will be less future content to scrape.