
Comment by rectang

4 days ago

ChatGPT is great for debugging common issues that have been written about extensively on the web (before the training cutoff). It synthesizes Stack Overflow content and greatly cuts down on the time it takes to figure out what's going on, compared with searching for discussions and reading them one by one.

(This IP rightly belongs to the Stack Overflow contributors and is licensed to Stack Overflow. It ought to be those parties who are exploiting it. I have mixed feelings about participating as a user.)

However, LLM output is also noisy because of hallucinations; it's just less noisy than web searching.

I imagine that an LLM could assess a codebase and find common mistakes, problematic function/API invocations, etc. However, there would also be a lot of false positives. Are people using LLMs that way?
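To make the idea concrete, here is a minimal sketch of what such a check might look like. Everything here is hypothetical: the `build_review_prompt` helper is invented for illustration, the actual model call is deliberately left out (you would send the prompt to whatever chat-completion API you use), and the prompt wording is just one guess at how to nudge the model toward fewer false positives.

```python
def build_review_prompt(filename: str, source: str) -> str:
    # Hypothetical helper: assemble a review prompt for an LLM-based
    # codebase check. The model call itself is out of scope here.
    return (
        "Review the following file for common mistakes and problematic "
        "function/API invocations. For each finding, give the line number, "
        "a severity (high/medium/low), and a one-line rationale. Prefer "
        "fewer, higher-confidence findings to limit false positives.\n\n"
        f"File: {filename}\n---\n{source}\n---\n"
    )

# Example usage: a snippet with a latent bug (IndexError if no comma).
snippet = "def f(x):\n    return x.split(',')[1]\n"
prompt = build_review_prompt("example.py", snippet)
print(prompt)
```

In practice you would iterate over the repository, chunk files to fit the model's context window, and then filter the model's findings (e.g., by severity or by cross-checking against a linter) to keep the false-positive rate tolerable.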