
Comment by dent9

1 month ago

> When you say “LLMs did not fully solve this problem” some people tend to respond with “you’re holding it wrong!”
>
> I think they’re sometimes right! Interacting with LLMs is a new skill, and it feels pretty weird if you’re used to writing software like it’s 2020. A more talented user of LLMs may have trivially solved this problem.

One thing I only recently figured out is that using ChatGPT via the web browser chat is massively different from using OpenAI's code-focused Codex model / interface. Once I switched to Codex (via the VS Code extension plus my own ChatGPT subscription), the quality of the answers I got improved dramatically.

So if you're trying to use an LLM to help with debugging, make sure you're using the right model! There are apparently massive differences between models of the same generation from the same company.