Comment by redox99

4 days ago

He's not wrong. If the model doesn't give you what you want, it's a worthless model. If the model is like the genie from the lamp, and gives you a shitty but technically correct answer, it's really bad.

> If the model doesn't give you what you want, it's a worthless model.

Yeah, if you’re into playing stupid mind games while not even being right.

If you stick to just voicing your needs, it’s fine. And I don’t think the TS/JS story shows a lack of reasoning that would be relevant for other use cases.

  • > Yeah, if you’re into playing stupid mind games while not even being right.

    If I ask questions outside of the things I already know about (probably pretty common, right?), that's not playing mind games. It's only a 'gotcha' question with the added context; otherwise it's just someone asking a question and getting back a Monkey's Paw answer: "aha! See, it's technically a subset of TS.."

    You might as well give it equal credit for code that doesn't compile correctly, since the author didn't explicitly ask.

  • As I mentioned, TS/JS was only one issue (semantic vs technical definition); the other is that it didn't know to question me, making its reasoning a waste of time. I could have asked something else ambiguous based on the context, not a TS/JS example, and it likely still would not have questioned me. In contrast, if you question a fact rather than a solution, I find LLMs are more accurate and will attempt to take you down a notch if you try to prove the fact wrong.