Comment by Tainnor
2 days ago
> I expect the answers of the AI to be really good. If it isn't, having explained my problem to the AI was simply a waste of time.
I find the worst part to be when it doesn't correct flaws in my assumptions.
For example, yesterday I asked it, "What is the difference between these two Datadog queries?" It replied with something semi-correct, but it missed the fundamental flaw: the first one wasn't a valid query at all because of unbalanced parens. It turned out that the two strings (plus another one) would get concatenated, and only then would the result be a valid query - roughly the situation sketched below.
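To make that concrete (the actual queries aren't reproduced here, so these fragments and the metric names are invented): the first fragment is unbalanced on its own, but concatenating all the pieces yields something balanced.

```python
# Hypothetical query fragments -- not the real ones from the anecdote.
fragments = [
    'sum:trace.http.request.hits{service:checkout AND (env:prod',  # unbalanced by itself
    ' OR env:staging',
    ')} by {resource_name}',
]

def paren_balance(s: str) -> int:
    """Open parens minus close parens; 0 means balanced."""
    return s.count("(") - s.count(")")

print(paren_balance(fragments[0]))        # 1 -> invalid as a standalone query
print(paren_balance("".join(fragments)))  # 0 -> balanced only after concatenation
```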
A simple "the first string is not a valid query because of a missing closing paren" would have saved a lot of time in trying to understand this, and I suspect that's what I would have received if I had prompted it with "what's the problem with this query" but LLMs are just too sycophantic to help with these things.
I have found that o3, specifically, will tell me relevant information that I didn't ask for; most other models won't.
I do have a custom instruction in place telling it to ask whether I'm aware of concepts related to my question - perhaps, in coming up with those questions, it notices when something relevant hasn't been mentioned.