Comment by aleph_minus_one

2 days ago

I think there are also other aspects:

- Some people simply ask a lot more questions than others, independent of whether they like or dislike AI. Others prefer to figure things out by themselves and turn to resources like Google or Stack Overflow only as a last resort, so the questions they do put to an AI will likely be harder, because they have already worked out the easy parts on their own.

- If I have to make the effort to explain to the AI in sufficient detail what I need (which I often have to do), I expect its answers to be really good. If they aren't, explaining my problem to the AI was simply a waste of time.

> I expect its answers to be really good. If they aren't, explaining my problem to the AI was simply a waste of time.

I find the worst part to be when it doesn't correct flaws in my assumptions.

For example, yesterday I asked it, "What is the difference between these two Datadog queries?" It replied with something semi-correct, but it didn't spot the fundamental flaw: the first one wasn't a valid query at all, because of unbalanced parens. It turns out the two strings (plus another one) get concatenated, and only then do they form a valid query.

A simple "the first string is not a valid query because of a missing closing paren" would have saved a lot of time in trying to understand this, and I suspect that's what I would have received if I had prompted it with "what's the problem with this query" but LLMs are just too sycophantic to help with these things.

  • I have found that o3, specifically, will tell me relevant information that I didn't ask for.

    But most other models don't.

    I do have a custom instruction in place that asks whether I'm aware of concepts related to my question. Perhaps, in coming up with those, it notices when something relevant hasn't been mentioned.