Comment by energy123
1 day ago
> If you have tried to use imaginary APIs, imaginary configuration and imaginary cli arguments, you know what I mean
I see this comment a lot, but I can't help feeling it's four weeks out of date. In my experience, the version of o1 released on 2024-12-17 very rarely hallucinates on code questions of basic to medium difficulty when it's given good context and a well-written prompt. If the context window is under 10k tokens, I have very high confidence the output will be correct. GPT-4o and o1-mini, on the other hand, hallucinate a lot, and I have learned to put little trust in their output.
o1 is way too slow to keep up with my flow of thinking, so it isn't much help in the scenario I'm describing.
How are you using LLMs? With o1, I've switched to spelling out what I want in a lot of detail and then asking it to one-shot the full file; with that approach the wait time has been acceptable.
I'm using it to orient myself when tackling something new. For instance, the other day I was writing a WebDriver client in shell and asked it something along the lines of "is there an HTTP endpoint in WebDriver to get the class name of an element?"
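(For the record, the W3C WebDriver spec does have such an endpoint: Get Element Attribute, GET /session/{session id}/element/{element id}/attribute/{name}, with "class" as the attribute name. A rough sketch of calling it, in C with libcurl rather than shell; the server address and the session/element IDs are made-up placeholders:)

```c
/* Rough sketch, not a full client: fetch an element's "class" attribute via the
 * W3C WebDriver endpoint GET /session/{session id}/element/{element id}/attribute/{name}.
 * The server address and the session/element IDs below are made-up placeholders.
 * Build with: cc webdriver_class.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Placeholder session and element IDs; a real client would get these from
     * POST /session and the element-finding endpoints. */
    const char *url =
        "http://localhost:4444/session/SESSION_ID/element/ELEMENT_ID/attribute/class";
    curl_easy_setopt(curl, CURLOPT_URL, url);

    /* With no write callback set, libcurl prints the JSON response to stdout. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```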
These are the sorts of questions I mostly ask: "What is the best practice for reading output from a device file in C?", "Is there a CLI tool to find dead TypeScript interface fields?"
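(For the device-file question, the common pattern I've seen is plain open(2) plus a read(2) loop that retries on EINTR and stops on EOF or error. A minimal sketch, with /dev/urandom standing in for whatever device you actually care about:)

```c
/* Minimal sketch of reading from a device file in C: open(2) the node, then loop
 * on read(2), retrying when the call is interrupted and stopping on EOF or error.
 * /dev/urandom is just a stand-in for the real device node.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char buf[4096];
    size_t total = 0;
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0) {
            if (errno == EINTR) continue;   /* interrupted by a signal: retry */
            perror("read");
            close(fd);
            return 1;
        }
        if (n == 0) break;                  /* EOF (not every device reports one) */
        total += (size_t)n;
        if (total >= 64) break;             /* stop after a small, bounded amount */
    }
    printf("read %zu bytes\n", total);
    close(fd);
    return 0;
}
```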