Comment by ozozozd
2 months ago
Exactly. It’s also easy to find yourself in the out-of-distribution territory. Just ask for some tree-sitter queries and watch Gemini 3, Opus 4.5 and GLM 5 hallucinate new directives.
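For context on what "hallucinating new directives" means: a well-formed tree-sitter query only uses the small set of documented predicates (`#eq?`, `#match?`, `#any-of?`); models often invent plausible-looking predicates that the query engine rejects. A sketch of a valid query, assuming a Python grammar:

```scheme
;; Capture Python functions whose names start with "test_".
;; #match? is a real tree-sitter predicate; a model going
;; out-of-distribution will emit made-up ones that fail to parse.
(function_definition
  name: (identifier) @func.name
  (#match? @func.name "^test_"))
```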
I think this could be the key difference in how people are experiencing the tools. Using Claude in industries full of proprietary code is a totally different experience from writing some React components, or framework code in C#, PHP or Java. It's shockingly good at the latter, but as you get into proprietary frameworks or newer problem domains it feels like AI in 2023 again, even with the benefit of the agentic harnesses and context augments like memory etc.
You’ve hit the nail on the head.
I characterise LLMs as black boxes filled with a dense pool of digital resources: with the correct prompt you can draw out a mix of those resources to produce an output.
But if the mix of resources you need isn't in there, it won't work. This isn't limited to text; the same applies to video models. LLMs work better for prompts asking for material that is widely available on the internet.
I think in the long term, if an LLM can't use a tool, people won't stop using LLMs, they'll stop using the tool.
We are building everything right now with LLM agents as a primary user in mind, and one of our principles is "hallucination-driven development": if LLMs regularly hallucinate an interface to your product, that's a desire path, and you should build that interface.
Any example of how I can get it to hallucinate?