Comment by ramon156
3 days ago
> the extension detects tokens and sends the code to an LLM
> The LLM generates documentation
so, not documentation? Why not write your own engine and fetch the official docs? e.g. docs.rs would do this wonderfully
The LLM approach is simpler and more flexible since it works with every library and language out of the box.
Looking up official documentation would require shipping sophisticated parsers for each language, plus a way to map tokens to their corresponding docs. I'd also need to maintain those mappings as libraries evolve and documentation moves. Some ecosystems make this easier (Rust with docs.rs), but others would be much harder (AWS documentation, for example).
I also want explanations visible directly in hover hints rather than linking out to external docs. So even with official documentation, I'd need to extract and present the relevant content anyway.
Beyond that, the LLM approach adapts to context in ways static docs can't. It generates explanations for code within the documentation you're reading, and it works on code that doesn't compile, like snippets with stubs or incomplete examples.
A hybrid approach could be interesting to explore in the future, where the LLM searches for and pulls in the official documentation; that would be more flexible, but also slower and more costly.
> Looking up official documentation would require shipping sophisticated parsers for each language,
You could just token match (use tree-sitter or something similar) and fetch the official docs. Keep it dead-simple so there's no way you can produce false positives (unlike what's happening right now, where hallucinations will creep in).
> It generates explanations
Again, I don't want that. It's not a feature, it's a limitation that right now gives you fake information.
> The LLM approach is simpler
For whom? The whole reason I want to consult docs is to get the official documentation on a given topic. How could I trust anything it says, and what’s to say any earned trust is durable over time?
> For whom?
What is the name for this kind of pointless, lazy, selective quoting that willfully misconstrues what's being quoted? The answer to this question is incredibly clear: for the developer who created this tool. If that makes you unhappy enough to malign them, then maybe you should just not use it?
Oh…I guess this isn’t the documentation-surfacing droids I’m looking for.
I could see getting actual docs being useful. Spitting out the delusions of an LLM is pretty well covered already, at least in my stack.
Very little "engine" required - use an existing language server, map tokens to URLs, done.