Comment by bfeynman
4 months ago
Kinda surprising to see what are basically just LLM wrappers still getting funded/interest. Neat idea obviously, and nice to have, but that's sort of it. It fundamentally misses the mark on what you can already do without AI (running tests on your actual doc code examples, etc.), and on the fact that any of the AI IDEs could ship part of this with a single switch that just appends a doc-friendly update to a commit. To think that soon we'll have AI agents reading AI-generated docs instead of code, to reduce it back down to code...
Ironically, the common (used to be common?) trope that foundation models are the picks and shovels and LLM wrappers have no value is probably backwards. ChatGPT is the most valuable AI product, and it's just a wrapper around an underlying LLM that is not 10x better than the rest of the models.
I would not define ChatGPT as a wrapper. Anything where you are doing actual training/learning and updating weights is, by most people's definition, not a wrapper. Just injecting stuff into the context or using RAG is a wrapper, because there are no weight updates anywhere.
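To make that distinction concrete, here's a minimal sketch of the "wrapper" pattern being described: retrieval plus prompt assembly around a frozen model, with no weight updates anywhere. `call_model` is a hypothetical stand-in for any hosted LLM API, and the word-overlap retriever is a toy assumption, not how real RAG systems rank documents.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank docs by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject retrieved context into the prompt. The model's weights are
    never touched; all the 'knowledge' lives in the prompt text."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for an API call to a frozen LLM.
    return f"[model response to {len(prompt)} chars of prompt]"

docs = [
    "Cats sleep a lot.",
    "Rust prevents data races.",
    "Tea has caffeine.",
]
answer = call_model(build_prompt("Does Rust prevent data races?", docs))
```

Everything app-specific happens outside the model, which is why it counts as a wrapper under the definition above.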
My view is that the model is not where the value is. GPT-4o is not at the top of most LLM leaderboards, but ChatGPT is at the top of the AI product rankings.
So far, there has been at most a few months' gap between state-of-the-art and commodity, if for no other reason than that other companies train on the output of the SOTA models.
If I were a VC, I would invest in a wrapper (Cursor, Harvey, this idea, etc.) over a foundation model every day of the week.