Comment by gronky_
3 days ago
I see it a bit differently - LLMs are an incredible innovation but it’s hard to do anything useful with them without the right wrapper.
A good wrapper has deep domain knowledge baked into it, combined with automation and expert use of the LLM.
Maybe it isn’t super innovative, but it’s a bit of an art form and it unlocks the utility of the underlying LLM.
Exactly.
To present a potential use case: there's a ridiculous and massive backlog in the Indian judicial system. LLMs can be let loose on the entire workflow: triage cases (simple, complicated, intractable, grouped by legal principles or parties), pull up related caselaw, provide recommendations, and throw more LLMs and more reasoning at unclear problems. Now you can't do this with just a desktop and ChatGPT; you need a systemic pipeline of LLM-driven workflows, but doing that unlocks potentially billions of dollars of value that is otherwise elusive.
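Roughly, that kind of pipeline decomposes into a handful of stages. Here's a minimal sketch of the shape of it, with hypothetical `classify_case`, `retrieve_related_caselaw`, and `recommend` helpers standing in for the actual LLM and retrieval calls (the keyword checks are just placeholders):

```python
from dataclasses import dataclass
from enum import Enum


class Complexity(Enum):
    SIMPLE = "simple"
    COMPLICATED = "complicated"
    INTRACTABLE = "intractable"


@dataclass
class Case:
    case_id: str
    summary: str


def classify_case(case: Case) -> Complexity:
    # Placeholder for an LLM triage call; a real pipeline would prompt a
    # model with the case summary and parse a constrained label back out.
    text = case.summary.lower()
    if "contract" in text:
        return Complexity.SIMPLE
    if "constitutional" in text:
        return Complexity.INTRACTABLE
    return Complexity.COMPLICATED


def retrieve_related_caselaw(case: Case) -> list[str]:
    # Placeholder for a retrieval step, e.g. a search over a citation database.
    return []


def recommend(case: Case, related: list[str]) -> str:
    # Placeholder for a drafting step; intractable cases would get escalated
    # to a heavier reasoning pass or a human reviewer.
    return f"{case.case_id}: needs review ({len(related)} related citations found)"


def triage(backlog: list[Case]) -> dict[Complexity, list[str]]:
    buckets: dict[Complexity, list[str]] = {c: [] for c in Complexity}
    for case in backlog:
        bucket = classify_case(case)
        related = retrieve_related_caselaw(case)
        buckets[bucket].append(recommend(case, related))
    return buckets


if __name__ == "__main__":
    demo = [
        Case("A-101", "Breach of contract over unpaid invoices"),
        Case("B-202", "Constitutional challenge to a state statute"),
    ]
    for bucket, items in triage(demo).items():
        print(bucket.value, items)
```

None of this is the clever part; the clever part is the domain knowledge that decides what the buckets and escalation rules should be.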
>doing that unlocks potentially billions of dollars of value that is otherwise elusive
What's more, it unlocks potentially new additions to the 206 legal cases where generative AI produced hallucinated (fake) content.
https://www.damiencharlotin.com/hallucinations/
>pull up related caselaw
Or just make some up...
At the token layer an LLM can make things up, but not as part of a structured pipeline that enforces the invariant that every suggestion is a valid entity in the database.
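Something like this, at its simplest (illustrative identifiers; in practice the lookup would be a query against the real citation database rather than an in-memory set):

```python
# Known-good citation identifiers loaded from the authoritative database.
# The values here are purely illustrative.
KNOWN_CITATIONS = {"CASE-001", "CASE-002", "CASE-003"}


def validate_citations(llm_suggestions: list[str]) -> tuple[list[str], list[str]]:
    """Split LLM-proposed citations into accepted (present in the database)
    and rejected (absent, i.e. possibly hallucinated)."""
    accepted = [c for c in llm_suggestions if c in KNOWN_CITATIONS]
    rejected = [c for c in llm_suggestions if c not in KNOWN_CITATIONS]
    return accepted, rejected


if __name__ == "__main__":
    suggestions = ["CASE-001", "CASE-999"]  # second one is made up
    accepted, rejected = validate_citations(suggestions)
    print("accepted:", accepted)
    print("rejected (flag for human review):", rejected)
```

Anything the model proposes that isn't in the database never reaches the output; it gets flagged instead of cited.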
Can Google Search hallucinate webpages?
How is something that can't admit it doesn't know, and that hallucinates, a good innovation?
Modern LLMs frequently do state that they "don't know", for what it's worth. Like everything, it highly depends on the question.