Comment by panarky
17 days ago
This is a big aha moment for me.
If Gemini can do semantic chunking at the same time as extraction, all for so cheap and with nearly perfect accuracy, and without brittle prompting incantation magic, this is huge.
Could it do exactly the same with a web page? Would this replace something like beautiful soup?
I don't know exactly how or what it's doing behind the scenes, but I've been massively impressed with the results Gemini's Deep Research mode has generated, including not only traditional LLM freeform and bulleted output but also tabular data that had to come from somewhere. I haven't tried cross-checking for accuracy, but the reports do come with linked sources; my current estimation is that they're at least as good as the first draft a typical analyst at a consulting firm would produce.
If I used Gemini 2.0 for extraction and chunking to feed into a RAG that I maintain on my local network, then what sort of locally-hosted LLM would I need to gain meaningful insights from my knowledge base? Would a 13B parameter model be sufficient?
Your local model has little more to do but stitch the already meaningful pieces together.
The pre-step, chunking and semantic understanding, is all that counts.
Do you get meaningful insights with current RAG solutions?
Yes. For example, to create AI agent 'assistants' that can leverage a local RAG in order to assist with specialist content creation or operational activities.
Small point, but is it doing semantic chunking, or loading the entire PDF into context? I've heard mixed results on semantic chunking.
It loads the entire PDF into context, but then it would be my job to chunk the output for RAG, and just doing arbitrary fixed-size blocks, or breaking on sentences or paragraphs is not ideal.
So I can ask Gemini to return chunks of variable size, where each chunk is one complete idea or concept, without arbitrarily chopping a logical semantic segment into multiple chunks.
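For what it's worth, here's roughly how I'd phrase that request with the google-generativeai Python SDK; the model name, the prompt wording, and the JSON output shape are just my assumptions for illustration, not anything Gemini guarantees:

```python
# Sketch: ask Gemini to extract a PDF and return variable-size semantic chunks.
# Model name, prompt wording, and output schema are assumptions, not a spec.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

pdf = genai.upload_file("report.pdf")  # upload the document via the File API

prompt = (
    "Extract the text of this PDF and split it into chunks for a RAG index. "
    "Each chunk must contain exactly one complete idea or concept, so chunk "
    "sizes will vary. Never split a logical semantic segment across chunks. "
    "Return a JSON array of objects with 'heading' and 'text' fields."
)

response = model.generate_content(
    [pdf, prompt],
    generation_config={"response_mime_type": "application/json"},
)
chunks = json.loads(response.text)

for chunk in chunks:
    print(chunk["heading"], "->", len(chunk["text"]), "chars")
```

Each returned chunk then goes into the embedding/indexing step as one unit instead of a fixed-size block.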
Fixed-size chunking is holding back a bunch of RAG projects on my backlog. Will be extremely pleased if this semantic chunking solves the issue. Currently we're getting around a 78-82% success rate on fixed-size chunked RAG, which is far too low. Users assume zero results on a RAG search equates to zero results in the source data.
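For anyone unfamiliar, "fixed-size chunked" here just means the usual character window with overlap, roughly like the generic sketch below (illustrative only, not our exact pipeline), and it's exactly what slices one idea across two chunks:

```python
# Generic fixed-size chunker with overlap (illustrative only): it cuts on a
# character count, so a single concept routinely gets split across chunks.
def fixed_size_chunks(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    chunks = []
    start = 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks
```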
I wish we had a local model for semantic chunking. I've been wanting one for ages, but haven't had the time to make a dataset and fine-tune for that task =/.
It's cheap now because Google is subsidizing it, no?
Spoiler: every model is deeply, deeply subsidized. At least Google's is subsidized by a real business with revenue, not VCs staring at the clock.
It's cheap because it's a Flash model: far smaller, much less compute for inference, and it runs on TPUs instead of GPUs.