
Comment by tossandthrow

3 months ago

If you think about chunking as "take x characters", then using LLMs is a poor idea.

But syntactic chunking also works really poorly for any serious application, as you lose basically all context.

Semantic chunking, however, is a task you absolutely would use LLMs for.

If by LLM you mean embeddings, I agree. Though you can often get away with using much smaller models for that.

I was talking about people who actually make a call to a completion endpoint and then have the LLM repeat the input text token for token just to get the split.

  • How do you do semantic chunking using embeddings?

    And yes, I know perfectly well what you are talking about. And yes, that is a perfectly good strategy for chunking large texts so you can index them.

    It does not sound like you are familiar with chunking and its current issues.
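
For context on the question above: one common way to do semantic chunking with embeddings is to embed each sentence and start a new chunk wherever the similarity between adjacent sentences drops, i.e. at a likely topic boundary. A minimal sketch in Python, assuming the sentence-transformers library (the model name, threshold, and function name here are illustrative choices, not anything prescribed in the thread):

    from sentence_transformers import SentenceTransformer
    import numpy as np

    # A small embedding model, in line with the point above about smaller models.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    def semantic_chunks(sentences, threshold=0.5):
        # Embed every sentence; unit-normalized, so a dot product is cosine similarity.
        if not sentences:
            return []
        embeddings = model.encode(sentences, normalize_embeddings=True)
        chunks, current = [], [sentences[0]]
        for prev_emb, cur_emb, sentence in zip(embeddings, embeddings[1:], sentences[1:]):
            if float(np.dot(prev_emb, cur_emb)) < threshold:
                # Similarity dipped between adjacent sentences: likely topic boundary.
                chunks.append(current)
                current = []
            current.append(sentence)
        chunks.append(current)
        return chunks

With normalized embeddings the threshold is a cosine similarity, so it has to be tuned per corpus; in practice people often also smooth over a window of sentences rather than comparing single adjacent pairs.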