
Comment by leobg

2 months ago

If by LLM you mean embeddings, I agree. Though you can often get away with using much smaller models for that.

I was talking about people who actually call a completion endpoint and have the LLM repeat the input text token by token just to get the split.

How do you do semantic chunking using embeddings?
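(For reference, one common embedding-based approach: embed each sentence, then start a new chunk wherever the cosine similarity between neighboring sentences drops below a threshold. A minimal sketch in Python, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model, both illustrative choices rather than anything prescribed in this thread:)

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def semantic_chunks(sentences, threshold=0.6):
    """Split a list of sentences into chunks wherever the cosine
    similarity between adjacent sentences falls below `threshold`."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    # normalize_embeddings=True makes a plain dot product equal to
    # cosine similarity
    emb = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(np.dot(emb[i - 1], emb[i]))
        if similarity < threshold:
            # Topic shift detected: close the current chunk
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks
```

The threshold is the main knob; in practice people often compare windows of several sentences rather than single neighbors, or pick the cutoff from a percentile of the observed similarity drops instead of a fixed value.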

And yes, I know perfectly well what you are talking about. And yes, that is a perfectly good strategy for chunking large texts so you can index them.

It does not sound like you are familiar with chunking and its current issues.