Comment by mfkhalil

2 days ago

Yeah, LLMs were the easiest way to get a proof of concept running, but replacing them with a specialized distilled model/classifier should hopefully make it way quicker.

As for the results, it's tough because we've made the deliberate decision to have no control over the reranking. What that means is that if your criterion is "written by a woman", for instance, then every result that meets it gets ranked equally at the top. In all the engines I've built for myself, I include a relevance criterion that's weighted by how much I care that the result is exactly what I'm looking for. It's probably important to make that clearer to the end user.
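To illustrate the tradeoff, here's a minimal sketch (function and parameter names are hypothetical, not from the actual engine): with a purely binary criterion every match ties at the top, whereas blending in a weighted relevance score orders matches by fit.

```python
def rank_binary(results, meets_criteria):
    # Every result that meets the criterion gets the same score,
    # so all matches tie at the top (stable sort preserves input order).
    return sorted(results, key=meets_criteria, reverse=True)


def rank_weighted(results, meets_criteria, relevance, w=0.5):
    # Blend the hard criterion match with a graded relevance score;
    # w controls how much "exactly what I'm looking for" matters.
    return sorted(
        results,
        key=lambda r: (1 - w) * meets_criteria(r) + w * relevance(r),
        reverse=True,
    )


# Toy example: "a" and "b" meet the criterion, "b" is more relevant.
results = ["a", "b", "c"]
crit = lambda r: 1 if r in ("a", "b") else 0
rel = {"a": 0.2, "b": 0.9, "c": 0.5}.__getitem__

print(rank_binary(results, crit))         # matches tie at the top
print(rank_weighted(results, crit, rel))  # "b" outranks "a"
```

With the binary ranker, "a" and "b" are indistinguishable; with the weighted one, the more relevant match comes first.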