Comment by zaptheimpaler

2 days ago

I would love to set this up; I really want all my chats on one platform. The problem is, the AI companies seem to want the opposite.

My main concern is how well all the extra features work compared to the native versions - web search, RAG on a document, deep research, adding images, voice chats. My understanding is the model providers don't offer APIs for any of this, so you have your own implementations of all the extra stuff, right? Usually I find the open source versions of these features aren't up to par with the corporate versions and lag behind in development.

Yep, we have our own implementations! We've spent a lot of time on them, and in our internal benchmarks they compare pretty favorably to the native versions.

RAG specifically is our specialty - we've done a ton of optimizations there (hybrid search, document age-based weighting, letting the LLM read more from interesting docs and less from irrelevant ones, etc.), and in internal blind testing we outperform the implementation inside ChatGPT quite substantially.
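
To give a rough idea of what that scoring looks like, here's a purely illustrative sketch - the weights, half-life, and field names are made up for the example, not our actual code:

```python
# Illustrative only: weights, half-life, and field names are assumptions, not real parameters.
KEYWORD_WEIGHT = 0.4
VECTOR_WEIGHT = 0.6
AGE_HALF_LIFE_DAYS = 180.0

def age_decay(age_days: float) -> float:
    """Down-weight older documents with a simple half-life curve."""
    return 0.5 ** (age_days / AGE_HALF_LIFE_DAYS)

def hybrid_score(bm25: float, cosine: float, age_days: float) -> float:
    """Blend a keyword score (e.g. BM25) with vector similarity, then apply age weighting."""
    return (KEYWORD_WEIGHT * bm25 + VECTOR_WEIGHT * cosine) * age_decay(age_days)

def rank_chunks(chunks: list[dict]) -> list[dict]:
    """Sort retrieved chunks by the combined score, best first."""
    return sorted(
        chunks,
        key=lambda c: hybrid_score(c["bm25"], c["cosine"], c["age_days"]),
        reverse=True,
    )

def token_budget_per_doc(ranked: list[dict], total_budget: int) -> dict[str, int]:
    """Split the context budget in proportion to score, so the LLM reads
    more from interesting docs and less from irrelevant ones."""
    scores = [hybrid_score(c["bm25"], c["cosine"], c["age_days"]) for c in ranked]
    total = sum(scores) or 1.0
    return {c["doc_id"]: int(total_budget * s / total) for c, s in zip(ranked, scores)}

if __name__ == "__main__":
    chunks = [
        {"doc_id": "roadmap-2024", "bm25": 0.7, "cosine": 0.82, "age_days": 30},
        {"doc_id": "legacy-spec", "bm25": 0.9, "cosine": 0.55, "age_days": 900},
    ]
    ranked = rank_chunks(chunks)
    print([c["doc_id"] for c in ranked])
    print(token_budget_per_doc(ranked, total_budget=4000))
```

The gist is that ranking and the per-document token budget come from the same blended score, so fresher, more relevant docs get more of the context window.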

Curious what you find if you compare them head to head though!