Comment by philipkglass

3 days ago

How does this work? I thought it was probably powered by embeddings and maybe some more traditional search code, but I checked out the linked github repo and I didn't see any model/inference code. The public code is a wrapper that communicates with your commercial API?

Some searches work like magic and others seem to veer off target a lot. For example, "sculpture" and "watercolor" worked just about how I'd expect. "Lamb" showed lambs and sheep. But "otter" showed a random selection of animals.

It is powered by Mixedbread Search, which is powered by our model Omni. Omni is multimodal (text, video, audio, images) and multi-vector, which helps us capture more information.
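(The details of Omni's scoring aren't given here, so this is only a sketch of what "multi-vector" retrieval commonly means: instead of one embedding per document, each document and query gets several vectors, and a late-interaction score such as MaxSim sums, for each query vector, its best match among the document's vectors. The function name and shapes below are illustrative assumptions, not Mixedbread's API.)

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Hypothetical ColBERT-style late-interaction score.

    query_vecs: (nq, d) array, one row per query token/patch, L2-normalized.
    doc_vecs:   (nd, d) array, one row per document token/patch, L2-normalized.
    """
    sims = query_vecs @ doc_vecs.T        # (nq, nd) cosine similarities
    # For each query vector, take its single best match in the document,
    # then sum those maxima into one relevance score.
    return float(sims.max(axis=1).sum())

# Toy usage: a query vector aligned with the document scores 1, an
# orthogonal one scores 0, so the total MaxSim score is 1.0.
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[1.0, 0.0]])
score = maxsim_score(q, d)
```

A single-vector model would have to compress everything into one embedding before comparison; keeping multiple vectors per item and matching late is one way a model can "capture more information."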

The search is in beta and we're improving the model. Thank you for reporting the queries that are not working well.

Edit: Re the otter, I just checked and I did not find otters in the dataset. We should not return any results when the model is not confident, to reduce confusion.
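(The "don't return anything when unsure" idea is usually just a score cutoff on the ranked results. The threshold value and function below are hypothetical, not anything from the linked repo.)

```python
def filter_confident(results: list[tuple[str, float]],
                     threshold: float = 0.3) -> list[tuple[str, float]]:
    """Drop results whose relevance score falls below a cutoff.

    results: list of (item_id, score) pairs, highest score first.
    threshold: assumed minimum score; in practice it would be tuned
    on held-out queries to trade recall against confusing matches.
    """
    return [(item, s) for item, s in results if s >= threshold]

# Toy usage: weak matches like a random animal for "otter" get dropped.
ranked = [("lamb_painting", 0.91), ("sheep_etching", 0.78), ("random_bird", 0.12)]
kept = filter_confident(ranked)
```

An empty list back from the filter would then render as "no results" rather than a grab bag of loosely related items.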

  • There's at least a little bit of otter in the data. The one relevant result I saw was "Plate 40: Two Otters and a Beaver" by Joris Hoefnagel.

    I also expected semantic search to return similar results for "fireworks" and "pyrotechnics," since the latter is a less common synonym for the former. But I got many results for fireworks and just one result for pyrotechnics.

    This is still impressive. My impulse is to poke at it with harder cases to try to reason about how it could be implemented. Thanks for your Show HN and for replying to me!

    • If you find more such cases, please feel free to send them over to aamir at the domain name of the Show HN. I would love to see those cases and see how we can improve on them. Thank you so much for the feedback.