Comment by VHRanger

3 years ago

Performance in terms of model quality would be the same on either platform.

The fast-se library uses C++ code to average word embeddings into sentence embeddings, so it would be similarly fast on Apple Silicon as on x86, possibly faster.
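The averaging approach itself is simple enough to sketch in a few lines. Below is a minimal illustration of the idea (not fast-se's actual implementation); the toy `word_vectors` dict is a stand-in for a pretrained model like word2vec or GloVe:

```python
import numpy as np

# Hypothetical toy word vectors; in practice these come from a
# pretrained embedding model (word2vec, GloVe, fastText, ...).
word_vectors = {
    "the": np.array([0.1, 0.3, -0.2]),
    "cat": np.array([0.5, -0.1, 0.4]),
    "sat": np.array([-0.3, 0.2, 0.1]),
}

def sentence_embedding(tokens, vectors):
    """Average the vectors of in-vocabulary tokens into one sentence vector."""
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        # No known tokens: fall back to a zero vector of the right size.
        dim = next(iter(vectors.values())).shape
        return np.zeros(dim)
    return np.mean(known, axis=0)

emb = sentence_embedding(["the", "cat", "sat"], word_vectors)
print(emb.shape)  # (3,)
```

Because it's just a vectorized mean, the heavy lifting is plain BLAS-style arithmetic, which is why it runs well on any architecture with a decent NumPy/C++ backend.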

For the SentenceTransformers library models I'm not sure, but I think they would run on the CPU on an M1/M2 machine.