Comment by sa-code
12 hours ago
Qdrant is also a good default choice, since it can run in-memory for development, on local disk for small deployments, and as a cluster for "web scale" workloads.
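For example (a minimal sketch with the Python qdrant-client; the collection name, vector size, and data are just illustrative), the same client code covers both the in-memory and the server-backed case:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

# Local dev / unit tests: everything lives in process memory, no server needed.
client = QdrantClient(":memory:")
# Small deployment: same client, pointed at a node persisting to disk.
# client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"title": "hello"})],
)
hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)
```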
As a principal eng, side-stepping a migration and having a good local dev experience is too good of a deal to pass up.
That being said, turbopuffer looks interesting. I will check it out. Hopefully their local dev experience is good.
Qdrant is one of the few vendors I actively steer people away from. Look at the GitHub issues, look at what their CEO says, look at their fake “advancements” that they pay for publicity on…
The number of people I know who’ve had unrecoverable shard failures on Qdrant is too high to take it seriously.
I’m curious about this. Could you please point to some things the CEO has said, or reports of shard failures?
The bit about paying for publicity doesn’t bother me.
Edit: I haven’t found anything egregious that the CEO has said, or anything really sketchy. The shard-failure warnings look serious, but the issues look closed.
https://github.com/qdrant/qdrant/issues/6025
https://github.com/qdrant/qdrant/issues/4939
https://x.com/nils_reimers/status/1809334134088622217?s=46
https://x.com/generall931/status/1809303448837582850?s=46
There used to be a benchmarking issue with a founder that was particularly egregious but I can’t find it anymore.
The sharding and consensus issues were from around a year and a half ago, so maybe it’s gotten better.
There are just so many options in this space that I don’t know why you’d go with one of the least correct vendors (whether the correctness problems amount to deception is a different question that I can’t answer).
For local dev + testing, we recommend just hitting the production turbopuffer service directly, but with a separate test org/API key: https://turbopuffer.com/docs/testing
Works well for the vast majority of our customers (although we get the very occasional complaint about wanting a dev environment that works offline). The dataset sizes for local dev are usually so small that the cost rounds to free.
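Roughly what that looks like in a test suite, as a sketch: the tpuf.api_key / tpuf.Namespace / upsert / query / delete_all calls below assume the Python client's module-level API (check the current docs for the exact interface), and the env var and namespace names are made up.

```python
import os
import uuid

import turbopuffer as tpuf

# A key scoped to a separate test org, never the production key.
tpuf.api_key = os.environ["TURBOPUFFER_TEST_API_KEY"]


def test_upsert_and_query_roundtrip():
    # One namespace per test run so parallel CI jobs don't collide.
    ns = tpuf.Namespace(f"ci-{uuid.uuid4().hex[:8]}")
    try:
        ns.upsert(ids=[1, 2], vectors=[[0.1, 0.1], [0.2, 0.2]])
        results = ns.query(vector=[0.1, 0.1], top_k=1)
        assert len(results) == 1
    finally:
        ns.delete_all()  # dev/test datasets are tiny, so the cost rounds to free
```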
> although we get the very occasional complaint about wanting a dev environment that works offline
It's only occasional because the people who care about dev environments that work offline are most likely to just skip you and move on.
For actual developer experience, as well as a number of use cases like customers with security and privacy concerns, being able to host locally is essential.
Fair enough if you don't care about those segments of the market, but don't confuse a small number of people asking about it with a small number of people wanting it.
Speaking as someone who works for a competitor: they are probably right to hold off on that segment for a while. Supporting both cloud and local deployments is somewhere between 20% harder and 300% harder depending on the day.
I'm watching them with excitement. We all learn from each other. There's so much to do.
Can confirm. With a setup that works offline, one can
- start small on a laptop. Going through procurement at companies is a pain
- test things in CI reliably. Outages don’t break builds
- transition from laptop scale to web scale easily with the same API with just a different backend
Otherwise it’s really hard to justify not using S3 vectors here
The current dev experience is to start with faiss for PoCs, move to pgvector, and then graduate to something heavy-duty like one of the Lucene wrappers.
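A PoC at that first stage is only a few lines (a sketch with faiss; the dimension and data are arbitrary):

```python
import numpy as np
import faiss

d = 128                                                   # embedding dimension
corpus = np.random.random((10_000, d)).astype("float32")  # stand-in embeddings
queries = np.random.random((5, d)).astype("float32")

index = faiss.IndexFlatL2(d)     # exact brute-force search, plenty for a PoC
index.add(corpus)
distances, ids = index.search(queries, 10)  # top-10 neighbours per query
```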
Yep, we're well aware of the selection bias effects in product feedback. As we grow we're thinking about how to make our product more accessible to small orgs / hobby projects. Introducing a local dev environment may be part of that.
Note that we already have an in-your-own-VPC offering for large orgs with strict security/privacy/regulatory controls.
That’s not local though
Having a local simulator (DynamoDB, Spanner, others) helps me a lot for offline/local development and CI. When a vendor doesn't offer this, I often end up mocking it out (one way or another) and have to wait for integration or e2e tests for feedback that could have been pushed further left.
In many CI environments unit tests don't have network access; it's not purely a price consideration.
(not a turbopuffer customer but I have been looking at it)
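When there's no simulator, that mocking usually means defining the narrow interface my code needs and backing it with an in-memory fake in unit tests (a hypothetical sketch; the interface, class, and method names are made up):

```python
from typing import Protocol

import numpy as np


class VectorIndex(Protocol):
    """The narrow slice of the vendor API the application actually touches."""
    def upsert(self, ids: list[int], vectors: list[list[float]]) -> None: ...
    def query(self, vector: list[float], top_k: int) -> list[int]: ...


class InMemoryIndex:
    """Unit-test stand-in: brute-force cosine similarity over a dict."""

    def __init__(self) -> None:
        self._vecs: dict[int, np.ndarray] = {}

    def upsert(self, ids: list[int], vectors: list[list[float]]) -> None:
        for i, v in zip(ids, vectors):
            self._vecs[i] = np.asarray(v, dtype="float32")

    def query(self, vector: list[float], top_k: int) -> list[int]:
        q = np.asarray(vector, dtype="float32")

        def score(v: np.ndarray) -> float:
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))

        ranked = sorted(self._vecs, key=lambda i: score(self._vecs[i]), reverse=True)
        return ranked[:top_k]
```

The real client gets wired in behind the same interface in production; the catch is that the actual service's consistency and failure behaviour only get exercised in integration/e2e, which is exactly the feedback that ends up pushed to the right.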
> In many CI environments unit tests don't have network access; it's not purely a price consideration.
I've never seen a hard block on network access (how do you install packages/pull images?), but I am sympathetic to wanting to enforce that unit tests run quickly by minimizing/eliminating RTT to networked services.
We've considered the possibility of a local simulator before. Let me know if it winds up being a blocker for your use case.
I should have clarified, by local dev and testing I did in fact mean offline usage.
Without that it’s unfortunately a non-starter.
So I can note this down on our roadmap: what's the root of your requirement here? Supporting local dev without internet (airplanes, coffee shops, etc.)? Unit test speed? Something else?