Comment by necubi
3 days ago
One downside is that you're paying for the GPU whether you're fully using it or not. It takes big queries to saturate a GH200, and if you're only using 10% of the capacity of the GPU it doesn't really matter that it's 10x faster.
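To make the utilization point concrete, here's a back-of-the-envelope sketch; the hourly prices are made-up illustrative numbers, and the 10x speedup is just the vendor's claim taken at face value:

```python
# Back-of-the-envelope cost model: an always-on GPU instance vs. CPU instances.
# All numbers below are illustrative assumptions, not measurements.

GPU_INSTANCE_COST_PER_HOUR = 8.00   # assumed hourly price of a GH200-class box
CPU_INSTANCE_COST_PER_HOUR = 4.00   # assumed hourly price of a comparable CPU box
GPU_SPEEDUP = 10                    # the claimed 10x speedup, taken at face value

def cost_per_cpu_equivalent_hour(utilization: float) -> tuple[float, float]:
    """Cost of one hour's worth of CPU-equivalent query work on each platform.

    `utilization` is the fraction of the GPU instance's capacity that your
    workload actually keeps busy; the instance is billed regardless.
    """
    # On CPUs you can (roughly) scale instances to the work, so cost tracks usage.
    cpu_cost = CPU_INSTANCE_COST_PER_HOUR
    # On the GPU, the billed hour only completes `utilization * GPU_SPEEDUP`
    # hours of CPU-equivalent work, so the idle time is amortized over that work.
    gpu_cost = GPU_INSTANCE_COST_PER_HOUR / (GPU_SPEEDUP * utilization)
    return cpu_cost, gpu_cost

for u in (1.0, 0.5, 0.1, 0.01):
    cpu, gpu = cost_per_cpu_equivalent_hour(u)
    print(f"utilization={u:>4.0%}  cpu=${cpu:.2f}/h  gpu=${gpu:.2f}/h")
```

Under these assumed prices the GPU wins easily at full utilization, but drops behind the CPU cluster somewhere around 10-20% utilization, which is the point being made above.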
In a typical company you'll have jobs, some scheduled, some ad-hoc, at a range of sizes. Most of them won't be cost-effective to run on a GPU instance, so you need a scheduling layer that estimates the size of each job and routes it to the appropriate hardware. But what if a job is too big for your GPU machine? Then you either have to scale up your GPU cluster or retry it on your more flexible CPU cluster.
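A minimal sketch of the kind of routing layer described above; the size estimator, thresholds, and pool names are all hypothetical placeholders, not anything from the product in question:

```python
# Hypothetical size-based job router: small jobs go to the CPU pool, mid-size
# jobs to the GPU pool, and jobs that overflow the GPU machine's memory budget
# fall back to the (more elastic) CPU cluster. Names and thresholds are made up.

from dataclasses import dataclass

@dataclass
class Job:
    sql: str
    estimated_input_bytes: int  # produced by some upstream size estimator

GPU_WORTHWHILE_BYTES = 100 * 2**30     # below this, the GPU isn't cost-effective
GPU_MEMORY_BUDGET_BYTES = 500 * 2**30  # above this, the job won't fit the GPU box

def route(job: Job) -> str:
    """Pick an execution backend for a job based on its estimated size."""
    if job.estimated_input_bytes < GPU_WORTHWHILE_BYTES:
        return "cpu-pool"   # too small to justify occupying the GPU instance
    if job.estimated_input_bytes > GPU_MEMORY_BUDGET_BYTES:
        return "cpu-pool"   # too big for the GPU machine; retry on CPUs
    return "gpu-pool"       # the narrow band where the GPU pays off

print(route(Job("SELECT ...", 10 * 2**30)))    # -> cpu-pool (small ad-hoc query)
print(route(Job("SELECT ...", 200 * 2**30)))   # -> gpu-pool (the sweet spot)
print(route(Job("SELECT ...", 800 * 2**30)))   # -> cpu-pool (overflow fallback)
```

Even in this toy version, the router depends on a size estimate that has to exist before the query runs, which is itself a hard problem.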
And this all assumes that your jobs can be transparently run across different executors from a correctness and performance standpoint.
There are niches where this makes sense (we run the same 100TB job every day and we need to speed it up), as well as large and sophisticated internal infra teams that can manage a heterogeneous cluster + scheduling systems, but it's not mass-market.
The website claims it’s 10x cheaper (“10x faster on same hardware costs”) and implements SQL execution.
I don’t understand why GPU saturation is relevant. If it’s 10x cheaper, it doesn’t matter if you only use 0.1% of the GPU, right?
Correctness shouldn’t be a concern if it implements SQL.
Curious for some more details, maybe there’s something I’m missing.
GPU databases can run a small subset of production workloads, and only under a narrow set of conditions.
There are plenty of GPU databases out there: MapD/OmniSci/HeavyDB, AresDB, BlazingSQL, Kinetica, BrytlytDB, SQream, Alenka, ... Some of them are very niche, and others are not even usable.