Comment by romanfll
2 days ago
The shift from Explicit Reduction to GNNs/Embeddings is where the high-end is going in my view… We hit this exact fork in the road with our forecasting/anomaly detection engine (DriftMind). We considered heavy embedding models but realised that for edge streams, we couldn't afford the inference cost or the latency of round-tripping to a GPU server. It feels like the domain is splitting into 'Massive Server-Side Intelligence' (I am a big fan of Graphistry) and 'Hyper-Optimized Edge Intelligence' (where we are focused).
Interesting, mind sharing the context here?
My experience has been that as workloads get heavier, it's "cheaper" to push them to an accelerated, dedicated inference server. This doesn't always work, though; e.g., there's a world of difference between realtime video on phones vs an interactive chat app.
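Roughly what I mean by that trade-off, as a back-of-envelope (all numbers below are made up for illustration, not measurements from either system):

```python
# Illustrative edge-vs-server latency comparison. Every number here is a
# hypothetical placeholder, not a benchmark from DriftMind or Graphistry.

def edge_latency_ms(model_ms_on_device: float) -> float:
    """Latency when the model runs locally: just the on-device compute."""
    return model_ms_on_device

def server_latency_ms(network_rtt_ms: float, queue_ms: float, gpu_ms: float) -> float:
    """Latency when pushed to a dedicated inference server: RTT + queueing + GPU compute."""
    return network_rtt_ms + queue_ms + gpu_ms

if __name__ == "__main__":
    # A small model is fine on-device; a heavy embedding model usually is not.
    for name, device_ms, gpu_ms in [("small anomaly scorer", 4.0, 0.5),
                                    ("heavy embedding model", 180.0, 6.0)]:
        local = edge_latency_ms(device_ms)
        remote = server_latency_ms(network_rtt_ms=25.0, queue_ms=5.0, gpu_ms=gpu_ms)
        print(f"{name}: on-device {local:.1f} ms vs server round-trip {remote:.1f} ms")
```

The crossover point shifts with the network: once per-inference compute dwarfs the round trip, the GPU server wins, which is exactly where heavy embedding models land.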
Re: edge embeddings, I've been curious about the push by a few groups toward 'foundation GNNs', and it could be fun to compare UMAP on property-rich edges against those. So far we focus on custom models, but the success of neural graph drawing NNs and the newer tabular NNs suggests something pretrained could replace UMAP as a generic hammer here too...
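For concreteness, the "UMAP as a generic hammer" baseline I have in mind is just running UMAP over an edge-property matrix; a foundation GNN would swap in learned edge embeddings at that step. Sketch below (feature names and shapes are made up, using the umap-learn package):

```python
# Minimal sketch: embed property-rich edges by running UMAP directly on their
# feature vectors. Shapes and data are illustrative stand-ins only.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)

# Stand-in for an edge table: one row per edge, columns are edge properties
# (e.g., byte counts, durations, protocol one-hots) after numeric encoding/scaling.
edge_features = rng.normal(size=(10_000, 32)).astype(np.float32)

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2, metric="euclidean")
edge_embedding = reducer.fit_transform(edge_features)  # shape: (10_000, 2)

print(edge_embedding.shape)
```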