Comment by lmeyerov

9 hours ago

Interesting, mind sharing the context here?

My experience has been that as workloads get heavier, it's "cheaper" to push them to an accelerated, dedicated inference server. This doesn't always hold, though; e.g., there's a world of difference between realtime video on phones and an interactive chat app.

Re: edge embedding, I've been curious about the push by a few groups toward "foundation GNNs", and it may be fun to compare UMAP on property-rich edges against those. So far we focus on custom models, but the success of neural graph drawing NNs & newer tabular NNs suggests something pretrained could replace UMAP as a generic hammer here too...