Comment by westurner

2 years ago

Nystroem method: https://en.wikipedia.org/wiki/Nystr%C3%B6m_method

6.7 Kernel Approximation > 6.7.1. Nystroem Method for Kernel Approximation https://scikit-learn.org/stable/modules/kernel_approximation...

Nystroem defaults to an RBF (radial basis function) kernel, and, from quantum logic, Bloch spheres are also radial. Perhaps that's nothing.
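A minimal NumPy sketch of the Nystroem idea (not the scikit-learn implementation, just the math it approximates): sample m landmark points, compute the kernel between all n points and the landmarks, and use the pseudo-inverse square root of the landmark-landmark kernel to build an explicit low-rank feature map whose Gram matrix approximates the full n x n kernel:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: exp(-gamma * ||x - y||^2)
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystroem_features(X, n_components=50, gamma=1.0, seed=0):
    # Sample landmark points and map all data into an
    # n_components-dimensional feature space.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_components, replace=False)
    landmarks = X[idx]
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    K_nm = rbf_kernel(X, landmarks, gamma)
    # Pseudo-inverse square root of K_mm via SVD
    U, s, _ = np.linalg.svd(K_mm)
    s = np.maximum(s, 1e-12)  # guard against tiny singular values
    return K_nm @ (U / np.sqrt(s))  # Phi, with Phi @ Phi.T ~= K

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Phi = nystroem_features(X, n_components=50, gamma=0.5)
K_approx = Phi @ Phi.T
K_exact = rbf_kernel(X, X, gamma=0.5)
err = np.abs(K_approx - K_exact).max()
```

`sklearn.kernel_approximation.Nystroem` wraps the same construction with a scikit-learn transformer API (`fit_transform`), so downstream linear models can stand in for an exact kernel SVM at lower cost.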

FWIU, SVMs with the kernel trick are graphical models, and NNs are too.

How much more resource-expensive is it to train an ensemble of SVMs than one graphical model with typed relations? And how does either compare to using deep learning for feature synthesis and selection, then gradient boosting with xgboost to find the coefficients/exponents of the identified terms of the expression that were not prematurely excluded by feature selection?
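A toy NumPy sketch of the synthesize-terms-then-fit-coefficients idea (least squares stands in for xgboost here, so the whole thing stays dependency-free; the candidate terms and the hidden expression are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(1, 2, 100)
x2 = rng.uniform(1, 2, 100)
y = 3.0 * x1**2 + 0.5 * x2  # hidden expression to recover

# Feature synthesis: candidate terms of the expression
terms = {"x1": x1, "x2": x2, "x1^2": x1**2, "x1*x2": x1 * x2}
A = np.column_stack(list(terms.values()))

# Coefficient fitting (least squares in place of gradient boosting)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Feature selection: drop terms whose fitted coefficient is negligible
selected = {name: c for name, c in zip(terms, coef) if abs(c) > 1e-6}
```

The design question the comment raises is where in this pipeline the feature selection happens: pruning terms before fitting is cheaper but risks prematurely excluding the true terms, while fitting everything and pruning afterward (as above) costs more per candidate set.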

There are algorithmic complexity and algorithmic efficiency metrics that should be relevant to AutoML solution ranking. Opcode cost may loosely correspond to algorithmic complexity.
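One cheap (and loose) way to measure the opcode-cost side of that: a static bytecode-instruction count via the stdlib `dis` module. It ignores loop trip counts and work done in C, so it is a proxy for code size, not asymptotic complexity:

```python
import dis

def sum_squares_loop(xs):
    # Explicit O(n) Python-level loop
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_squares_expr(xs):
    # Same result; the loop runs inside C-level sum()
    return sum(x * x for x in xs)

def opcode_count(fn):
    # Static count of bytecode instructions -- a rough proxy only;
    # it says nothing about how many times each instruction executes.
    return sum(1 for _ in dis.get_instructions(fn))
```

Both functions are O(n), yet their static opcode counts differ, which is exactly why opcode cost only *loosely* corresponds to algorithmic complexity: a dynamic count (opcodes actually executed) would be needed to see the O(n) growth.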

Can [Dask] + SciKeras + Auto-sklearn 2.0 modify NN topology metaparameters, like the number of layers and the nodes per layer, at runtime? https://twitter.com/westurner/status/1697270946506170638