Comment by teruakohatu
1 year ago
> Even if many types of architectures converge to the same loss over time, finding the one that converges the fastest is quite valuable given the cost of running GPU's at scale.
This! Not just the fastest, but the one with the lowest total resource use.
Fully connected neural networks are universal function approximators. Technically we don't need anything but an FNN, but the memory requirements and speed would be abysmal, far beyond the realm of practicality.
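For a rough sense of scale (my own back-of-the-envelope sketch, with illustrative layer sizes, not something from the comment above): replacing a single 3x3 convolution with a dense layer that could express the same mapping multiplies the weight count by several orders of magnitude.

```python
# Hypothetical comparison: weight count of a small conv layer vs. the dense
# layer needed to represent the same input->output mapping. The resolution
# and channel counts below are assumptions chosen only for illustration.

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Weights in a k x k convolution (bias ignored)."""
    return in_ch * out_ch * k * k

def dense_params(in_ch: int, out_ch: int, h: int, w: int) -> int:
    """Weights in a dense layer mapping an (in_ch, h, w) tensor to (out_ch, h, w)."""
    return (in_ch * h * w) * (out_ch * h * w)

h = w = 224                       # assumed input resolution
in_ch, out_ch, k = 64, 64, 3      # assumed channel counts and kernel size

print(f"conv:  {conv_params(in_ch, out_ch, k):,}")            # 36,864 weights
print(f"dense: {dense_params(in_ch, out_ch, h, w):,}")        # ~10.3 trillion weights
```

Under these assumptions the fully connected equivalent needs roughly eight orders of magnitude more parameters than the convolution, which is the memory-and-speed blowup the comment is pointing at.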
Unless we could build chips in 3D?
Not even then: a truly fully connected network would have super-exponential runtime (it would take N^N time to evaluate).
Wetware is the future.
We would need quantum computing for that. I remember seeing a recent article about quantum processes in the brain. If that's true, QC may be the missing piece.
We are already doing this.
Heat extraction.