Comment by r0ze-at-hn

1 day ago

I suspect AI companies want improved efficiencies and are developing a framework for determining the minimal-energy, maximal-efficiency architecture for AI models: calculating precise limits like a Cognitive Event Horizon, where a model becomes so complicated it literally costs more energy to run than its answers are worth, and a Semantic Horizon, where it simply gets too complex to stay accurate. Lots of cool implications, such as a fundamental mathematical maximum learning rate, which you'd try to approach by doing things like aggressively filtering the data.
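
To make the Cognitive Event Horizon idea concrete, it reduces to a break-even point: assume inference cost grows with model size while the value of the answers saturates, then find where the two curves cross. Here's a toy Python sketch where every function, constant, and unit is a made-up assumption purely for illustration:

    import math

    # Toy model of a "Cognitive Event Horizon": the scale at which a
    # model's energy cost per query first exceeds the value it returns.
    # Every constant and curve below is a hypothetical placeholder.

    def energy_cost_per_query(params_billions):
        # Assumption: inference energy grows roughly linearly with size.
        JOULES_PER_BILLION_PARAMS = 0.5  # invented constant
        return params_billions * JOULES_PER_BILLION_PARAMS

    def knowledge_value_per_query(params_billions):
        # Assumption: answer value saturates (diminishing returns).
        return 10.0 * math.log1p(params_billions)  # arbitrary "value" units

    def cognitive_event_horizon():
        # Scan upward until cost first exceeds value: the break-even scale.
        scale = 0.1
        while energy_cost_per_query(scale) < knowledge_value_per_query(scale):
            scale *= 1.01
        return scale

    print(f"Toy break-even: ~{cognitive_event_horizon():.0f}B params")

With real numbers the crossover would land somewhere else entirely, of course; the point is just that a sublinear value curve against a linear cost curve guarantees such a horizon exists.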