Comment by asim
2 days ago
This was inevitable. You can't keep training LLMs and expect that to be the answer to the evolution of AI. Yes, we'll keep creating newer, more refined, bigger models, but a base model is like DNA, or the raw cortex of a brain. After that you need systems that essentially "live" for years, digesting information and developing a more refined way to process, store and retrieve it.

Compression of RAG was also inevitable; it's the B-tree index of a database. The thing is, we're probably one or two iterations away from the RAG pipeline being good enough, and then the focus will shift to the other pieces of sensory input that need to be connected and processed at higher throughput. Right now it's not fast or efficient enough.

This is where the likes of Google will shine. They're probably two decades ahead of everyone on internal technology, and some team there likely already has the breakthrough; it just hasn't seen the light of day. What comes out of DeepMind is really a forced effort to productize and publish work in a consumable format; internally they're likely far ahead. I have less faith in Meta's efforts, despite seeing things like this. Quite frankly, the people doing the work should move to more honourable companies, not feed a crack addiction in the form of Meta's universe.
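To make the B-tree analogy concrete, here's a toy sketch in stdlib Python (all names made up, nothing to do with any real RAG stack): a brute-force scan touches every record, while a sorted index answers the same lookup in O(log n) comparisons, which is the whole reason you compress and index the retrieval layer instead of re-reading the corpus on every query.

    import bisect

    # Toy "corpus": (key, document) pairs. A real RAG store would index
    # embeddings, but sorted keys are enough to show where the win comes from.
    records = sorted(
        (f"doc-{i:06d}", f"contents of document {i}") for i in range(100_000)
    )
    keys = [k for k, _ in records]  # the "index": a B-tree's sorted keys in miniature

    def scan(key):
        # O(n): touch every record, i.e. retrieval with no index at all
        for k, doc in records:
            if k == key:
                return doc
        return None

    def indexed(key):
        # O(log n): binary search over the sorted keys
        i = bisect.bisect_left(keys, key)
        if i < len(keys) and keys[i] == key:
            return records[i][1]
        return None

    assert scan("doc-099999") == indexed("doc-099999")

Same answer either way; the indexed path just does ~17 comparisons instead of 100,000.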
Exactly. The real focus internally is working on new architectures; there's no other possibility.