Comment by mindwok

2 days ago

It's becoming increasingly clear that memory and context are the bottlenecks in advancing the use of AI. I can't help but feel there needs to be a general solution for this, perhaps even one built into the model - everyone seems to be building something on top that is roughly the same thing.

Karpathy had a similarly interesting take the other day:

https://x.com/karpathy/status/1921368644069765486

  • I'm starting to experiment with having agents write system prompts for sub-agents. Specifically: have the LLM build, test, and validate a small, simple tool, and once it's validated, add it to its own system prompt's list of available tools (a rough sketch of that loop is below).

    Anyone else experimenting with letting LLMs generate their own or sub-agent system prompts?
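
    Something like the following, where `llm`, `run_tests`, and the prompt format are stand-in placeholders for whatever completion call and sandbox you use, not any particular framework's API:

```python
# Hypothetical sketch of a self-extending tool loop; llm() and run_tests()
# are placeholders, not a specific framework's API.
SYSTEM_PROMPT = "You are an agent. Available tools:\n"

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (hosted API, local model, etc.)."""
    raise NotImplementedError

def run_tests(tool_code: str, test_code: str) -> bool:
    """Execute generated assert-based tests against the generated tool."""
    scope: dict = {}
    try:
        exec(tool_code, scope)   # define the candidate tool
        exec(test_code, scope)   # tests raise AssertionError on failure
        return True
    except Exception:
        return False

def grow_toolbox(task: str, system_prompt: str) -> str:
    """Ask the model for a small tool plus tests; on success, append it to the prompt."""
    tool_code = llm(f"{system_prompt}\nWrite one small Python function for: {task}")
    test_code = llm(f"Write assert-based tests for this function:\n{tool_code}")
    if run_tests(tool_code, test_code):
        # The validated tool becomes part of the (sub-)agent's own system prompt.
        system_prompt += f"\n- {task}:\n{tool_code}\n"
    return system_prompt
```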

Fine-tuning should be combined with inference in some way. However, this requires keeping the model loaded at high enough precision for backprop to work.

Instead of hundreds of thousands of us downloading the latest and greatest model that won't fundamentally update one bit until we're graced with the next one, I would think we should all be able to fine-tune the weights so the model can naturally memorize new info and preferences without using up context length.
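
One plausible shape for this is parameter-efficient fine-tuning: keep the base weights in bf16 (enough precision for backprop) and train only small LoRA adapters on the new info locally. A minimal sketch using transformers + peft follows; the model name, target modules, and hyperparameters are arbitrary placeholders, not recommendations:

```python
# Illustrative sketch: lightweight local fine-tuning so the model "memorizes"
# new info without spending context. All names/values below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "some-small-open-model"  # placeholder
tok = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Only the adapter weights are trained; the base stays frozen, which keeps
# memory needs modest while still giving backprop enough precision.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

new_facts = ["My preferred language is Rust.", "The project codename is Heron."]
for text in new_facts:
    batch = tok(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # standard causal-LM loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("./personal-adapter")  # tiny file vs. re-downloading the model
```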

Absolutely. The "intelligence" isn't complete without a memory. In fact, there's a whole lot more to it than that. The LLM is one component, a logic factory, but there's so much more to the system than the LLM and the memory.

Beyond that, systems should be LLM-agnostic, or use different models for different needs.
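
As a sketch of what "LLM-agnostic" could mean in practice (the interface and backend names here are purely illustrative):

```python
# Illustrative sketch of an LLM-agnostic design: agent code talks to a small
# interface, and concrete backends can be swapped or routed per task.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class CheapFastModel:
    def complete(self, prompt: str) -> str:
        return "(stubbed response)"  # e.g. a small local model for routine steps

class StrongReasoningModel:
    def complete(self, prompt: str) -> str:
        return "(stubbed response)"  # e.g. a larger hosted model for hard steps

def route(task_kind: str) -> ChatModel:
    """Pick a different model for different needs; the rest of the agent never cares."""
    return StrongReasoningModel() if task_kind == "planning" else CheapFastModel()

def agent_step(task_kind: str, prompt: str) -> str:
    return route(task_kind).complete(prompt)
```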

I don't believe building something into the model will ever be the solution, though. It's interesting what Google is trying to do with context caching, but at the end of the day I believe the strength of agents here will rely heavily on modularity.