
Comment by acchow

1 year ago

What it will come down to is computational efficiency. We don't want to retrain once a month - we want to retrain continuously. We don't want one agent talking to 5 LLMs; we want thousands of LLMs all working in concert.

Beyond this, the way models are trained also has to be rethought. Backpropagation is good for learning complex function mappings, but not for storing information.