
Comment by chowells

19 days ago

Oh, that's not a problem. Just cache the retrieval lookups too.

it's pointers all the way down

  • Just add one more level of indirection, I always say.

    • But seriously… the solution is often to cache or shard down to an intermediate point (the LLM model weights, for instance) and store that, which gives you a nice approximation of the real problem space. That's basically what many AI algorithms do, MCTS and LLMs included.
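
For what it's worth, "cache the retrieval lookups" from the parent comment is plain memoization. A minimal sketch in Python, where `retrieve` is a hypothetical stand-in for whatever expensive lookup is being cached:

    from functools import lru_cache

    # Hypothetical stand-in for the expensive retrieval step
    # (a vector-store query, an external index lookup, etc.).
    def retrieve(query: str) -> str:
        return f"results for {query!r}"

    # "Just cache the retrieval lookups too": memoize the lookup so a
    # repeated query never hits the slow path again.
    @lru_cache(maxsize=10_000)
    def cached_retrieve(query: str) -> str:
        return retrieve(query)

    print(cached_retrieve("pointers"))  # first call does the real lookup
    print(cached_retrieve("pointers"))  # second call is served from the cache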