
Comment by __MatrixMan__

2 days ago

It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.

Do you mean you want responses cached to e.g. a file based on the inputs?

  • Yeah, if it's a novel prompt, by all means send it to the model, but if it's the same prompt as 30s ago, just immediately give me the same response I got 30s ago.

    That's typically how we expect bash pipelines to work, right?

    • Bash pipelines don't do any caching and will execute fresh each time, but I understand your idea and why a cache is useful when iterating on the command line. I'll implement it. Thanks!
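A minimal sketch of that cache idea: `cached` wraps any command that reads a prompt on stdin and writes a reply to stdout, keyed on a hash of the command line plus the input. The `llm` command name in the usage comments is a hypothetical stand-in, not a real tool's interface.

```shell
# Sketch of a stdin-keyed response cache for pipeline stages.
cached() {
  dir="${CACHE_DIR:-$HOME/.cache/pipe-cache}"
  mkdir -p "$dir"
  input=$(cat)                                  # slurp the whole prompt
  # Key on the wrapped command line plus the input text.
  key=$(printf '%s\n%s' "$*" "$input" | sha256sum | cut -d' ' -f1)
  if [ ! -f "$dir/$key" ]; then
    printf '%s' "$input" | "$@" > "$dir/$key"   # miss: run the real command
  fi
  cat "$dir/$key"                               # hit: replay the stored reply
}

# Same prompt twice: the second call returns instantly from the cache.
#   echo "explain tee" | cached llm
# Invalidated by hand, as suggested above:
#   rm -r "${CACHE_DIR:-$HOME/.cache/pipe-cache}"
```

Because the key covers both the command and its input, changing either the prompt or the stage's flags bypasses the cache, while arrow-up-and-append reruns hit it.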

Could you `tee` where you want to intercept and `cat` that file into later stages?

  • Yeah, sure, but it breaks the flow that makes bash pipelines so fun:

    - arrow up

    - append a stage to the pipeline

    - repeat until output is as desired

    If you're gonna write to some named location and later read from it, you're drifting towards a different mode of usage where you might as well write a Python script.
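For concreteness, the `tee` workaround from the question above looks like this, with cheap commands (`sort`, `uniq`) standing in for expensive model stages:

```shell
# Run the expensive head of the pipeline once, snapshotting with tee:
printf 'b\na\nb\n' | sort | tee /tmp/stage1.txt

# Then iterate on the tail against the snapshot instead of the live pipe:
cat /tmp/stage1.txt | uniq
cat /tmp/stage1.txt | uniq -c
```

It works, but as the reply notes, the named file (`/tmp/stage1.txt`) has to be threaded through every later command by hand, which is exactly the friction the arrow-up-and-append flow avoids.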