
Comment by maxkfranz

13 hours ago

Could you use an approach like this, much like a traditional network proxy, to block or sanitise some requests?

E.g. if a request contains confidential information (whatever you define that to be), then block it?
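
For concreteness, here's a minimal sketch of that blocking idea as a mitmproxy addon. The patterns are just illustrations of what "confidential" might mean; you'd define your own:

```python
# Minimal sketch: drop any outbound request whose body matches a
# "confidential" pattern. Patterns below are illustrative only.
import re
from mitmproxy import http

CONFIDENTIAL = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key
]

class BlockConfidential:
    def request(self, flow: http.HTTPFlow) -> None:
        body = flow.request.get_text(strict=False) or ""
        if any(p.search(body) for p in CONFIDENTIAL):
            # Short-circuit: the request is never forwarded upstream.
            flow.response = http.Response.make(
                403,
                b"Blocked: request matched a confidential pattern",
                {"Content-Type": "text/plain"},
            )

addons = [BlockConfidential()]
```

Run it with `mitmdump -s block_confidential.py` and point the agent's HTTP(S) proxy at it.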

I do kind of the opposite: I run my AI in a sandbox. It sends dummy tokens to APIs, and the proxy then injects the real creds, so the AI never has access to the credentials.

https://clauderon.com/ -- not really ready for others to use yet, though.
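
The swap itself can be tiny. Here's a sketch of that injection step as a mitmproxy addon; the dummy token value and the env var name are my assumptions, not clauderon's actual mechanism:

```python
# Sketch: the sandboxed agent sends a placeholder token, and the proxy
# substitutes the real credential before the request leaves. The token
# value and env var below are hypothetical placeholders.
import os
from mitmproxy import http

DUMMY_TOKEN = "sk-dummy-000"                # what the agent sees
REAL_TOKEN = os.environ["REAL_API_KEY"]     # only the proxy process holds this

class InjectCreds:
    def request(self, flow: http.HTTPFlow) -> None:
        auth = flow.request.headers.get("Authorization", "")
        if DUMMY_TOKEN in auth:
            flow.request.headers["Authorization"] = auth.replace(
                DUMMY_TOKEN, REAL_TOKEN
            )

addons = [InjectCreds()]
```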

Forgot to mention: It’s a neat tool. Well done.

  • Thank you. What I was thinking was more along the lines of optimizing how you use your context window, so that the LLM can actually access what it needs: like an incredibly powerful compact that runs in the background, with your file system working as long-term memory... Still thinking about how to make it work, so I am super open to ideas.
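
    One rough sketch of how that background compact might look, purely as an interpretation of the idea (the `summarise()` call is a stand-in for an LLM call, and the `memory/` directory is hypothetical):

    ```python
    # Sketch: when the live context grows past a budget, summarise the
    # oldest turns and persist the summary to disk, so the file system
    # acts as long-term memory. summarise() is a placeholder.
    import json
    from pathlib import Path

    MEMORY_DIR = Path("memory")     # hypothetical on-disk memory store
    CONTEXT_BUDGET = 8_000          # rough token budget for live context

    def summarise(messages: list[dict]) -> str:
        """Placeholder: call an LLM to compress old messages."""
        return " / ".join(m["content"][:80] for m in messages)

    def compact(context: list[dict], token_count: int) -> list[dict]:
        if token_count <= CONTEXT_BUDGET:
            return context
        old, recent = context[:-10], context[-10:]   # keep last 10 turns live
        MEMORY_DIR.mkdir(exist_ok=True)
        # Persist the compacted summary; later turns can load it back.
        note = MEMORY_DIR / f"summary-{len(list(MEMORY_DIR.iterdir()))}.json"
        note.write_text(json.dumps({"summary": summarise(old)}))
        return [{"role": "system",
                 "content": f"Older history compacted to {note}"}, *recent]
    ```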