Comment by mradek

9 days ago

I would like to know how much context is remaining. Claude Code shows a % remaining when it's close to exhaustion, which is nice, but I'd like to see it at all times.
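
In the meantime, here's roughly what I'd want surfaced, as a minimal sketch using Anthropic's token-counting endpoint. The model name and the 200k window size are assumptions on my part, not anything official:

```python
# Minimal sketch: estimate the % of context remaining for a conversation.
# Assumes the Anthropic Python SDK and a 200k-token context window;
# the model alias below is illustrative.
import anthropic

CONTEXT_WINDOW = 200_000  # assumed window size for current Claude models
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def percent_remaining(messages, model="claude-3-5-sonnet-latest"):
    """Count the tokens the conversation would consume and report what's left."""
    count = client.messages.count_tokens(model=model, messages=messages)
    used = count.input_tokens
    return max(0.0, 100.0 * (CONTEXT_WINDOW - used) / CONTEXT_WINDOW)

conversation = [{"role": "user", "content": "Hello, Claude!"}]
print(f"{percent_remaining(conversation):.1f}% of context remaining")
```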

Also, I wish it were possible for the model to leverage the local machine to extend or augment its context; a rough sketch of what I mean is below.
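
Something like keeping the full history on disk and pulling back only the most relevant chunks, so the window holds what matters instead of everything. The word-overlap scoring and the log filename here are purely illustrative, not a real implementation:

```python
# Naive local "context augmentation": chunk a local log, rank chunks by
# word overlap with the current question, and prepend the top hits.
from collections import Counter

def chunk(text, size=500):
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(query, chunks, k=3):
    """Rank chunks by shared-word count with the query and keep the top k."""
    q_words = Counter(query.lower().split())
    def score(c):
        return sum(min(q_words[w], n) for w, n in Counter(c.lower().split()).items())
    return sorted(chunks, key=score, reverse=True)[:k]

history = open("conversation_log.txt").read()  # hypothetical local log
relevant = top_chunks("how did we configure the database?", chunk(history))
prompt = "Relevant earlier context:\n" + "\n---\n".join(relevant)
```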

Also, one observation: Claude.ai (the web UI) gets REALLY slow as the conversation gets longer. I'm on an M1 Pro 32 GB MacBook Pro, and it lags as I type.

I really enjoy using LLMs and use them heavily every day, so I'd be happy to keep contributing feedback :)