Comment by Forgeties79
13 hours ago
Have you tried using it? Not being flippant or annoying, just curious whether you tried it and what the results were.
Why should he put effort into measuring a tool that its author hasn't measured himself? The point is that there are so many of these tools that an objective benchmark, one the creators could compare against each other, would be more useful.
So a better question to ask is: do you have any ideas for an objective way to measure the performance of agentic coding tools, so we can truly determine what improves performance and what doesn't?
I would hope that internally, OpenAI and Anthropic use something similar to the harness/test cases they use for training their full models to determine whether changes to Claude Code result in better performance.
Well, if I were Microsoft training Copilot, I would log all the <restore checkpoint> user actions and grade the agents on that. At scale across all users, "resets per agent command" should be a useful metric. But then again, publishing the true numbers might be embarrassing...
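Something like this rough sketch is what I have in mind (the event schema here is hypothetical, not Copilot's actual telemetry):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AgentEvent:
    """One agent command issued by a user, with whether it was later rolled back."""
    user_id: str
    agent: str          # e.g. "copilot-agent-v1" (hypothetical identifier)
    command_id: str
    was_reset: bool     # True if the user restored the pre-command checkpoint

def resets_per_command(events: list[AgentEvent]) -> dict[str, float]:
    """Fraction of agent commands the user rolled back, per agent."""
    commands = defaultdict(int)
    resets = defaultdict(int)
    for e in events:
        commands[e.agent] += 1
        if e.was_reset:
            resets[e.agent] += 1
    return {agent: resets[agent] / commands[agent] for agent in commands}
```

A lower ratio would suggest the agent's changes are kept more often, though as noted below the signal is noisy.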
I'm not sure it's a good signal.
I often use "restore conversation checkpoint" after successfully completing a side quest.
Who has time to try this when there's this huge backlog here: https://www.reddit.com/r/ClaudeAI/search/?q=memory
Have you tried any of those?
Yes, they haven't helped. Have you found one that works for you?
It does nothing but send a bunch of data to an "alpha, use at your own risk" third-party site that may or may not run some LLM on your data: https://ensue-network.ai/login