Comment by wonnage
9 hours ago
I see this sentiment of using AI to improve itself a lot but it never seems to work well in practice. At best you end up with a very verbose context that covers all the random edge cases encountered during tasks.
For this to work the way people expect, you'd need to somehow feed this info back into fine-tuning rather than just appending it to context. Otherwise the model never actually "learns"; you're just applying heavy-handed fudge factors on top of the existing weights through context.
I've been playing around with an AI-generated knowledge base to grok our code base, and I think you need good metrics on how the knowledge base is used. A few things:
1. Being systematic: having a system for adding, improving, and maintaining the knowledge base
2. Having feedback for that system
3. Implementing the feedback into a better system
I'm pretty happy I have an audit framework and documentation standards. I've refactored the whole knowledge base a few times. Where the retained knowledge is overly specific or too narrow in its scope of use, you just have to prune it.
Any garden has weeds when you lay down fertile soil.
Sometimes they aren't weeds though, and that's where having a person in the driver's seat is a boon.
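A minimal sketch of what metric-driven pruning could look like, assuming a hypothetical entry schema with hit counts and last-used timestamps (none of this is from my actual setup); low-use entries get flagged for human review rather than deleted outright, which is where that person in the driver's seat comes in:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class KnowledgeEntry:
    """One retained note in the knowledge base (hypothetical schema)."""
    path: str                # file the note lives in
    text: str                # the note itself
    hits: int = 0            # times it was pulled into context
    last_used: datetime = field(default_factory=datetime.now)

def prune(entries: list[KnowledgeEntry],
          min_hits: int = 3,
          max_idle_days: int = 90) -> tuple[list[KnowledgeEntry], list[KnowledgeEntry]]:
    """Split entries into (keep, review) based on usage metrics.

    Rarely used or stale entries are flagged for human review rather than
    deleted outright -- sometimes they aren't weeds.
    """
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    keep, review = [], []
    for entry in entries:
        if entry.hits >= min_hits and entry.last_used >= cutoff:
            keep.append(entry)
        else:
            review.append(entry)
    return keep, review
```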