Comment by some-guy
2 days ago
I'm currently in repos where the context window required is so large that the output is almost always "wrong" for the problem at hand. Quite a few people at my company burn through tokens this way, and it certainly isn't providing value to the company.
As always, improving accessibility for humans makes automation more effective. If the humans need to remember a PhD's worth of source code/documentation to contribute effectively, your codebase stinks.
People at my company have started writing docs specifically for Claude. They're quite useful for me too, but it's a bit disappointing they never wrote docs like these for their colleagues.
As someone who has written many docs: it's because 99% of people won't read them (rightfully so, if they're verbose). You can turn that doc into a skill in a repo and Claude will read it every time it's needed.
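For anyone unfamiliar, a "skill" here is just a doc the agent discovers on demand. A minimal sketch, assuming the `SKILL.md` layout Claude Code uses under `.claude/skills/<name>/` — the skill name, description, and repo details below are all made up for illustration:

```markdown
---
name: db-migrations
description: How to write and run schema migrations in this repo. Use when adding or altering tables.
---

Migrations live in `migrations/` and are applied with `make migrate`.
Never edit an applied migration; add a new one instead.
```

The `description` is what the agent matches against when deciding whether to pull the doc into context, so it pays to phrase it as "do X when Y" rather than as a title.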
I recently saw this with the Logseq API: the published API was an auto-generated stub, so I grepped the source for the function and found detailed documentation written for Claude. I guess one benefit of all this is that it's making people actually document things, and maybe plan a little before implementing.
I agree, in the general context of how I code.
The LLM hype train has me reflecting on what a spoiled existence working in a "proper" language provides, though…
React devs, JS devs, and front-end devs working on large sites and frameworks might pull tens of files into context. What an OCaml dev can express in a five-line union type can look very different in less terse, less token-efficient languages.
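To make the contrast concrete, here's a hypothetical domain type (payment states, invented for illustration) — the entire state space fits in a few lines that a model can take in at once:

```ocaml
(* A five-line variant type: the whole domain in one glance. *)
type payment_state =
  | Pending
  | Authorized of string  (* auth code *)
  | Captured of float     (* amount *)
  | Failed of string      (* reason *)

(* Exhaustive match: the compiler flags any case you forget. *)
let describe = function
  | Pending -> "awaiting authorization"
  | Authorized code -> "authorized: " ^ code
  | Captured amount -> Printf.sprintf "captured %.2f" amount
  | Failed reason -> "failed: " ^ reason
```

The equivalent in a typical JS/React codebase might be a string enum, a reducer, and validation logic spread across several files — all of which would need to be in context for the model to reason about the same thing.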