Comment by instalabsai
16 days ago
I’ve been shipping AI-written code for 2 years now. I can build something amazing in 40 mins but then spend 4+ hours debugging because the agent has no idea how the libraries it’s calling actually work. Docs are stale, StackOverflow is dead, training data is outdated. Every engineer I talk to has the same problem.
So I built Instagit, an MCP server that lets your coding agent understand any GitHub repo in depth so it can get it right on the first try. Works with Claude Code, Codex, Cursor, OpenClaw, etc.
No API key or account needed to try it out. You just need to share these instructions with your coding agent to get started:
curl -s https://instagit.com/install.md
Hey, irrelevant question: what tools do you use to make such great landing pages for your products? Your instagit.com looks great, what's your vibe coding workflow?
Thanks! No fancy tooling, I made it by just prompting Claude Code with the "frontend-design" skill from Anthropic: https://skillsmp.com/skills/anthropics-skills-skills-fronten...
Any prompting tips for this sort of result?
Interesting! How does it work under the hood? If you can share. Would like to understand if this improves my Claude Code's understanding of my codebase.
It basically scans the source code for each question (you can also check out specific branches or release tags if you need to debug a particular version) and then writes up the answer once it finds it.
It’s not really meant to query your own code base (Claude Code already does a great job at that) but more to explore other code bases you want to integrate with.
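For intuition, here's a minimal sketch of the per-question scanning idea described above: check out the repo at the version you care about, then search it for the question's keywords and return surrounding snippets for the agent to read. This is my own illustration, not Instagit's actual implementation; the function name and parameters are hypothetical.

```python
from pathlib import Path

def scan_repo(repo_dir, query, exts=(".py", ".md"), context=1):
    """Search a checked-out repo for lines matching a question's keywords.

    Returns (path, line_number, snippet) hits that an agent could read
    verbatim instead of relying on stale docs or training data.
    """
    terms = [t.lower() for t in query.split()]
    hits = []
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        lines = path.read_text(errors="ignore").splitlines()
        for i, line in enumerate(lines):
            low = line.lower()
            # A line "matches" if it contains every keyword from the question.
            if all(t in low for t in terms):
                lo, hi = max(0, i - context), i + context + 1
                hits.append((str(path), i + 1, "\n".join(lines[lo:hi])))
    return hits
```

To debug a particular version, you'd run something like `git checkout v1.2.3` in `repo_dir` before scanning, so the snippets come from the exact release your code depends on.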
Why not Context7?
Context7 is great, but ultimately it's a pre-generated static summary that might not include the specific answer the agent needs. My approach is slightly different: the actual source code is scanned for each question, so the results are much more targeted and never out of date.