There are a ridiculous number of projects doing this.
I'm always baffled by the response they get, since this is also the most impractical, poorly scaling way to insert an LLM into your development process.
On one hand, if you realize that, there may be times when you get lucky with the size of the codebase and the nature of your questions, and it works acceptably.
But on the other, this feels like the kind of thing someone who's heard others rave about the utility of AI will try on too large a codebase, paste the result into ChatGPT, and then get an underperforming LLM because it's flooded with irrelevant context for every basic operation it's asked to do.
There are very few times when providing the entire codebase in the context window, instead of only the code relevant to a single operation, makes sense.
It is not. Others have commented pointing to services similar to this one, though.