Comment by the_duke

1 day ago

Agreed in general: the models are getting pretty good at dumping out new code, but maintaining or augmenting existing code produces pretty bad results, except for short local autocomplete.

BUT it's noteworthy how much difference the amount of context the models get makes. Feeding a lot of the existing code into the input improves the results significantly.
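
As a rough sketch of what "feeding in the existing code" can look like in practice (the file paths and prompt wording here are invented for illustration, not any particular tool's behavior):

```python
from pathlib import Path

# Hypothetical paths: the files a human judges relevant to the change.
RELEVANT_FILES = ["billing/invoice.py", "billing/tax.py", "models/customer.py"]

def build_prompt(task: str, repo_root: str = ".") -> str:
    """Pack the relevant existing source into the prompt so the model
    can follow the codebase's conventions instead of guessing them."""
    parts = ["You are editing an existing codebase. Relevant files:"]
    for rel in RELEVANT_FILES:
        parts.append(f"--- {rel} ---\n{Path(repo_root, rel).read_text()}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)
```

How well this works hinges on picking a set of files small enough to fit the context window, which is exactly where the repo-splitting argument below comes from.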

This might be an argument in favor of a microservices architecture with the code split across many repos, rather than a monolithic application with all the code in a single repo. It's not that microservices are necessarily technically better, but they could let you get more leverage out of LLMs given their context window limitations.

  • If your microservices become more verbose overall, you have handicapped your ability to cram the whole codebase into a context window.

    I think AI is great, but humans know the whys of the code, why it needs to exist. AIs don't need the stuff; they only generate it.

    • The LLMs would only need the API information for each service, not its whole code, and that information would be small (sketch after this thread).

  • This is a short-term issue, though. The available context window has been increasing exponentially over the past two years.
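
To make the API-information point concrete: one way to get such a summary is to extract just the signatures and docstrings from a module. A minimal sketch using Python's ast module, with the module path purely hypothetical:

```python
import ast
from pathlib import Path

def api_summary(path: str) -> str:
    """Extract just the signatures and first docstring lines of a module,
    a compact stand-in for its full source in an LLM prompt."""
    tree = ast.parse(Path(path).read_text())
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}:")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            returns = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            lines.append(f"def {node.name}({ast.unparse(node.args)}){returns}:")
        else:
            continue
        doc = ast.get_docstring(node)
        if doc:
            lines.append(f'    """{doc.splitlines()[0]}"""')
    return "\n".join(lines)

# e.g. api_summary("payments/service.py")  # hypothetical module path
```

A few dozen lines of signatures can stand in for thousands of lines of implementation, which is the leverage the parent comment is describing.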