Comment by suzzer99
8 hours ago
Yeah I'm working on one of those now that a 3rd-party vendor cranked out for us. I spent all day ripping out an endpoint that did 98% of what another endpoint did and should never have existed. I also ripped out 80 lines of code that looked like this:
const sqlStatement = (!params.mostRecentOnly) ? {giant SQL statement} : {identical giant SQL statement + 'LIMIT 1' at the end}
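That kind of duplication usually collapses into one statement with a conditional suffix. A minimal sketch of the fix (the table, columns, and `buildQuery` name are illustrative, not from the vendor code):

```javascript
// Build the statement once; append LIMIT only when requested.
// One source of truth for the SQL, so edits happen in one place.
function buildQuery(params) {
  const baseQuery = 'SELECT id, created_at FROM events ORDER BY created_at DESC';
  return params.mostRecentOnly ? `${baseQuery} LIMIT 1` : baseQuery;
}
```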
AI never met a problem that can't be solved with more code. Need some data in a slightly different structure? Don't try to modify an existing endpoint, just build a new one! Need to access a field that's buried in a JSON object in the database? Just create a new column, but don't bother removing the field from the JSON object. The more sources of truth, the merrier! When it comes time to update, just write more code to update the field everywhere it lives!
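For the buried-JSON case, most databases can query the field in place instead of duplicating it into a new column. A hedged sketch assuming Postgres, with an illustrative `users` table and `profile` JSON column:

```javascript
// Query the nested field directly from the JSON column rather than
// mirroring it into a second column (a second source of truth).
// Postgres: ->> extracts a JSON field as text; an expression index
// (CREATE INDEX ... ON users ((profile->>'email'))) keeps lookups fast.
function emailLookupQuery() {
  return "SELECT id FROM users WHERE profile->>'email' = $1";
}
```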
Factor out the extra sources of truth you say? Good luck scanning the most verbose front-end you've ever seen to make sure nothing is looking at the source you want to remove. In the beginning of big projects, you have to be absolutely ruthless about keeping complexity down so it doesn't get out of control later. AI is terrible at keeping complexity down.
My goal is to halve the lines of code from what the vendor turned over to us. One baby step at a time.
If only we had this tech back when managers were looking at how many lines of code you were committing weekly as a performance metric.
Now they're looking at your token consumption, which is even more gameable (and stupid).
That is a skill issue though. I have rules for my agents to write compositional, reusable, modular, small files and to avoid any sort of boilerplate: being config-driven, keeping a single source of truth, having other agents review that the rules are followed, etc. Any API, UI, or other entry point stays very light, just proxying to the modular logic, so that logic can be reused by any entrypoint easily.
UI components are always presentational only, with logic abstracted modularly, etc...
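A minimal sketch of the "thin entrypoint proxying to modular logic" shape described above (all names and the hardcoded data are illustrative, not from the commenter's actual rules):

```javascript
// Core logic lives in one module, reusable from any entrypoint.
function listOrders({ limit = 20 } = {}) {
  const orders = [{ id: 1 }, { id: 2 }, { id: 3 }]; // stand-in for a real data source
  return orders.slice(0, limit);
}

// HTTP entrypoint: only parses input and proxies to the module.
function httpHandler(req) {
  const limit = Number(req.query.limit) || 20;
  return { status: 200, body: listOrders({ limit }) };
}

// CLI entrypoint reuses the exact same logic, no duplication.
function cliHandler(args) {
  return listOrders({ limit: Number(args[0]) || 20 });
}
```

The design point is that neither entrypoint contains business logic; both are disposable shells around `listOrders`.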
Can you share your rules and some of the example PRs that it auto generates and reviews?
The number of times I’ve seen Claude say “this test was failing already so is ignored” when it _wasn’t_, despite me telling it to never do that, makes me doubt it.
How do you make it so that the model doesn't forget to follow those rules and skills? How do you make it actually understand the architecture and constraints? You can't; current models simply don't work that way.
Ah, the make_no_mistakes.md
I mean, quite frankly, I have seen enough code that was definitely written by humans that had exactly this "style".
Then again I don't want to pay for AI to give me the coding style of the worst I ever worked with either.