Comment by necovek
14 days ago
The premise may well be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...
All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top, module level (which relies on module import caching not to be reapplied), to not using a stdlib config file parser and building one themselves, to a raciness in load_json where it's checked for file existence with an if and then carrying on as if the file is certainly there...
In a nutshell, if the rest of it is like this, it simply sucks.
The more I browse through this, the more I agree. I feel like one could delete almost all comments from that project without losing any information – which means, at least the variable naming is (probably?) sensible. Then again, I don't know the application domain.
Also…
there is a lot of obviously useful abstraction being missed, wasting lines of code that will all need to be maintained.
The scary thing is: I have seen professional human developers write worse code.
> I feel like one could delete almost all comments from that project without losing any information
I'm far from a heavy LLM coder, but I've noticed a massive excess of unnecessary comments in most output. I'm always deleting the obvious ones.
But then I started noticing that the comments seem to help the LLM navigate additional code changes. It’s like a big trail of breadcrumbs for the LLM to parse.
I wouldn’t be surprised if vibe coders get trained to leave the excess comments in place.
More tokens -> more compute involved. Attention-based models work by attending every token to every other token, so more tokens means not only more time to "think" but also the ability to think "better". That is also at least part of the reason why o1/o3/R1 can sometimes solve what other LLMs could not.
Anyway, I don't think any of the current LLMs are really good for coding. What it's good at is copy-pasting (with some minor changes) from the massive code corpus it has been pre-trained on. For example, give it some Zig code and it's straight-up unable to solve even basic tasks. Same if you give it a really unique task, or if you simply ask for potential improvements to your existing code. Very, very bad results, no signs of out-of-the-box thinking whatsoever.
BTW: I think what people are missing is that LLMs are really great at language modeling. I had great results, and boosts in productivity, just by being able to prepare the task specification and make quick changes to it really easily. Once I have a good understanding of the problem, I can usually implement everything quickly, and do it in a much, much better way than any LLM currently can.
It doesn't hurt that the model vendors get paid by the token, so there's zero incentive to correct this pattern at the model layer.
What’s worse, I get a lot of comments left saying what the AI did, not what the code does or why. Eg “moved this from file xy”, “code deleted because we have abc”, etc. Completely useless stuff that should be communicated in the chat window, not in the code.
LLMs are also good at commenting on existing code.
It’s trivial to ask Claude via Cursor to add comments to illustrate how some code works. I’ve found this helpful with uncommented code I’m trying to follow.
I haven’t seen it hallucinate an incorrect comment yet, but sometimes it will leave a TODO comment that a section should be made more clear. (Rude… haha)
>The scary thing is: I have seen professional human developers write worse code.
This is kind of the rub of it all. If the code works, passes all relevant tests, is reasonably maintainable, and can be fitted into the system correctly with a well defined interface, does it really matter? I mean, at that point it's kind of like looking at the output of a bytecode compiler and being like "wow, what a mess". And it's not like they can't write code up to your stylistic standards; it's just literally a matter of prompting for that.
> If the code works, passes all relevant tests, is reasonably maintainable, and can be fitted into the system correctly with a well defined interface, does it really matter?
You're not wrong here, but there's a big difference in programming one-off tooling or prototype MVPs and programming things that need to be maintained for years and years.
We did this song and dance pretty recently with dynamic typing. Developers thought it was so much more productive to use dynamically typed languages, because it is in the initial phases. Then years went by, those small, quick-to-make dynamic codebases ended up becoming unmaintainable monstrosities, and those developers who hyped up dynamic typing invented Python/PHP type hinting and Flow for JavaScript, later moving to TypeScript entirely. Nowadays nobody seriously recommends building long-lived systems in untyped languages, but they are still very useful for one-off scripting and more interactive/exploratory work where correctness is less important, i.e. Jupyter notebooks.
I wouldn't be surprised to see the same pattern happen with low-supervision AI code; it's great for popping out the first MVP, but because it generates poor code, the gung-ho junior devs who think they're getting 10x productivity gains will wisen up and realize the value of spending an hour thinking about proper levels of abstraction instead of YOLO'ing the first thing the AI spits out when they want to build a system that's going to be worked on by multiple developers for multiple years.
What are you going to do when something suddenly doesn't work and Cursor endlessly spins without progress, no matter how many "please don't make mistakes" you add? Delete the whole thing and try to one-shot it again?
Good insight, and indeed quite exactly my state of mind while creating this particular solution.
In this case, I did put in the guard rails to ensure that I reach my goal in hopefully a straight line and as quickly as possible, but to be honest, I did not give much thought to long-term maintainability or ease of extending it with more and more features, because I needed a very specific solution for a use case that doesn't change much.
I'm definitely working differently in my brown-field projects where I'm intimately familiar with the tech stack and architecture — I do very thorough code reviews afterwards.
I think this code is at least twice the size it needs to be compared to nicer, manually produced Python code: a lot of it is really superfluous.
People have different definitions of "reasonably maintainable", but if code has extra stuff that provides no value, it always perplexes the reader (what is the point of this? what am I missing?), and increases cognitive load significantly.
But if AI coding tools were advertised as "get 10x the output of your least capable teammate", would they really go anywhere?
I love doing code reviews as an opportunity to teach people. Doing this one would suck.
Right, and the reason why professional developers are writing worse code out there is most likely that they simply don't have the time/aren't paid to care more about it. The LLM is then mildly improving the output in this brand of common real-world scenario.
> there is a lot of obviously useful abstraction being missed, wasting lines of code that will all need to be maintained.
This is a human sentiment because we can fairly easily pick up abstractions during reading. AIs have a much harder time with this - they can do it, but it takes up very limited cognitive resources. In contrast, rewriting the entire software for a change is cheap and easy. So to a point, flat and redundant code is actually beneficial for a LLM.
Remember, the code is written primarily for AIs to read and only incidentally for humans to execute :)
At the very least, if a professional human developer writes garbage code you can confidently blame them and either try to get them to improve or reduce the impact they have on the project.
With AI they can simply blame whatever model they used and continually shovel trash out there instantly.
I don't see the difference there. Whether I've written all the code myself or an AI wrote all of it, my name will be on the commit. I'll be the person people turn to when they question why code is the way it is. In a pull request for my commit, I'll be the one discussing it with my colleagues. I can't say "oh, the AI wrote it". I'm responsible for the code. Full stop.
If you're in a team where somebody can continuously commit trash without any repercussions, this isn't a problem caused by AI.
> The scary thing is: I have seen professional human developers write worse code.
That's not the scary part. It's the honest part. Yes, we all have (vague) ideas of what good code looks like, and we might know it when we see it, but we also know what reality looks like.
I find the standard to which we hold AI in that regard slightly puzzling. If I can get the same meh-ish code for way less money and way less time, that's a stark improvement. If the premise is now "no, it also has to be something that I recognize as really good / excellent", then at least let us recognize that we have passed the question of whether it can produce useful code.
I think there’s a difference in that this is about as good as LLM code is going to get in terms of code quality (as opposed to capability a la agentic functionality). LLM output can only be as good as its training data, and the proliferation of public LLM-generated code will only serve as a further anchor in future training. Humans on the other hand ideally will learn and improve with each code review and if they don’t want to you can replace them (to put it harshly).
I do believe it's amazing what we can build with AI tools today.
But whenever someone advertises how an expert will benefit from it yet they end up with crap, it's a different discussion.
As an expert, I want AI to help me produce code of similar quality faster. Anyone can find a cheaper engineer (maybe five of them?) that can produce 5-10x the code I need at much worse quality.
I will sometimes produce crappy code when I lack the time to produce higher quality code: can AI step in and make me always produce high quality code?
That's a marked improvement I would sign up for, and some seem to tout, yet I have never seen it play out.
In a sense, the world is already full of crappy code used to build crappy products: I never felt we were lacking in that department.
And I can't really rejoice if we end up with even more of it :)
My current favourite LLM wankery example is this beauty: https://blog.fahadusman.com/proxmox-replacing-failed-drive-i...
Note how it has invented the faster parameter for the zpool command. It is possible that the blog writer hallucinated a faster parameter themselves without needing an LLM - who knows.
I think all developers should add a faster parameter to all commands to make them run faster. Perhaps an LLM could create the faster code.
I predict an increase of man page reading, and better quality documentation at authoritative sources. We will also improve our skills at finding auth sources of docs. My uBlacklist is getting quite long.
What makes you think this was created by an LLM?
I suspect they might actually have a pool named faster -- I know I've named pools similarly in the past. This is why I now name my pools after characters from the Matrix, as is tradition.
This really gets to an acceleration of enshittification. If you can't tell it's an LLM, and there's nobody to verify the information, humanity is architecting errors and mindfucks into everything. All of the markers of what is trustworthy have been co-opted by untrustworthy machines, so all of the ways we'd previously differentiated actors have stopped working. It feels like we're just losing truth as rapidly as LLMs can generate mistakes. We've built a scoundrel's paradise.
How useful is a library of knowledge when n% of the information is suspect? We're all about to find out.
The pool is named backups according to zpool status and the paragraph right after.
But then again the old id doesn't match between the two commands.
How can this article have been written by an LLM? Its date is November 2021. Not judging the article as a whole, but the command you pointed out seems to be correct. Faster is the name of the pool.
>Its date is November 2021
The date can be spoofed. It first showed up on archive.org in December 2022, and there's no captures for the site before then, so I'm liable to believe the dates are spoofed.
There was a lot going on in the years before ChatGPT. Text generation was going strong with interactive fiction before anyone was talking about OpenAI.
I used LLMs for content generation in July 2021. Of course, that was when LLMs were pretty bad.
GPT-2 was released in 2019. ChatGPT wasn't the first publicly available LLM.
Ok - not wrong at all. Now take that feedback and put it in a prompt back to the LLM.
They’re very good at honing bad code into good code with good feedback. And when you can describe good code faster than you can write it - for instance it uses a library you’re not intimately familiar with - this kind of coding can be enormously productive.
> They’re very good at honing bad code into good code with good feedback.
And they're very bad at keeping other code good across iterations. So you might find that while they might've fixed the specific thing you asked for—in the best case scenario, assuming no hallucinations and such—they inadvertently broke something else. So this quickly becomes a game of whack-a-mole, at which point it's safer, quicker, and easier to fix it yourself. IME the chance of this happening is directly proportional to the length of the context.
This typically happens when you run the chat too long. When it gives you a new codebase, fire up a new chat so the old stuff doesn't poison the context window.
Nah. This isn’t true. Every time you hit enter you’re not just getting a jr dev, you’re getting a randomly selected jr dev.
So, how did I end up with a logging.py, config.py, config in __init__.py and main.py? Well, I prompted it to fix the logging setup to use a specific format.
I use Cursor; it can spit out code at an amazing rate and has reduced the amount of docs I need to read to get something done. But after its second attempt at something, you need to jump in and do it yourself, and most likely debug what was written.
Are you reading a whole encyclopedia each time you're assigned a task? The one thing about learning is that it compounds. You get faster the longer you use a specific technology. So unless you use a different platform for each task, I don't think you have to read that much documentation (understanding it is another matter).
I do plan on experimenting with the latest versions of coding assistants, but last I tried them (6 months ago), none could satisfy all of the requirements at the same time.
Perhaps there is simply too much crappy Python code around that they were trained on as Python is frequently used for "scripting".
Perhaps the field has moved on and I need to try again.
But looking at this, it would still be faster for me to type this out myself than go through multiple rounds of reviews and prompts.
Really, a senior has not reviewed this, no matter their language (raciness throughout, not just this file).
I would not say it is “very good” at that. Maybe it’s “capable,” but my (ample) experience has been the opposite. I have found the more exact I describe a solution, the less likely it is to succeed. And the more of a solution it has come up with, the less likely it is to change its mind about things.
Ever since the ~4o models, there seems to be a pretty decent chance that you ask it to change something specific, it says it will, and then it spits out line-for-line identical code to what you just asked it to change.
I have had some really cool success with AI finding optimizations in my code, but only when specifically asked, and even then I just read the response as theory and go write it myself, often in 1-15% of the LoC the LLM produced.
I’ve found AI tools extremely helpful in getting me up to speed with a library or defining an internal override not exposed by the help. However, if I’m not explicit in how to solve a problem the result looks like the bad code it’s been ingesting.
I "love" this part:
An extremely useful and insightful comment. Then you look where it's actually used,
... so like, the entire function and its call (and its needlessly verbose comment) could be removed because the existence of the directory is being checked anyway by pathlib.
This might not matter here because it's a small, trivial example, but if you have 10, 50, 100, 500 developers working on a codebase, and they're all thoughtlessly slinging code like this in, you're going to have a dumpster fire soon enough.
I honestly think "vibe coding" is the best use case for AI coding, because at least then you're fully aware the code is throwaway shit and don't pretend otherwise.
edit: and actually looking deeper, `ensure_dir_exists` actually makes the directory, except it's already been made before the function is called so... sigh. Code reviews are going to be pretty tedious in the coming years, aren't they?
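For reference, a sketch of the stdlib one-liner such a helper typically duplicates (the path here is made up):

    from pathlib import Path

    # Creates the directory and any missing parents; a no-op if it
    # already exists. Presumably all ensure_dir_exists needs to do.
    Path("data/output").mkdir(parents=True, exist_ok=True)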
Not all code needs to be written at a high level of quality. A good deal of code just needs to work. Shell scripts, one-offs, linter rules, etc.
It'll be really interesting to see if the tech advances fast enough that future AI can deal with the tech debt of present day AI or if we'll see a generational die off of apps/companies.
I expect some of the big companies that went all in on relying on AI to fall in the coming years.
It will take some time tho, as decision makers will struggle to make up reasons why no one on the payroll is able to fix production.
You’re objectively correct in a business context, which is what most software is for. For me, seeing AI slop code more and more is just sad from a craft perspective.
Software that’s well designed and architected is a pleasure to read and write, even if a lower quality version would get the job done. I’m watching one of the things I love most in the world become more automated and having the craftsmanship stripped out of it. That’s a bit over dramatic from me, but it’s been sad to watch.
It’s probably the same way monks copying books felt when the printing press came along. “Look at this mechanical, low-quality copy. It lacks all finesse and flourish of the pen!”
I agree with you that it is sad. And what is especially sad is that the result will probably be lower quality overall, but much cheaper. It’s the inevitable result of automation.
I feel exactly the same way, it’s profoundly depressing.
Having seen my fair share of those, they tend to work either until they don't, or until you need to somehow change them.
Also, somewhat strangely, I've found Python output has remained bad, especially for me with dataframe tasks/data analysis. For remembering matplotlib syntax I still find most of them pretty good, but for handling dataframes, very bad and extremely counterproductive.
That said, for typed languages like TypeScript and C#, they have gotten very good. I suspect this is related to the semantic information that can be found in typed languages, as opposed to hard-to-follow unstructured blobs like dataframes, which are therefore not reproduced well by LLMs.
Spark especially is brutal for some reason. Even Databricks' AI is bad at Spark, which is very funny.
It's probably because Spark is so backwards compatible with pandas, but not fully.
Here's a real example from today:
I asked $random_llm to give me code to recursively scan a directory and give me a list of file names relative to the top directory scanned and their sizes.
It gave me working code. On my test data directory it needed ... 6.8 seconds.
After 5 min of eliminating obvious inefficiencies, the new code needed ... 1.4 seconds. And I didn't even read the docs for the functions used yet; I just changed what seemed to generate too many filesystem calls for each file.
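Roughly the kind of change involved (a sketch, not the exact code):

    import os

    # os.scandir yields DirEntry objects: is_dir() can usually be
    # answered from the directory listing itself, and stat() results
    # are cached, avoiding repeated filesystem calls per entry.
    def scan_tree(top: str) -> list[tuple[str, int]]:
        results: list[tuple[str, int]] = []

        def walk(dir_path: str, prefix: str) -> None:
            with os.scandir(dir_path) as entries:
                for entry in entries:
                    rel = prefix + entry.name
                    if entry.is_dir(follow_symlinks=False):
                        walk(entry.path, rel + "/")
                    else:
                        size = entry.stat(follow_symlinks=False).st_size
                        results.append((rel, size))

        walk(top, "")
        return results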
Nice, sounds like it saved you some time.
You "AI" enthusiasts always try to find a positive spin :)
What if I had trusted the code? It was working after all.
I'm guessing that if I asked for string manipulation code it would have done something worth posting on Accidentally Quadratic.
In my opinion this isn't even too relevant. I am no Python expert, but I believe defining a logger at the top of the average one-file Python script is perfectly adequate, or even very sensible in many scenarios. Depends on what you expect the code to do. OK, the file is named utils.py...
Worse by far is still the ability of AI to really integrate different problems and combine them into a solution. And it also seems to depend on the language. In my opinion, Python and JS results especially are often very mixed, while other languages with presumably a smaller training set might even fare better. JS often seems fine with asynchronous operations like that file check, however.
Perhaps really vetting a training set would improve AIs, but it would be quite work-intensive to build something like that. That would require a lot of senior devs, who are hard to come by. And then they'd need to agree on code quality, which might be impossible.
This is a logging setup being done top-level in an auxiliary module "utils": you might import it into one command and not another, and end up surprised that one gets the logging setup and the other doesn't. Or you might attempt to configure it and the import would override it.
As for getting a lot of code that was vetted by senior engineers, that's not so hard: you just have to pay for it. Basically, any company could — for a price — consider sharing their codebase for training.
As an actually unseasoned Python developer, would you be so kind as to explain why the problems you see are problems and their alternatives? Particularly the first two you note.
The call to logging.basicConfig happens at import time, which could cause issues in certain scenarios. For a one-off script, it's probably fine, but for a production app, you'd probably want to set up logging during app startup from whatever your main entry point is.
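For example (a sketch; names are illustrative):

    import logging

    # main.py -- configure logging once, at startup, from the entry
    # point, instead of as a side effect of importing a utility module.
    def main() -> None:
        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s %(name)s %(levelname)s %(message)s",
        )
        # ... run the application ...

    if __name__ == "__main__":
        main()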
The Python standard library has a configparser module, which should be used instead of custom code. It's safer and easier than manual parsing. The standard library also has a tomllib module, which would be an even better option IMO.
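For example (a sketch; file names and keys are made up):

    import configparser
    import tomllib  # stdlib since Python 3.11

    # INI-style config via the stdlib instead of hand-rolled parsing
    parser = configparser.ConfigParser()
    parser.read("settings.ini")
    smtp_host = parser["smtp"]["host"]

    # Or TOML, which I'd prefer for new projects
    with open("settings.toml", "rb") as f:
        config = tomllib.load(f)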
Regarding your first paragraph, we still don't understand what the issue actually is.
>to a raciness in load_json where it's checked for file existence with an if and then carrying on as if the file is certainly there...
Explain the issue with load_json to me more. From my reading it checks if the file exists, then raises an error if it does not. How is that carrying on as if the file is certainly there?
There is a small amount of time between the `if` and the `with` where another process can delete the file, hence causing a race condition. Attempting to open the file and catching any exceptions raised is generally safer.
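To make the window concrete (an illustrative sketch, not the project's exact code):

    import json
    import os

    path = "data.json"  # illustrative

    # LBYL ("look before you leap"): the pattern being criticized
    if os.path.exists(path):   # check...
        # <-- another process can delete the file right here
        with open(path) as f:  # ...then use; can still raise FileNotFoundError
            data = json.load(f)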
Won't it throw the same FileNotFoundError in that case? The issue, I suppose, is bothering to check if it exists in the first place.
OK, that does make sense. Thanks!
Thanks for looking into it.
While I would have hoped for a better result, I'm not surprised. In this particular case, I really didn't care about the code at all; I cared about the end result at runtime, that is, can I create a working, stable solution that solves my problem, in a tech stack I'm not familiar with?
(While still taking care of well-structured requirements and guard rails — not to guarantee a specific level of code quality per se, but to ensure that the AI works towards my goals with as little intervention from me as possible).
I will spin up another session where I ask it to improve the implementation, and report back.
I'd definitely be curious to see if another session provides higher quality code — good luck, and thanks for taking this amicably!
I did another session with the sole focus being on code quality improvements.
The commit with all the changes that Cursor/claude-3.7-sonnet (thinking) made is at https://github.com/dx-tooling/platform-problem-monitoring-co....
As you can see, I've fed your feedback verbatim:
You can watch a screen recording of the resulting Agent session at https://www.youtube.com/watch?v=zUSm1_NFKpA — I think it's an interesting watch because it nicely shows how the tool-based guard rails help the AI to keep on track and reach a "green" state eventually.
I disagree, I think it's absolutely astounding that they've gotten this good in such a short time, and I think we'll get better models in the near future.
By the way, prompting models properly helps a lot for generating good code. They get lazy if you don't explicitly ask for well-written code (or put that in the system prompt).
It also helps immensely to have two contexts, one that generates the code and one that reviews it (and has a different system prompt).
> They get lazy if you don't explicitly ask for well-written code (or put that in the system prompt).
This is insane on so many levels.
Computer, enhance 15 to 23.
Makes sense, given that so many of these tools are trained on hello-world examples where this kind of configuration is okay. Not like this will matter in a world where there are no juniors to replace aged-out seniors because AI was "good enough"...
> This is especially noteworthy because I don’t actually know Python.
> However, my broad understanding of software architecture, engineering best practices, system operations, and what makes for excellent software projects made this development process remarkably smooth.
If the seniors are going to write this sort of Python code and then talk about how knowledge and experience made it smooth or whatever, might as well hire a junior and let them learn through trials and tribulations.
How do you properly configure a logger in application like that?
Just imagine a call site that configures a logger in another way and then imports the utils module for a single function: its configuration gets overridden by the one in utils.
There are plenty of ways to structure code so this does not happen, but simply "do not do anything at the top module level" will ensure you don't hit these issues.
Usually you would do it in your main function, or in a code path starting from there. Executing code with non-local side effects during import is generally frowned upon. Maybe it's fine for a project-local module that won't be shared, but it's a bad habit and can make things hard to track down.
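For example (a sketch):

    # utils.py -- no side effects at import time; modules just grab a
    # named logger and leave configuration to the entry point.
    import logging

    logger = logging.getLogger(__name__)

    def do_work() -> None:
        logger.info("working")  # uses whatever config main() set up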
I can say it isn't any better for JS/Node/Deno/Bun projects that I've seen or tried. About the only case where it's been helpful (GitHub Copilot) is in creating boilerplate .sql files for schema creation, and in that it became a kind of auto-complete on overdrive. It still made basic missteps though.
> to a raciness in load_json where it's checked for file existence with an if and then carrying on as if the file is certainly there...
It's not a race. It's just redundant. If the file does not exist at the time you actually try to access it, you get the same error with a slightly better error message.
There is a log message that won't be output in that case: whether getting a full, "native" FileNotFound exception is better is beside the point, since the goal of the code was obviously to print a custom error message.
And it's trivial to achieve the desired effect sanely:
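Something like this (a sketch; the real function's signature may differ):

    import json
    import logging

    logger = logging.getLogger(__name__)

    def load_json(file_path):
        # EAFP: just try to open; catch the error if the file is missing
        try:
            with open(file_path) as f:
                return json.load(f)
        except FileNotFoundError:
            logger.error("File not found: %s", file_path)
            raise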
It'd even be fewer lines of code.
Or even fewer by doing it in a global exception handler instead of every time you try to open a file, since all you're doing is piping the error through the logger.
wrap_long_lines shares those characteristics:
https://github.com/dx-tooling/platform-problem-monitoring-co...
Where things are placed in the project seems rather ad hoc too. A put-everything-in-the-same-place kind of architecture. A better strategy might be to separate out the I and the O of IO. Maybe someone wants SMS or group-chat notifications later on; instead of shifting the numbers in filenames from step11_ onwards, one could then add a directory in the O part and hook it into an actual application core.
> instead of shifting the numbers in filenames step11_ onwards
There are idioms used when programming in BASIC on how to number the lines so you don't end up renumbering them all the time to make an internal change. It's interesting that such idioms are potentially applicable here also.
Yup, this tracks with what I have seen as well. Most devs who use this daily are usually junior devs or JavaScript devs, who both write sloppy, questionable code.
Perhaps that's partly because 90% of the training data used to teach LLMs to code was made by junior engineers?
100%!
But the alternative would be the tool doesn't get built because the author doesn't know enough Python to even produce crappy code, or doesn't have the money to hire an awesome Python coder to do that for them.
If you check elsewhere in this thread, the author decided on Python to test out AI capabilities — they could have built it quickly in a language of their choice. I am sure I could have built it quickly in Python to a higher standard of quality.
Perhaps they wouldn't have built it because they did not set the time aside for it, like they did for this experiment (+ the blog post).
Doesn’t load_json throw if the file doesn’t exist?
Yes, but then why do the check in the first place?
Thanks for doing the footwork. These TED talk blog posts always stink of phony-baloney nonsense.