Comment by capyba
14 days ago
Your last sentence describes my thoughts exactly. I try to incorporate Claude into my workflow, just to see what it can do, and the best I've ended up with is this: if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I'd understand the details far better.
Even just some AI-assisted development in the trickier parts of my code bases completely robs me of understanding. And those are the parts that need my understanding the most!
I don't really understand how this is possible. I've built some very large applications, and even a full LLM data curation, tokenizer, pretraining, and post-training SFT/DPO pipeline with LLMs, and it most certainly took far less time than if I had done it manually. Sure, it isn't all optimal... but it most certainly isn't subpar, and it is fully functional.
So you skipped the code review and just checked that it does what you needed it to do?
Nay, I periodically go through phases in most of my large applications where I do nothing but test, refine, refactor, consolidate, and modularize (the multi-thousand-line monoliths that LLMs love to make so much) for days before resuming my roadmaps.
This part is honestly one of the most satisfying, where you merge the already existing functionality with beautiful patterns.
I also make good use of pre-commit hook scripts to catch drift where possible.
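Something like this, as a minimal sketch: a script dropped into .git/hooks/pre-commit and marked executable, which blocks the commit when checks fail. The ruff and pytest commands are just placeholders for whatever linters/tests your repo actually runs.

    #!/usr/bin/env python3
    """Minimal pre-commit hook sketch: block the commit if checks fail.

    Copy to .git/hooks/pre-commit and make it executable. ruff and pytest
    are placeholders for whatever checks your repo actually uses.
    """
    import subprocess
    import sys

    def run(cmd):
        print("pre-commit: running", " ".join(cmd))
        return subprocess.run(cmd).returncode

    def main():
        # Look only at files that are actually staged for this commit.
        staged = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True,
        ).stdout.split()
        py_files = [f for f in staged if f.endswith(".py")]
        if not py_files:
            return 0

        # Lint the staged files, then run the fast test suite.
        for cmd in (["ruff", "check", *py_files], ["pytest", "-q", "-x"]):
            if run(cmd) != 0:
                print("pre-commit: check failed, commit blocked", file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())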
I don't know how anyone can make this assumption in good faith. The poster did not imply anything along those lines.
GPT-5 Codex variants with xhigh reasoning make great code reviewers.
> if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.
Of all the points the "other side" makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.
If you only understand the code by talking to the AI, you could ask it "how do we do a business feature" and it would spit out a detailed answer even for a codebase that just says "pretend there is a codebase here". That is of course an extreme example, and you would probably notice, but the same problem applies at every level.
Any detail, anywhere, cannot be fully trusted. I believe everyone's goal should be to prompt the AI such that the code is the source of truth, and to keep the code super readable.
If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.
> Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn't. We don't have to look at assembly, because a compiler produces the same result every time.
This is a valid argument. However, if you create test harnesses using multiple LLMs validating each other’s work, you can get very close to compiler-like deterministic behavior today. And this process will improve over time.
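As a very rough sketch of what that cross-checking could look like (ask_model is a hypothetical placeholder, not any real provider API, and "unanimous reviewers plus a green test run" is just one possible acceptance policy):

    import subprocess

    def ask_model(model: str, prompt: str) -> str:
        # Hypothetical placeholder: wire up your actual provider client here.
        raise NotImplementedError

    def tests_pass() -> bool:
        # Ground truth stays with the deterministic test suite, not the models.
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def cross_review(diff: str, reviewers=("model-a", "model-b", "model-c")) -> bool:
        """One model proposes a diff; the others independently review it."""
        verdicts = []
        for reviewer in reviewers:
            answer = ask_model(
                reviewer,
                "Answer APPROVE or REJECT only. Does this diff introduce bugs, "
                "security issues, or behavior beyond its stated intent?\n" + diff,
            )
            verdicts.append(answer.strip().upper().startswith("APPROVE"))
        # Accept only on unanimous approval AND a green test run.
        return all(verdicts) and tests_pass()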
> other side???
> We don’t have to look at assembly, because a compiler produces the same result every time.
This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, fought undefined behavior in C that two compilers handle differently, watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf knows that the "deterministic compiler" is doing a LOT of work in that sentence, work that actual compilers don't deliver on.
Also, what's with the "sides" and "camps"? ... Why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!
That will never happen unless we figure out a far simpler way to prove that a system does what it should. If you've ever had bugs crop up despite a full test suite, you know this is incredibly hard to do.
LLMs can't read your mind. In the end they're always taking the English prompt and making a bunch of fill-in-the-blank assumptions around it. This is inevitable if we're to get any productivity improvements out of them.
Sometimes it's obvious and we can catch the assumptions we didn't want (the div isn't centered! fix it, Claude!), and sometimes you actually have to read and understand the code to see that it's not going to do what you want under important circumstances.
If you want 100% perfect communication of the system in your mind, you should use a terse language built for exactly that: it's called code. We'd just write the code instead.
We can do both: write the code ourselves for the parts where it matters, and let the LLM handle the parts that aren't as critical.
People who really care about performance still look at the assembly. Very few people write assembly anymore, but a larger number look at it every so often. It's still a minority of people, though.
I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.
> I believe the argument from the other camp is that you don't need to understand the code anymore
Then what stops anyone who can type in their native language from, once LLMs are perfected, simply ordering their own software instead of using anybody else's (I mean native apps: video games, mobile, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies pouring gorillions of dollars of investment into this even give access to such power in an affordable way?
The deeper I look into the rabbit hole they think we're walking toward, the more issues I see.
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.
It gave me way more control over, and understanding of, what the AI would do, and the ability to iterate on the plan before actually implementing it.
Indeed. This is very much the way I use it at work. Present an idea of a design, iterate on it, then make a task/todo list and work through the changes piecemeal, reviewing and committing as I go. I find pair design/discussion practical here too. I expect to see smaller teams working like this in the future.
For small personal projects, it's more vibey... e.g. native home automation UIs & services for Mac & Windows, which I wouldn't otherwise have started. More itches that can be scratched in my limited time.
For quite a bit of software you would need to understand the assembly. Not everything is web services.
I've only needed assembly once in more than 20 years of programming, and I'm not a webdev.
It was during university, to get access to CPU counters for better instrumentation, like 15 years ago. Haven't needed it since.
I've found LLMs (since Opus 4.5) exceptionally good at reading, writing, and debugging assembly.
Give them gdb/lldb and have your mind blown!
That is what's hard about transitioning from coder to lead. A good coder makes full use of a single thread of execution. A good lead effectively handles the coordination of multiple threads. Different skills.
An LLM coding assistant today is an erratic junior team member, but its destructive potential is nowhere near some of the junior human engineers I've worked with. So it's worth building the skills and processes to work with them productively. Today, Claude is a singular thing. In six months or a year, it'll be ten or a hundred threads working concurrently on dozens of parts of your project. Either you'll be comfortable coordinating them, or you'll nope out of there and remain an effective but solitary human coder.
I use Conductor with git worktrees and will literally have 10 or 20 agents running at a time, getting pinged as they finish things for me to review, mostly getting rid of small tickets and doing random POCs while I focus on bigger stuff. The bottleneck has literally become that the company doesn't have enough work to give me. It only really works, however, because I have a lot of context and understanding of the codebase. It's already here.
I found coordinating skill usage across worktrees quite annoying; how are you managing this?
Then you are using it the wrong way.
Driving is a skill that needs to be learnt; the same goes for working with agents.
Are you giving it huge todos in one prompt, or working modularly?
Claude set up account creation/login with SSO, OTP, and email notifications in like 5 minutes and told me exactly what to do on the provider side. There's no way that wouldn't have taken me a few hours to figure out.
There is no way it's not faster across a large breadth of the work, unless maybe you're a fanatic who reviews and nitpicks every line of code to the extreme.
> I would have finished the project in the same amount of time
Probably less time, because you understood the details better.
Have you added agents.md files?
You have to do more than write prompts to get the more impressive results.
skill issue.
Sorry for being blunt, but if you have tried once or twice and came to this conclusion, it is definitely a skill issue. I never got comfortable by writing 3 lines of Java, Python, Go, or any other language; it took me hundreds of hours spent doing nonsense, failing miserably, and finding out that I was building things which already exist in the standard library.