Comment by johnfn
8 hours ago
As dumb as it is to loudly proclaim you wrote 200k loc last week with an LLM, I don’t think it’s much better to look at the code someone else wrote with an LLM and go “hah! Look at how stupid it is!” You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
Yeah! It's not like code quality matters in terms of negative value or lives lost, right?!
https://en.wikipedia.org/wiki/Horizon_IT_scandal
Furthermore,
> As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of them with zero bytes.
Do you think any of the... /things/ bundled in this software increased the attack surface it exposes?
I also struggle with this all the time: the balance between bringing value/joy and the level of craft. Most human-written software might look really ugly or be written in a weird way, but as long as it’s useful, it’s OK.
What I don’t like here is the bragging about the LoC. He’s not bragging about the value it could provide. Yes, people also write shitty code, but they don’t brag about it - most of the time they’re even ashamed of it.
> a stowaway text editor
?!
Was it hiding in one of the lifeboats?
The Horizon IT scandal was not caused by poor code quality; the scandal was caused by the corrupt employees of the UK government/Post Office. Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.
> Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.
That's not quite correct.
The root set of errors was made by the accounting software. The branch sets of errors were made by humans who took Horizon IT’s word for it that there was no fault in the code, and instead blamed the workers for the discrepancies in the balance sheets.
If there were no errors in the accounting software (i.e. it had been properly designed and tested), then none of that would have happened.
Nobody blames the Therac-25 accidents on the human operators.
> included multiple test harnesses (!)
I've seen plenty of real code written by real people with multiple test harnesses and multiple mocking libraries.
It's still kind of irrelevant to whether the code does anything useful; it's only a descriptor of the funding model.
If I'm reading this correctly ("a single homepage load of http://garryslist.org downloads 6.42 MB across 169 requests"), the test harnesses were being downloaded by end users. They weren't being installed as devDependencies.
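For context, here is a minimal sketch of how that distinction works in the npm ecosystem (the package names are illustrative, not Tan's actual manifest): test tooling belongs under `devDependencies`, where it is installed for development but excluded from production installs.

```json
{
  "name": "newsletter-blog-thingy",
  "dependencies": {
    "react": "^18.3.0"
  },
  "devDependencies": {
    "jest": "^29.7.0",
    "mocha": "^10.4.0"
  }
}
```

With this layout, a production install (`npm install --omit=dev`) never pulls in `jest` or `mocha`. If they were listed under `dependencies` instead, and the build pipeline vendored or bundled everything it installed, the test harnesses could end up in what end users download — which appears to be roughly what happened here.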
> Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
Let’s not be naive. Garry is not a nobody. He absolutely doesn’t care about how many lines of code are produced or deleted. He made that post as an advertisement: he’s advertising AI because he’s the CEO of YC, whose profitability depends on AI.
He’s just shipping ads.
"Follow the money" was always relevant, but especially when it comes to any kind of LLM news or investment-du-jour.
The cautionary/pessimist folks at least don't make money by taking the stance.
A few do.
At the extreme end you'll get invited to conferences, but further down you could have other products you're pushing, even non-AI ones that take advantage of your "smart person" public persona.
> You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
But the true metric isn't either one, it's value created net of costs. And those costs include the cost to create the software, the cost to understand and maintain it, the cost of securing it and deploying it and running it, and consequential costs, such as the cost of exploited security holes and the cost of unexpected legal liabilities, say from accidental copyright or patent infringement or from accidental violation of laws such as the Digital Markets Act and Digital Services Act. The use of AI dramatically decreases some of these costs and dramatically increases other costs (in expectation). But the AI hypesters only shine the spotlight on the decreased costs.
It isn't worth the time. I am not going to read the 200k LOC to prove it was a bad idea to generate that much code in a short time and ship it to production; it is on the vibe coder to prove it was a good one. And if it is just tweets being exchanged, and someone is boasting about LOC and aiming to make more LOC/second, yep, I'll judge 'em. It is stupid.
"Value generation" is a term I would be somewhat wary of.
To me, in this context, it's similar to driving economic growth with fossil fuels.
Whether it can result in a net benefit in the end (the value is larger than the cost of interacting with it plus the cost of sorting out the mess later) is likely impossible to say, but I don't think it can simply be judged by short-sighted value.
Given the framing of the article, I can understand where the opposite-direction comment is coming from. The author also sends mixed signals by simultaneously suggesting that the "laziness" of the programmer and the code are virtues. Yet I don't think they are ignoring value generation. Rather, I think they are suggesting that the value is in the quality of the code rather than in the problem being solved. This seems to be an attitude held by many developers who are interested in the pursuit of programming rather than in the end product.
The main value he generated from that exercise was the screenshot. It's a kind of credentialism.