Comment by mattmaroon

1 day ago

Meanwhile, my cofounder is rewriting code we spent millions of salary on in the past by himself in a few weeks.

I myself am saving a small fortune on design and photography and getting better results while doing it.

If this is "not all that well," I can't wait until we get to mediocre!

> Meanwhile, my cofounder is rewriting code we spent millions of salary on in the past by himself in a few weeks.

Code is not an asset, it's a liability, and code that no one has reviewed is even more of a liability.

However, in the end, execution is all that matters so if you and your cofounder are able to execute successfully with mountains of generated code then it doesn't matter what assets and liabilities you hold in the short term.

The long term is a lot harder to predict in any case.

  • > Code is not an asset, it's a liability, and code that no one has reviewed is even more of a liability.

    Code that solves problems and makes you money is by definition an asset. Whether or not the code in question does those things remains to be seen, but code is not strictly a liability or else no one would write it.

  • Developers that can’t see the change are blind.

    Just this week, Sun–Tue, I added a fully functional subscription model to an existing platform, built out bulk async elasticjs indexing for a huge database, and migrated a very large WordPress website to NextJS. 2.5 days; it would have cost me at least a month two years ago.

    • To me, this sounds like:

      AI is helping me solve all the issues that using AI has caused.

      WordPress has a pretty good export and Markdown is widely supported. If you estimate one month of work to get that into NextJS, then maybe the latter is not a suitable choice.

      12 replies →

  • > Code is not an asset, it's a liability

    This would imply companies could delete all their code and do better, which doesn't seem true?

    • A more accurate description of code is that it's a depreciating asset, perhaps, or an asset that carries a maintenance cost. Neither of which is a liability.

All the productivity enhancement provided by LLMs for programming is caused by circumventing the copyright restrictions of the programs on which they have been trained.

You and anyone else could have avoided spending millions for programmer salaries, had you been allowed to reuse freely any of the many existing proprietary or open-source programs that solved the same or very similar problems.

I would have no problem with everyone being able to reuse any program, without restrictions, but with these AI programming tools the rich are now permitted to ignore copyrights, while the poor remain constrained by them, as before.

The copyright for programs has caused a huge multiplication of the programming effort for many decades, with everyone rewriting again and again similar programs, in order for their employing company to own the "IP". Now LLMs are exposing what would have happened in an alternative timeline.

The LLMs have the additional advantage of fast and easy searching through a huge database of programs, but this advantage alone would not have been enough for a significant productivity increase over a competent programmer who would have searched the same database by traditional means to find reusable code.

  • > the rich are now permitted to ignore copyrights, while the poor remain constrained by them, as before.

    Claude Code is $20 a month, and I get a lot of usage out of it. I don't see how cutting edge AI tools are only for the rich. The name OpenAI is often mocked, but they did succeed at bringing the cutting edge of AI to everyone, time and time again.

  • Intellectual property law is a net loss to humanity, so by my reckoning, anything which lets us all work around that overhead gets some extra points on the credit side of the ledger.

    • I agree in spirit, but in actual fact this subversion of intellectual property is disproportionately beneficial to those who can afford to steal from others and those who can afford to enforce their copyright, while disproportionately disadvantageous to those who can't afford to fend off a copyright lawsuit or can't afford to sue to enforce their copyright.

      The GP can free-ride uncredited on the collective work of open source at their leisure, but I'm sure Disney would string me up by my earlobes if I released a copywashed version of Toy Story 6.

  • Then it really proves how much the economy would be booming if we abolished copyright, doesn't it? China ignores copyright too, and look at them surpassing us in all aspects of technology, while Western economies choose to sabotage themselves to keep money flowing upwards to old guys.

    • Well no, because copyright != cannot use.

      "Available for use" and "Automatically rewritten to work in your codebase fairly well" is very different, so copyright is probably not the blocker technically

      1 reply →

    • China is not surpassing the US in all aspects of technology.

      There is still much for them to steal.

    • The theory behind copyright is that the enshrined monopoly guarantees profits and thus encourages r&d.

      China steals our r&d (both copyrighted and non) and gets a lot of theirs from state funding.

      I don’t think I’d take China’s success as proof that the copyright system doesn’t work.

      2 replies →

> Meanwhile, my cofounder is rewriting code we spent millions of salary on in the past by himself in a few weeks.

Why?

Im not even casting shade - I think AI is quite amazing for coding and can increase productivity and quality a lot.

But I'm curious why he's doing this.

  • The codebase is old and really hard to work on. It’s a game that existed pre-iPhone and still has decent revenue but could use some updating. We intentionally shrank our company down to auto-pilot mode and frankly don’t even have a working development environment anymore.

    It was basically cost prohibitive to change anything significant until Claude became able to do most of the work for us. My cofounder (also CTO of another startup in the interim) found himself with a lot of time on his hands unexpectedly and thought it would be a neat experiment and has been wowed by the results.

    Much in the same way people on HN debate when we will have self-driving cars while millions of people actually have their Teslas self-driving every day (it reminds me of when I got to bet that Joe Biden would win the election after he already had), those who think AI coding is years away are missing what's happening now. It's a powerful force multiplier in the hands of a skilled programmer, and it'll only get better.

    • Millions? Of people in self-driving Teslas?

      The actual number of such vehicles produced is two orders of magnitude less.

    • Sounds like a good reason to rewrite. And sounds like a rewrite just would not happen by any other means. Thanks for sharing the details.

    • When I say I want a self driving car I mean one that actually drives itself so I don't have to be involved other than setting the destination.

      What Tesla is selling now is the worst of both worlds. You still have to pay attention but it's way more boring so it's really hard to do so. Well until it suddenly decides to ram a barrier at highway speeds.

      Wake me up when I can have a beer and watch a movie while it's driving.

It's not directly comparable. The first time writing the code is always the hardest because you might have to figure out the requirements along the way. When you have the initial system running for a while, doing a second one is easier because all the requirements kinks are figured out.

By the way, why does your co-founder have to do the rewrite at all?

  • You can compare it - just factor that in. And compare writing it with AI vs. writing it without AI.

    We have no clue the scope of the rewrite but for anything non-trivial, 2 weeks just isn't going to be possible without AI. To the point of you probably not doing it at all.

    I have no idea why they are rewriting the code. That's another matter.

  • I find the opposite to be true. Once you know the problem you're trying to solve (which admittedly can be the biggest lift), writing the first cut of the code is fun, and you can design the system and set precedent however you want. Once it's in the wild, you have to work within the consequences of your initial decisions, including the bad ones.

    • ... And the undocumented spaghetti code that might come with a codebase that was touched by numerous hands.

G’day Matt from myself another person with a cofounder both getting insane value out of AI and astounded at the attitudes around HN.

You sound like complete clones of us :-)

We’ve been at it since July and have built what used to take 3-5 people that long.

To the haters: I use TDD and review every line of code, I’m not an animal.

There’s just 2 of us but some days it feels like we command an army.

lol same. I just wrote a bunch of diagrams with mermaid that would legit have taken me a week, and also did a mock of a UI for a frontend engineer that would have taken me another week to do... or some designers. All of that in between meetings...

Waiting for it to actually go well to see what else I can do !

  • The more I have this experience and read people maligning AI for coding, the more I think the junior developers are actually not the ones in danger.

    • Oh I've thought this for years. As an L7, basically my primary role is to serve as someone to bounce ideas off of, and to make recommendations based on experience. A chatbot, with its virtually infinite supply of experience, could ostensibly replace my role way sooner than it could a solid junior/mid-level coder. The main thing it needs is a consistent vision and direction that aligns with the needs of nearby teams, which frankly sounds not all that hard to write in code (I've been considering doing this).

      Probably the biggest gap would be the ability to ignite, drive, and launch new initiatives. How does an AI agent "lead" an engineering team? That's not something you can code up in an agent runtime. It'd require a whole culture change that I have a hard time seeing in reality. But of course if there comes a point where AI takes all the junior and mid-level coding jobs, then at that point there's no culture to change, so staff/principal jobs would be just as at risk.

      6 replies →

  • I have been able to prototype way faster. I can explain how I want a prototype reworked and it's often successful. Doesn't always work, but super useful more often than not.

In this thread: people throwing shade on tech that works, comparing it to a perfect world and making weird assumptions (no tests, no E2E or manual testing) just to make a case.

Hot take: most SWEs produce shit code, be it from constraints of any kind or their own abilities. LLMs do the same but cost less and can move faster. If you know how to use them, the code will be fine.

Code is a commodity, and a lot of people will be blindsided by that in the future. If your value proposition is translating requirements into code, I feel sorry for you. The output quality of the LLM depends on the abilities of the operator, and most SWEs lack the systems thinking to be good here, in my experience.

As a fractional CTO, and in my decade as co-founder/CTO, I've seen a lot of people and codebases, and most of it is just bad. You need to compare real-life codebases and developer output, not what people wish they were like. And the reality is that most of it sucks and most SWEs are bad at their jobs.

When I read the blog post, the impression I get is that the author is referring to the proposed "business" of licensing or selling "generative AI" (i.e., making money for the licensor or seller), not whether generative AI is saving money for any particular user.

The author's second reference, an article from The Atlantic describing the copyright liability issues with "generative AI", has been submitted to HN four times in the last week:

AI Memorization Research (theatlantic.com)

2 points by tagyro 5 hours ago

AI's Memorization Crisis (theatlantic.com)

2 points by twalichiewicz 1 day ago | 1 comment

AI's Memorization Crisis (theatlantic.com)

3 points by palad1n 4 days ago | 1 comment

AI's Memorization Crisis (theatlantic.com)

4 points by casparvitch 4 days ago

Sounds like an argument for better hiring practices and planning.

Producing a lot of code isn’t proof of anything.

  • Yep. Let’s see the projects and more importantly the incremental returns…

Senior developer here, your co-founder is making a huge mistake. Their lack of knowledge about the codebase will be your undoing. PS. I work in GenAI.

Is the cofounder "rewriting" that code while providing zero of the existing code as context? Doing it in a completely greenfield fashion?

Or is any of the existing platform used as an input for the rewrite?

>rewriting code

Key thing here: the code was already written, so rewriting it isn't exactly adding a lot of quantifiable value. If millions hadn't been spent in the first place, there would be no code to rewrite.

No need to wait; by using AI you are already mediocre at best (because you forgo skill and quality for speed).

>I myself am saving a small fortune on design and photography and getting better results while doing it.

Is this because you are improving your already existing design and photography skills and business?

Or have you bootstrapped from scratch with AI?

Do you mind sharing or giving a hint?

Thanks!

> I myself am saving a small fortune on design and photography and getting better results while doing it.

Yay! Let's put all the artists out of business and funnel all the money to the tech industry. That's how to build a vibrant society. Yay!

> I myself am saving a small fortune on design and photography and getting better results while doing it.

Tell me you have bland taste without telling me you have bland taste. But if your customers eat it up and your slop manages to stand out in a sea of slop, who am I to dislike slop.

Good luck fixing that future mess. This is such an incredibly short-sighted approach to running a company and software development that I think your cofounder is likely going to torpedo your company.

> Meanwhile, my cofounder is rewriting code we spent millions of salary on in the past by himself in a few weeks.

If the LLM generating the code introduced a bug, who will be fixing it? The founder that does not know how to code or the LLM that made the mistake first?

Doesn't this imply that you were not getting the expected level of efficiency out of your investment? It would be a little odd to say this publicly, as it says more about you and your company. The question is what your code does and whether it is profitable.

> Meanwhile, my cofounder is rewriting code we spent millions of salary on in the past by himself in a few weeks.

This is one of those statements that would horrify any halfway competent engineer. A cowboy coder going in, seeing a bunch of code and going 'I should rewrite this' is one of the biggest liabilities to any stable system.

  • I assume this is because they're already insanely profitable after hitting PMF and are now trying to bring down infra costs?

    Right? RIGHT?!

  • My cofounder is an all the way competent engineer. Making this many assumptions would horrify someone halfway competent with logic though.

    • It's crazy how some people here will just make all the assumptions possible in order to refuse to believe you. Anyone who's used a good model with open code or equivalent will know that it's plausible. Refactoring is really cheap now when paired with someone competent.

      I'm doing the same as your co-founder currently. In a few days, I've rewritten old code that took previous employees months to do. Their implementation sucked and barely worked, the new one is so much better and has tests to prove it.

      1 reply →

  • Every professional SWE is going to stare off into the middle distance as they flash back to some PM or VP deciding to show everyone they've still got it.

    The "how hard could it be" fallacy claims another!

    • LLMs do the jobs of developers, thereby eating up countless jobs.

      LLMs do the jobs of developers without telling semi-technical arrogant MBA holders “no, you’re dumb”, thereby creating all the same jobs as before but also a butt-ton more juggling expensive cleanup mixed with ego-massaging.

      We’re talking a 2-10x improvement in ‘how hard could it be?’ iterations. Consultant candy.

    • As someone who is more involved in shaping the product direction rather than engineering what composes the product - I will readily admit many product people are utterly, utterly clueless.

      Most people have no clue the craftsmanship, work etc it takes to create a great product. LLMs are not going to change this, in fact they serve as a distraction.

      I’m not a SWE so I gain nothing by being bearish on the contributions of LLMs to the real economy ;)

      1 reply →

    • This has become my new hell.

      PM has an idea. PM vibe codes a demo of this idea. PM shows it to the VP. VP gets excited and says "when can we have this." I look at the idea and estimate it'll take two people six months. VP and PM say "what the heck, but AI built the demo in a weekend, you should be able to do this with one engineer in a month." I get one day closer to quitting.

I suspect he means as a trillion dollar corporation led endeavor.

I trained a small neural net on pics of a cat I had in the 00s (RIP George, you were a good cat).

Mounted a webcam I had gotten for free from somewhere, above the cat door, in the exterior of the house.

If the neural net recognized my cat, it switched off an electromagnet holding the pet door locked. Worked perfectly until I moved out of the rental.
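The control loop for a setup like this is simple enough to sketch. In the snippet below, `classify` and `set_lock` are hypothetical stand-ins for the model inference and the relay/GPIO call, and the confidence threshold is an invented example:

```python
def door_controller(classify, set_lock, frames, threshold=0.9):
    """Unlock the pet door only for frames the classifier is
    confident show the right cat; keep it locked otherwise."""
    events = []
    for frame in frames:
        p = classify(frame)      # probability this frame shows our cat
        if p >= threshold:
            set_lock(False)      # cut power to the electromagnet
            events.append("unlock")
        else:
            set_lock(True)       # keep the magnet energized
            events.append("locked")
    return events
```

Passing the classifier and lock-setter in as functions keeps the decision logic testable without a camera or relay attached.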

Neural nets are, at the end of the day, pretty cool. It's the data center business that's the problem. Just more landlords, wannabe oligarchs, claiming ownership over anything they can get the politicians to give them.

On design and photography? So you’re filling your product with slop images and graphics? Users won’t like it

The problem is... you're going to deprive yourself of the talent chain in the long run, and so is everyone else who is switching over to AI, both generative like ChatGPT and transformative like the various translation, speech recognition/transcription or data wrangling models.

For now, it works out for companies. But fast-forward, say, ten years into the future: there won't be new intermediates or seniors any more to replace the ones who age out or quit the industry entirely, frustrated that they're not there for actual creativity but to clean up AI slop, simply because there won't have been a pipeline of trainees and juniors for a decade.

But by the time that, plus the demographic collapse, shows its effects, the people who currently call the shots will be retired, having long since made their money. And my generation will be left with collapse everywhere, finding ways to somehow keep stuff running.

Hell, it's already hard to get qualified human support these days. Large corporations effectively rule with impunity; the only recourse consumers have is either shelling out immense sums for lawyers and court fees, or turning to consumer protection/regulatory authorities that are being gutted as we speak, both in funding and in legal protections, or being swamped with AI slop like "legal assistance" AI hallucinating case law.

  • > There won't be new intermediates or seniors any more to replace the ones that age out or quit the industry entirely in frustration of them not being there for actual creativity but to clean up AI slop, simply because there won't have been a pipeline of trainees and juniors for a decade.

    There are plenty of self-taught developers who didn't need any "traineeship". That proportion will increase even more with AI/LLMs and the fact that there are no more jobs for youngsters. And actually, from looking at the purely toxic comments in this thread, I would say it's a good thing for youngsters not to be exposed to such "seniors".

    Credentialism is dead. "Either ship or shut up" should be the mantra of this age.

    • More like "Either slop or shut up". Classic startup culture, fuck processes and doing things right, it's all about larping and lying to investors. Damn right, your value as an engineer is all about how much slop you can churn out, I'd love (not) to be in a team filled with people like you.