Comment by csto12

9 months ago

You just asked it to design or implement?

If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?

Why would I do that kind of research if it can identify the problem I am trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.

  • Because down that path lies skill atrophy.

    AI research has a thing called "the bitter lesson": the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromises the performance of the system[0].

    The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.

    The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.

    [0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".

    • Here on HN you frequently see technologists using words like savant, genius, magical, etc, to describe the current generation of AI. Now we have vibe coding, etc. To me this is just a continuation of StackOverflow copy/paste where people barely know what they are doing and just hammer the keyboard/mouse until it works. Nothing has really changed at the fundamental level.

      So I find your assessment pretty accurate, if only depressing.

      1 reply →

    • > Because down that path lies skill atrophy.

      Maybe, but I'm not completely convinced by this.

      Prior to ChatGPT, there were times when I wanted to build a project (e.g. implement Raft or Paxos): I'd write a bit, hit a point where I got stuck, decide that the project wasn't that interesting after all, give up, and learn nothing.

      What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint as to why something isn't working like it should, and that's the slight push I need to power through the project. And since I actually finish the project, I almost certainly learn more than I would have before.

      I've done this a bunch of times now, especially when I am trying to implement something directly from a paper, which I personally find can be pretty difficult.

      It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
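
      For example, the kind of thing I mean (a made-up Python sketch; the function and event names are invented, not from any real project):

        # Hypothetical example of the tedious branching an LLM can type out for me.
        def handle_event(kind: str, payload: dict) -> str:
            if kind == "created":
                return f"created {payload.get('id')}"
            elif kind == "updated":
                return f"updated {payload.get('id')}"
            elif kind == "deleted":
                return f"deleted {payload.get('id')}"
            elif kind == "archived":
                return f"archived {payload.get('id')}"
            else:
                raise ValueError(f"unknown event kind: {kind}")

      Imagine that, but with a few dozen branches that each differ just enough that you can't collapse them into a lookup table.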

      3 replies →

    • > Because down that path lies skill atrophy.

      I wonder how many programmers have assembly code skill atrophy?

      Few people will mourn the death of the need to use abstract logical syntax to communicate with a computer, just like few people mourn the death of having to type out individual register manipulations.

      2 replies →

  • Because as far as you know, the "rough implementation" only works in the happy path and there are really bad edge cases that you won't catch until they bite you, and then you won't even know where to look.

    An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out), plus you get maintenance updates for free.

  • I was pointing out that if you spent 2 weeks trying to find the solution but AI solved it within a day (you don’t specify how long the final solution by AI took), it sounds like those two weeks were not spent very well.

    I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.

  • Who hired you and why are they paying you money?

    I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?

    This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.

  • So you could stick your own copyright notice on the result, for one thing.

    • What's the point of holding copyright on a new technical solution to a problem that anyone can solve by asking an existing AI, trained on last year's internet, independently of your new copyright?

      5 replies →

yeah, unless you have very specific requirements, I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...

  • Dunno, at work we wanted to implement a task runner that we could use to periodically queue tasks through a web UI; it would then spin up resources on AWS, track the progress, and archive the results.

    We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.

    Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
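
    For the curious, the overall shape was something like this (a minimal Python sketch; all names are invented and a stub stands in for the actual AWS provisioning, so treat it as the pattern rather than our code):

      # Sketch of the task-runner pattern: submit() queues a task, a
      # background worker "provisions" and runs it, and a shared table
      # tracks status so a web UI could poll for progress and results.
      import threading
      import time
      import uuid

      TASKS: dict[str, dict] = {}  # task_id -> {"status": ..., "result": ...}

      def provision_and_run(spec: dict) -> str:
          # Stand-in for spinning up AWS resources and running the job;
          # invented for this sketch.
          return f"ran {spec.get('name', 'task')}"

      def submit(spec: dict) -> str:
          task_id = uuid.uuid4().hex
          TASKS[task_id] = {"status": "queued", "result": None}

          def worker() -> None:
              TASKS[task_id]["status"] = "running"
              try:
                  TASKS[task_id]["result"] = provision_and_run(spec)
                  TASKS[task_id]["status"] = "done"  # archive result here
              except Exception as exc:
                  TASKS[task_id] = {"status": "failed", "result": str(exc)}

          threading.Thread(target=worker, daemon=True).start()
          return task_id

      if __name__ == "__main__":
          tid = submit({"name": "nightly-report"})
          time.sleep(0.1)
          print(tid, TASKS[tid])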

    • the future is more custom software designed by ai, not less. a lot of frameworks will disappear once you can build sophisticated systems yourself. people are missing this

      6 replies →