Comment by ilrwbwrkhv

5 months ago

[flagged]

>There is no intelligence here and Claude 3.7 cannot create anything novel.

I wouldn't be surprised if people continued to deny the actual intelligence of these models even in a scenario where they solved the Riemann hypothesis.

"Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" - cit

Even when I feel this way, 90% of any novel thing I'm doing is still old gruntwork, and Claude lets me speed through that and focus all my attention on the interesting 10% (disclaimer: I work at Anthropic).

  • Do you think the "deep research" feature that some AI companies offer will ever apply to software? For example, I recently had to update Spring in a Java codebase. AI was only mildly helpful in figuring out why I was seeing some errors, but that's it.

  • One can also steal directly from GitHub and strip the license to avoid this grunt work. LLMs automate the stealing.

What percentage of their time at work does a developer actually spend on novel things?

  • That's because stacks/APIs/ecosystems are super complicated and require lots of reading and searching to figure out how to make things happen. Now that time will be reduced dramatically, and developers' time will shift to more novel things.

The threat is not autocomplete, it's translation.

"translating" requirements into code is what most developers' jobs are.

So "just" translation is a threat to job security of developers.

Built on top of stolen code, no less. HN hates to hear it, but LLMs are a huge step back for software freedom: as long as companies call it "AI" and politicians don't understand it, LLMs let companies launder GPL code and reuse it without credit and without giving users their rights.

This is BS and you are not listening and watching carefully.

  • Even the best LLMs today are just junior devs with a lot of knowledge. They make a lot of the same mistakes junior devs would make. Even their responses, when you point out those mistakes, are the same.

    If anything, it's a tool for junior devs to get better and spend more time on the architecture.

    Using AI code without fully understanding it (i.e. when it's operated by a non-programmer) is just a recipe for disaster.

    • The worst is when you tell it it's made a mistake and it agrees.

      "You're right, but I just like wasting your time"

  • OK, then show me a model that can answer honestly and correctly about whether or not it knows something.

This is pure cope

  • AI cannot write a simple Dockerfile. I don't know what kind of simple stuff you guys are writing. If AI can do it, then it should be an Excel sheet and not code.

    • I've been writing Dockerfiles with LLMs for over a year now - all of the top-tier LLMs do a great job of them in my experience.
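
      For a sense of scale, here is a minimal sketch of the kind of "simple" Dockerfile being argued about, assuming a generic pre-built Spring Boot jar (the base image tag and jar path are illustrative assumptions, not taken from the thread):

        # Hypothetical example: containerize a pre-built Spring Boot jar.
        FROM eclipse-temurin:21-jre
        WORKDIR /app
        # The jar path is an assumption; adjust to your actual build output.
        COPY target/app.jar app.jar
        EXPOSE 8080
        ENTRYPOINT ["java", "-jar", "app.jar"]

      Boilerplate at roughly this level is what's being described as routine for current LLMs; a multi-stage build that also compiles the jar would be the obvious next step up in complexity.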