> There is no intelligence here and Claude 3.7 cannot create anything novel.
I wouldn't be surprised if people continued to deny the actual intelligence of these models even in a scenario where they solved the Riemann hypothesis.
"Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" - cit
Even when I feel this, 90% of any novel thing I'm doing is still old gruntwork, and Claude lets me speed through that and focus all my attention on the interesting 10% (disclaimer: I'm at Anthropic)
Do you think the "deep research" feature that some AI companies have will ever apply to software? For example, I had to update Spring in a Java codebase recently. AI was only mildly helpful in figuring out why I was seeing some errors, but that's it.
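For concreteness, a typical breakage in that kind of upgrade (assuming it crossed the Spring Boot 2 -> 3 / Framework 6 boundary, which the comment doesn't specify) is the javax -> jakarta namespace move, which makes every servlet-related import fail to compile at once:

    // Before (Spring Boot 2.x / Framework 5, Java EE namespace):
    import javax.servlet.http.HttpServletRequest;

    // After (Spring Boot 3.x / Framework 6, Jakarta EE 9+ namespace);
    // the old javax.servlet import no longer resolves:
    import jakarta.servlet.http.HttpServletRequest;

Errors like this are mechanical to fix but scattered across the whole codebase, which is exactly the gap a "deep research"-style agent would need to cover.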
One can also steal directly from GitHub and strip the license to avoid this grunt work. LLMs automate the stealing.
How many novel things does a developer do at work as a percentage of their time?
That's because stacks/APIs/ecosystems are super complicated and require lots of reading and searching to figure out how to make things happen. Now that time will be reduced dramatically and devs' time will shift to more novel things.
The threat is not autocomplete, it's translation.
"translating" requirements into code is what most developers' jobs are.
So "just" translation is a threat to job security of developers.
Build on top of stolen code, no less. HN hates to hear it, but LLMs are a huge step back for software freedom: as long as companies call it "AI" and politicians don't understand it, they can launder GPL code and reuse it without credit and without giving users their rights.
> It's not AI
AI is a very broad term with many different definitions.
it does seem disingenuous for something without intelligence to be called intelligence
I feel like you're nitpicking. Intelligence is ALSO a broad term with no singular consensus on what it means or what it is.
What's your definition of intelligence?
This is BS and you are not listening and watching carefully.
Even the best LLMs today are just junior devs with a lot of knowledge. They make a lot of the same mistakes junior devs would make. Even the responses, when you point out those mistakes, are the same.
If anything, it's a tool for junior devs to get better and spend more time on the architecture.
Using AI code without fully understanding it (i.e. operated by a non-programmer) is just a recipe for disaster.
The worst is when you tell it it's made a mistake and it agrees.
"You're right, but I just like wasting your time"
OK then show me a model that can answer honestly and correctly about whether or not it knows something.
Show me a human that can answer honestly and correctly about whether they know something.
https://news.ycombinator.com/item?id=43155825
> It's not AI. It is enhanced autocomplete. There is no intelligence here and Claude 3.7 cannot create anything novel. We as an industry need to get more honest about these things.
Yeah, this sort of "AI" is still nothing more than a glorified “Chinese room” (https://www.wikiwand.com/en/articles/Chinese_room).
To illustrate:
https://x.com/yoltartar/status/1861812132209369420
This is pure cope
AI cannot write a simple Dockerfile. I don't know how simple the stuff you guys are writing is. If AI can do it, then it should be an Excel sheet, not code.
I've been writing Dockerfiles with LLMs for over a year now - all of the top tier LLMs do a great job of those in my experience.
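For reference, the kind of thing I mean by a simple Dockerfile — a minimal two-stage Node build, where the base image, scripts, and the dist/ path are made-up specifics for illustration:

    # Build stage: install deps and compile with the full toolchain
    FROM node:20-slim AS build
    WORKDIR /app
    # Copy manifests first so the dependency layer caches across builds
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build

    # Runtime stage: ship only what's needed to run
    # (a real one might also prune dev dependencies)
    FROM node:20-slim
    WORKDIR /app
    COPY --from=build /app/dist ./dist
    COPY --from=build /app/node_modules ./node_modules
    CMD ["node", "dist/index.js"]

It's boilerplate with a handful of conventions (multi-stage builds, copying package manifests before the source so layers cache), which is exactly the sort of thing these models reproduce reliably.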