Comment by Tade0
1 day ago
> The benefits we get from checking in with other humans, like error correction, and delegation can all be done better by AI.
Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.
> it can't find actual flaws in your code
I can tell from this statement that you don't have experience with claude-code.
It might just be a "text predictor", but in the real world it can take a messy log file and, from that, navigate the source and fix the underlying issues.
It can appear to reason about root causes and issues with sequencing and logic.
That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning and produces real-world fixes.
> I can tell from this statement that you don't have experience with claude-code.
I happen to use it on a daily basis. 4.6-opus-high to be specific.
The other day it surmised from (I assume) the contents of my clipboard that I wanted to do A, while I really wanted to do B; it's just that A was the more typical use case. Or actually: hardly anyone ever does B, as it's a weird thing to do, but I needed to do it anyway.
> but it is indistinguishable from actual reasoning
I can distinguish it pretty well when it makes mistakes someone who actually read the code and understood it wouldn't make.
Mind you: it's great at presenting someone else's knowledge, and it was trained on a vast library of it, but it clearly doesn't think for itself.
What do you mean, the contents of your clipboard?
6 replies →
What you're describing is not finding flaws in code. It's summarizing, which current models are known to be relatively good at.
It is true that models can happen to produce a sound reasoning process. This is probabilistic, however (more so than humans, anyway).
There is no known sampling method that can guarantee a deterministic result without significantly quashing the output space (excluding most correct solutions); the toy sketch below illustrates the trade-off.
I believe we'll see a different landscape of benefits and drawbacks as diffusion language models begin to emerge, and as even more architectures are invented and put into practice.
I have a tentative belief that diffusion language models may be easier to make deterministic without quashing nearly as much expressivity.
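To make that trade-off concrete, here is a toy sketch in plain Python (made-up logits, not any particular model or library): greedy decoding is fully deterministic but collapses the distribution to a single candidate, while temperature sampling preserves the output space at the cost of run-to-run variation.

    import math, random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def greedy(logits):
        # Deterministic: always the argmax; every other candidate is discarded.
        return max(range(len(logits)), key=lambda i: logits[i])

    def sample(logits, temperature=1.0):
        # Stochastic: keeps the whole output space, but runs differ.
        probs = softmax([l / temperature for l in logits])
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    logits = [2.0, 1.9, 0.5]                    # hypothetical next-token scores
    print(greedy(logits))                       # always 0
    print([sample(logits) for _ in range(5)])   # varies run to run

Greedy is the degenerate case: perfectly repeatable, but everything off the argmax, including many perfectly correct continuations, gets thrown away.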
This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are also not fully deterministic. I don't see what hard theoretical barriers you've presented toward AGI or future ASI.
3 replies →
> more so than humans
Citation needed.
7 replies →
Nothing you've said about reasoning here is exclusive to LLMs. Human reasoning is also never guaranteed to be deterministic, or to avoid excluding most correct solutions. As OP says, they may not be reasoning under the hood, but if the effect of the tool is the same, does it matter?
I'm not sure if I'm up to date on the latest diffusion work, but I'm genuinely curious how you see them potentially making LLMs more deterministic? These models usually work by sampling too, and it seems like the transformer architecture is better suited to longer-context problems than diffusion.
3 replies →
And not this or any existing generation of people. We're bad at determining want vs. need, at being specific, at genericizing our goals into a conceptual framework of existing patterns, and at documenting and explaining things in a way that gets to a solid goal.
The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees the idea of humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round.
I would have agreed with you a year ago
If you only realized how ridiculous your statement is, you never would have stated it.
It's also literally factually incorrect. Pretty much the entire field of mechanistic interpretability would obviously point out that models have an internal definition of what a bug is.
Here's the most approachable paper that shows a real model (Claude 3 Sonnet) clearly having an internal representation of bugs in code: https://transformer-circuits.pub/2024/scaling-monosemanticit...
Read the entire section around this quote:
> Thus, we concluded that 1M/1013764 represents a broad variety of errors in code.
(Also the section after "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions")
This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".
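For anyone wondering how a feature like that is even read out: roughly, a sparse autoencoder is trained on the model's residual-stream activations, and a feature "fires" when its encoder row produces a positive activation on a given token. A minimal sketch of that readout, with made-up shapes and random weights (purely illustrative, not Anthropic's code or the real 1M/1013764 feature):

    import numpy as np

    d_model, n_features = 64, 1024            # hypothetical sizes
    rng = np.random.default_rng(0)
    W_enc = rng.normal(size=(n_features, d_model)) * 0.05   # stand-in SAE encoder weights
    b_enc = np.zeros(n_features)

    def feature_activations(residual_vec):
        # SAE encoder: ReLU(W_enc @ x + b_enc) yields sparse feature activations.
        return np.maximum(W_enc @ residual_vec + b_enc, 0.0)

    BUG_FEATURE = 42                           # stand-in index for something like 1M/1013764
    acts = feature_activations(rng.normal(size=d_model))
    print(acts[BUG_FEATURE] > 0)               # "does the feature fire on this token?"

The interesting claim in the paper is that one particular learned feature activates consistently across many different kinds of buggy code, which is what the quoted sections describe.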
Was this "paper" eventually peer reviewed?
PS: I know it is interesting and I don't doubt Anthropic, but for me it is so fascinating that they get such a pass in science.
5 replies →
> This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".
You don't think a pattern matcher would fire on actual bugs?
Mechanistic interpretability is a joke, supported entirely by non-peer reviewed papers released as marketing material by AI firms.
Some people are still stuck in the "stochastic parrot" phase and see everything regarding LLMs through that lens.
Current LLMs do not think. Just because we anthropomorphize the repetitive actions a model loops through does not mean it is truly thinking or reasoning.
On the flip side, the idea that this is true has been a very successful indirect marketing campaign.
2 replies →
While I agree, if you think that AI is just a text predictor, you are missing an important point.
Intelligence can be born of simple targets, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models.
Dismissing it just because its algorithm is next-token prediction instead of "strengthen whatever circuit lights up" is missing the forest for the trees.
Absolutely nuts, I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude has solved issues for me in an autonomous way that (for someone with 17 years of software development, from embedded devices to enterprise software) would have taken me hours if not days.
To the naysayers... good luck. No group of people's opinions matter at all. The market will decide.
I wonder if the parent comment's remark is a communication failure or pedantry gone wrong, because, like you, I see claude-code out there solving real problems and finding and fixing defects.
A large quantity of the bugs raised are now fixed by claude automatically, from just the reports as written. Everything is human-reviewed; sometimes it fixes things in ways I don't approve of, but it can be guided.
It has an astonishing capability to find and fix defects. So when I read "It can't find flaws", it just doesn't fit my experience.
I have to wonder if the disconnect is simply in the definition of what it means to find a flaw.
But I don't like to argue over semantics. I don't actually care if it is finding flaws by the sheer weight of language probability rather than logical reasoning, it's still finding flaws and fixing them better than anything I've seen before.
I can't control random internet people, but within my personal and professional life I see the effective pattern of comparing prompts/contexts/harnesses to figure out why some are more effective than others (in fact, tooling is being developed across the AI industry to do exactly this; claude even added the "insights" command).
I feel that many people who don't find AI useful are asking things like "Are there any bugs in this software?" rather than developing the appropriate harness to enable the AI to function effectively.
I think it's just fear. I sure know that after 25 years as a developer with a great salary, never once in all that time considering the chance of ever being unemployable, I'm feeling it too.
I think some of us come to terms with it in different ways.
I used to sometimes get stuck on a problem for weeks and then have the budget pulled or get put on another project. Sometimes those issues never did get solved. Or I'd have to tell someone: sorry, I don't have the capacity to solve a problem for you. Now a lot of that anxiety has been replaced with a more can-do attitude. Like, wouldn't being able to pull off results create more opportunity?
You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.
This goes both ways. You can't assume capabilities based on impressions. Especially with LLMs, which are purpose-built to give an impression of producing language.
Also, designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced.
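For what it's worth, "tool calls" just means the model emits a structured request and the host program does the actual computation; a generic sketch of the pattern (the JSON shape and names here are mine, not any particular vendor's API):

    import json, operator

    OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

    def run_tool(call_json):
        # The host, not the model, performs the arithmetic.
        call = json.loads(call_json)
        result = OPS[call["op"]](call["a"], call["b"])
        return json.dumps({"result": result})

    # Pretend the model replied with this instead of guessing at the arithmetic:
    model_output = '{"op": "add", "a": 2, "b": 2}'
    print(run_tool(model_output))   # {"result": 4} is fed back into the context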
It's true that they only give plausible-sounding answers. But let's say we ask a simple question like "What's the sum of two and two?" The only plausible-sounding answer to that will be "four." It doesn't need to have any fancy internal understanding or anything else beyond prediction to give what really is the same answer.
The same goes for a lot of bugs in code: the best prediction is often the correct answer, namely pointing out the error. Whether it can "actually find" the bugs (whatever that means) isn't really as important as whether or not it's correct.
2 replies →
[flagged]
I use these tools and that's my experience.
I think it all depends on the use case and a bit of luck.
Sometimes I instruct copilot/claude to do a piece of development (stretching its capabilities), and it does amazingly well. Mind you, this is front-end development, so probably one of the more ideal use cases. Bug fixing also goes well a lot of the time.
But other times, it really struggles, and in the end I have to write it by hand. This is for more complex or less popular things (in my case, React-Three-Fiber with skeleton animations).
So I think experiences can vastly differ; in my environment they're very dependent on the case.
One thing is clear: this AI revolution (deep learning) won't replace developers any time soon. And when the next revolution will take place is anyone's guess. I learned neural networks at university around 2000, and it was old technology then.
I view LLMs as "applied information", but not real reasoning.
[flagged]
Ok, I'll bite. Let's assume a modern cutting-edge model, even one with fairly standard GQA attention, and obviously something bigger than just monosemantic features per neuron.
Based on any reasonable mechanistic interpretability understanding of this model, what's preventing a circuit/feature with polysemanticity from representing a specific error in your code?
---
Do you actually understand ML? Or are you just parroting things you don't quite understand?
8 replies →
Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition.
That is not exactly true. The brain does a lot of things that are not "pattern recognition".
Simpler, more mundane (well, not exactly mundane, but still incredibly complicated) stuff like homeostasis or motor control, for example.
Additionally, our ability to plan ahead and simulate future scenarios often relies on mechanisms such as memory consolidation, which are not part of the whole pattern recognition thing.
The brain is a complex, layered, multi-purpose structure that does a lot of things.
It's pattern recognition all the way down.