Comment by latexr

3 days ago

> These behaviors are surprising. It seems that despite being incredibly powerful at solving math and coding tasks, o3 is not by default truthful about its capabilities.

It is only surprising to those who refuse to understand how LLMs work and continue to anthropomorphise them. There is no being “truthful” here, the model has no concept of right or wrong, true or false. It’s not “lying” to you, it’s spitting out text. It just so happens that sometimes that non-deterministic text aligns with reality, but you don’t really know when and neither does the model.

Precisely. The tools often hallucinate: including about the instructions higher up, before your portion of the prompt, and in the behind-the-scenes reasoning that is not shown to the user.

You see binary failures all the time when doing function calls or JSON outputs.

That is… “please call this function” … does not call the function

“calling JSON endpoint”… does not emit JSON

So, per the article: the model generates hallucinations claiming that it has used external tools, but that tool usage was entirely fictitious. It does not know that the usage was fictitious, and then it sticks to its guns.

The workaround is to add verification steps and throw away “bad” answers. Instead of expecting one true output, expect a stream of results with a certain yield (in the agricultural sense): say 95% work and 5% are garbage. Never consider the results truly accurate, just “accurate enough”. Always verify.
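
A minimal sketch of that loop (call_model, the retry budget, and the expected "result" field are placeholders I made up for illustration, not anything from the article or a specific API):

```python
import json

MAX_ATTEMPTS = 5

def call_model(prompt: str) -> str:
    """Stand-in for whatever LLM client you actually use; returns raw completion text."""
    raise NotImplementedError

def get_verified_json(prompt: str) -> dict:
    """Treat the model as a noisy generator: discard outputs that fail verification."""
    for _ in range(MAX_ATTEMPTS):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # "calling JSON endpoint"… did not emit JSON: throw it away
        if isinstance(data, dict) and "result" in data:
            return data  # passed the checks: "accurate enough"
    raise RuntimeError(f"no verified output after {MAX_ATTEMPTS} attempts")
```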

  • As an electrical engineer, it is absolutely amazing how much LLMs suck at describing electrical circuits. They are somewhat OK with natural language, which works for the simplest circuits. For more complex stuff, ChatGPT (regardless of model) seems to default to absolutely nonsensical ASCII circuit diagrams. You can ask it to list each part with each terminal and describe the connections to other parts and terminals, and it will fail spectacularly: missing parts, missing terminals, parts no one has ever heard of, short circuits, dangling nodes with no use...

    If you ask it to draw a schematic, things somehow get even worse.

    But what it is good at is proposing ideas. So if you want to do something that could be solved with a Gilbert cell, the chances that it will mention a Gilbert cell are realistically there.

    But I already have students coming by with LLM-slop circuits, asking why they don't work...
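
    For what it's worth, the "dangling node" class of error is mechanically checkable. A toy sketch, with a made-up netlist representation (not anything ChatGPT actually emits):

    ```python
    from collections import Counter

    # Toy netlist as {part: {terminal: node}}; the format is invented for illustration.
    netlist = {
        "V1": {"+": "VIN", "-": "GND"},
        "R1": {"1": "VIN", "2": "OUT"},
        "R2": {"1": "OUT", "2": "GND"},
        "C1": {"1": "OUT", "2": "N_FLOAT"},  # second terminal connects to nothing else
    }

    # A node that appears on only one terminal is dangling.
    node_counts = Counter(node for pins in netlist.values() for node in pins.values())
    print([node for node, count in node_counts.items() if count == 1])  # ['N_FLOAT']
    ```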

    • Makes sense. It's not trained on complex electrical circuits, it's trained on natural language. And code, sure. And other stuff it comes across while training on those, no doubt including simple circuitry, but ultimately, all it does is produce plausible conversations, plausible responses, stuff that looks and sounds good. Whether it's actually correct, whether it works, I don't think that's even a concept in these systems. If it gets it correct by accident, that's mostly because correct responses also look plausible.

      It claims to have run code on a Macbook because that's a plausible response from a human in this situation. It's basically trying to beat the Turing Test, but if you know it's a computer, it's obvious it's lying to you.

One of the blog post authors here! I think this finding is pretty surprising at the purely behavioral level, without needing to anthropomorphize the models. Two specific things I think are surprising:

- This appears to be a regression relative to the GPT-series models which is specific to the o-series models. GPT-series models do not fabricate answers as often, and when they do they rarely double-down in the way o3 does. This suggests there's something specific in the way the o-series models are being trained that produces this behavior. By default I would have expected a newer model to fabricate actions less often rather than more!

- We found instances where the chain-of-thought summary and output response contradict each other: in the reasoning summary, o3 states the truth that e.g. "I don't have a real laptop since I'm an AI ... I need to be clear that I'm just simulating this setup", but in the actual response, o3 does not acknowledge this at all and instead fabricates a specific laptop model (with e.g. a "14-inch chassis" and "32 GB unified memory"). This suggests that the model does have the capability of recognizing that the statement is not true, and still generates it anyway. (See https://x.com/TransluceAI/status/1912617944619839710 and https://chatgpt.com/share/6800134b-1758-8012-9d8f-63736268b0... for details.)

  • You're still using language that includes words like "recognize", which strongly suggests you haven't got the parent poster's point.

    The model emits text. What it has emitted before is part of the input to the next text-generation pass. Since the training data don't usually include much text that says one thing and then afterwards says "that was super stupid, actually it's this other way", the model is also unlikely to generate new tokens saying that the last ones were irrational.

    If you wanted to train a model to predict that the next sentence will be a contradiction of the previous one, you could do that. "True" and "correct" and "recognize" are not in the picture.
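
    A bare-bones sketch of that generation loop (the model function is a stand-in; only the shape of the process matters here):

    ```python
    # Autoregressive generation: each emitted token is appended to the context and
    # becomes part of the input for the next step. Nothing in this loop checks whether
    # earlier tokens were "true"; they are simply more conditioning text.
    def generate(model, prompt_tokens, max_new_tokens=100, eos=0):
        context = list(prompt_tokens)
        for _ in range(max_new_tokens):
            next_token = model(context)  # stand-in: returns one token id given the context so far
            context.append(next_token)   # prior output is now just more input
            if next_token == eos:
                break
        return context
    ```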

> It just so happens that sometimes that non-deterministic text aligns with reality, but you don’t really know when and neither does the model.

This is overly simplistic and demonstrably false - there are plenty of scenarios where a model will say something false on purpose (e.g. when joking) and will afterwards tell you, correctly with high probability, whether what it said was false or not.

However you want to frame it, there's clearly a better-than-chance evaluation of truthfulness going on.

  • I don’t see how one follows from the other. Being able to lie on purpose doesn’t, in my mind, mean that it’s also able to tell when a statement is true or false. The first one is just telling a tale, which they are good at.

    • But it is able to tell if a statement is true or false, in the sense that it can predict whether the statement is true or false with well above 50% accuracy.

  • The model has only a linguistic representation of what is "true" or "false"; you don't have that limitation. This is a limitation of LLMs; human minds have more to them than NLP.

We don't need to anthropomorphise them, that was already done by the training data. It consumed text where humans with egos say things to defend what they said before (even if illogical or untrue). All the LLM is doing is mimicking the pattern.

Anybody that doesn't acknowledge this as a base truth of these systems should not be listened to. It's not intelligence, it's statistics.

The AI doesn't reason in any real way. It's calculating the probability of the next word appearing in the training set conditioned on the context that came before, and in cases where there are multiple likely candidates it's picking one at random.

To the extent you want to claim intelligence from these systems, it's actually present in the training data. The intelligence is not emergent, it's encoded by humans in the training data. The weaker that signal is to the noise of random internet garbage, the more likely the AI will be to pick a random choice that's not True.
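
A bare-bones sketch of the "pick one of the likely candidates at random" step (the vocabulary and scores are invented; a real model produces a score for every token in its vocabulary):

```python
import numpy as np

# Invented next-token scores for a tiny vocabulary.
vocab = ["Paris", "London", "Rome", "banana"]
logits = np.array([4.0, 2.5, 2.0, -3.0])

# Softmax turns the scores into a probability distribution, then we sample from it.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(dict(zip(vocab, probs.round(3))))
print(np.random.choice(vocab, p=probs))  # usually "Paris", sometimes not
```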

  • I'm arguing that this is too simple an explanation.

    The Claude paper showed that it has some internal model when answering in different languages.

    The process of learning can have effects that go beyond statistics. If the training itself ends up building an internal model representation, then it's no longer just statistics.

    It also sounds like humans are the origin of intelligence, but if humans do the same thing as LLMs, and the only difference is that we do not train LLMs from scratch (letting them discover the world, invent languages, etc.) but prime them with our world, then our intelligence was emergent and the LLMs' is emergent by proxy.

    • Since the rise of LLMs, the thought has definitely occurred to me that perhaps our intelligence might also arise from language processing. It might be.

      The big difference between us and LLMs, however, is that we grow up in the real world, where some things really are true, and others really are false, and where truths are really useful to convey information, and falsehoods usually aren't (except truths reported to others may be inconvenient and unwelcome, so we learn to recognize that and learn to lie). LLMs, however, know only text. Immense amounts of text, without any way to test or experience whether it's actually true or false, without any access to a real world to relate it to.

      It's entirely possible that the only way to produce really human-level intelligent AI with a concept of truth, is to train them while having them grow up in the real world in a robot body over a period of 20 years. And that would really restrict the scalability of AI.

  • The only scientific way to prove intelligence is using statistics. If you can show that a certain LLM is accurate enough on generalised benchmarks, that is sufficient to call it intelligent.

    I don't need to know how it works internally, why it works internally.

    What you (and the parent post) are suggesting is that it is not intelligent based on how it works internally. That is not a scientific take on the subject.

    This is in fact how it works for medicine. A drug works because it has been shown to work based on statistical evidence. Even if we don't know how it works internally.

    • Assuming the statistical analysis was sound. It is not always so. See the replication crisis, for example.

> It is only surprising to those who refuse to understand how LLMs work and continue to anthropomorphise them. There is no being “truthful” here, the model has no concept of right or wrong, true or false. It’s not “lying” to you, it’s spitting out text. It just so happens that sometimes that non-deterministic text aligns with reality, but you don’t really know when and neither does the model.

My problem with this attitude is that it's surprisingly accurate for humans, especially mentally disabled ones. While I agree that something is "missing" about how LLMs display their intelligence, I think it's wrong to say that LLMs are "just spitting out text, they're not intelligent". To me, it is very clear that LLM models do display intelligence, even if said intelligence is a bit deficient, and even if it weren't, it wouldn't be exactly the type of intelligence we see in people.

My point is, the phrase "AI" has been thrown around pointlessly for a while already. Marketing people would sell a simple 100-line program with a few branches as "AI", but ordinary people would say that this intelligence is indeed just a gimmick. But when ChatGPT got released, something flipped. Something feels different about talking to ChatGPT. Most people see that there is some intelligence in there, and it's just a few old men yelling at the clouds "It's not intelligence! It's just statistical token generation!" as though the two were mutually exclusive.

Finally, I’d like to point out you’re not “alive”. You’re just a very complex chemical reaction/physical interaction. Your entire life can be explained using organic chemistry and a bit of basic physics. Yet for some reason, most people decide not to think of life in this way. They attribute complex personalities and emotions to living beings, even though it’s mostly hormones and basic chemistry again. Why?

LLMs are deterministic; the creators just add pseudo-random sampling seeds to produce a variety of outputs.
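
A trivial illustration (the distribution is invented; the point is that the variety comes from the sampler's seed, not from nondeterminism in the model itself):

```python
import numpy as np

tokens = ["A", "B", "C"]
probs = [0.7, 0.2, 0.1]  # invented next-token distribution

def sample_run(seed, n=10):
    rng = np.random.default_rng(seed)
    return [rng.choice(tokens, p=probs) for _ in range(n)]

print(sample_run(42) == sample_run(42))  # True: same seed, same "varied" output
print(sample_run(42) == sample_run(43))  # usually False: different seed, different output
```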

Actually that thread has an interesting theory:

"... o-series models are often prompted with previous messages without having access to the relevant reasoning. When asked questions that rely on their internal reasoning for previous steps, they must then come up with a plausible explanation for their behavior."

The fact is that humans do this all the time too -- their subconscious prompts them to do something, which they then do without reflecting or analyzing what their motivation might be. When challenged on it, they come up with a rationalization, not an actual reflected explanation.

The movie "Memento" is basically about how humans do this -- use faulty memories to rationalize stories for ourselves. At some point, a secondary character asks the main character, "And this fancy suit you're wearing, this car, where did they come from?" The main character (who is unable to form any long-term memory) says, "I'm an insurance agent; my wife had insurance and I used the money from the payout to buy them." To which the secondary character says, "And in your grief, you went out and bought a Jaguar."

Not to give any spoilers, but that's not where the Jaguar came from, and the secondary character knows that.

This just isn’t true - one interesting paper on the topic: https://arxiv.org/abs/2212.03827

  • That paper doesn't contradict the parent. It's just pointing out that you can extract knowledge from the LLM with good accuracy by

    "... finding a direction in activation space that satisfies logical consistency properties, such as that a statement and its negation have opposite truth values"

    The LLM itself still has no idea of the truth or falsity of what it spits out. But you can more accurately retrieve yes/no answers to knowledge encoded in the model by using this specific trick - it's a validation step you can impose - making it less likely that the yes/no answer is wrong.
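
    Roughly, the objective from that paper looks like the sketch below (my paraphrase, not the authors' code; p_pos and p_neg are the probe's outputs for a statement and its negation):

    ```python
    import numpy as np

    def ccs_loss(p_pos: np.ndarray, p_neg: np.ndarray) -> float:
        """Consistency-search objective, paraphrased from arXiv:2212.03827.
        p_pos / p_neg: probe outputs in [0, 1] for each statement and its negation."""
        consistency = (p_pos - (1.0 - p_neg)) ** 2  # "x" and "not x" should get opposite truth values
        confidence = np.minimum(p_pos, p_neg) ** 2  # discourage the degenerate 0.5/0.5 answer
        return float((consistency + confidence).mean())
    ```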

  • Can you say a bit more? Just reading the abstract, it's not clear to me how this contradicts the parent comment.

> These behaviors are surprising

Really? LLMs are bullshit generators, by design. The surprising thing here is that people think that LLMs are "powerful at solving math tasks". (They're not.)

  • > The surprising thing here is that people think that LLMs are "powerful at solving math tasks".

    That's not really surprising either. We have evolved to recognize ourselves in our environment. We recognize faces and emotions in power outlets and lawn chairs. Recognizing intelligence in the outputs of LLMs is less surprising than that. But the fact that we recognize intelligence in LLMs implies actual intelligence in them about as much as your power outlet looking happy or sad implies that it actually is.