Comment by dash2

8 days ago

You'll be pleased to know that it chooses "drive the car to the wash" on today's latest embarrassing LLM question.

My OpenClaw AI agent answered: "Here I am, brain the size of a planet (quite literally, my AI inference loop is running over multiple geographically distributed datacenters these days) and my human is asking me a silly trick question. Call that job satisfaction? Cuz I don't!"

The thing I would appreciate much more than performance on "embarrassing LLM questions" is a method for finding these questions and, through some form of statistical sampling, estimating how many of them exist for each LLM.

It's difficult to do because LLMs immediately consume every available corpus, so there is no telling whether the algorithm actually improved or whether it just wrote one more post-it note and stuck it on its monitor. This is an agency-versus-replay problem.

Preventing replay attacks in data processing is simple: encrypt with fresh randomness each time, similar to a one-time pad or a TLS nonce. How can one make problems that are natural language, but whose contents, still expressed in plain English, are "encrypted" such that every time an LLM reads them, they are novel to it?

Perhaps a generative language model could help. Not a large language model, but something that understands grammar well enough to create problems that LLMs will be able to solve - and where the actual encoding of the puzzle is generative, kind of like how a random string of balanced left and right parentheses can encode a computer program.

Maybe it would make sense to use a program generator that produces a random program in a simple, sandboxed language - say, Lua - then translates it to plain English for the LLM, asks the LLM what the outcome should be, and compares its answer against the Lua program itself, which can be executed quickly for verification.
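That loop can be prototyped in a few lines. Here is a minimal sketch, using Python as a stand-in for the Lua sandbox; the toy language (straight-line integer assignments) and all names are my own choices for illustration:

```python
import random

# The tiny sandboxed language: straight-line integer assignments.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_program(n_vars=3, n_steps=4, rng=random):
    """Generate a random program: seed a few variables with constants,
    then combine existing variables with random operators."""
    names = [f"v{i}" for i in range(n_vars)]
    lines = [f"{v} = {rng.randint(1, 9)}" for v in names]
    for i in range(n_steps):
        a, b = rng.choice(names), rng.choice(names)
        op = rng.choice(sorted(OPS))
        new = f"v{n_vars + i}"
        lines.append(f"{new} = {a} {op} {b}")
        names.append(new)
    lines.append(f"result = {names[-1]}")
    return lines

def run_program(lines):
    """Ground-truth oracle: execute the program directly and return
    the value the LLM's plain-English answer is compared against."""
    env = {}
    for line in lines:
        target, expr = (s.strip() for s in line.split("="))
        tokens = expr.split()
        if len(tokens) == 1:
            tok = tokens[0]
            env[target] = int(tok) if tok.lstrip("-").isdigit() else env[tok]
        else:
            a, op, b = tokens
            env[target] = OPS[op](env[a], env[b])
    return env["result"]
```

Since the same rule set that generated the program can pretty-print it to English, and `run_program` executes in microseconds, the ground truth comes essentially for free; the only slow step is asking the LLM.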

Either way, we are dealing with an "information war" scenario, which reminds me of the relevant passages in Neal Stephenson's The Diamond Age about faking statistical distributions by moving units to weird locations in Africa. Maybe there's something there.

I'm sure I'm missing something here, so please let me know if so.

  • I like your idea of finding the pattern of those "embarrassing LLM questions". However, I do not understand your example. What is a random program? Is it a program that compiles/executes without error but can literally do anything? Also, how do you translate a program to plain English?

    • A randomly generated program from a space of programs defined by a set of generating actions.

A simple example is a programming language that can only operate on integers, do addition, subtraction, and multiplication, and check for equality. You can create an infinite number of programs of this sort. Once generated, these programs can be evaluated in a split second. You can translate them all to English programmatically, ensuring grammatical and semantic correctness, by using a rule set that maps the program to English. The LLM can then provide its own evaluation of the output.

      For example:

      program:

      1 + 2 * 3 == 7

      evaluates to true in its machine-readable, non-LLM form.

LLM-readable English form:

      Is one plus two times three equal to seven?

The LLM will evaluate this to either true or false. You compare that with what classical execution produced.

      Now take this principle, and create a much more complex system which can create more advanced interactions. You could talk about geometry, colors, logical sequences in stories, etc.
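A minimal sketch of that generate-translate-evaluate pipeline in Python (the number and operator word tables are my own rule set, not anything standard; `eval` is acceptable here only because the expression is built entirely from digits and operators we generated ourselves):

```python
import random

NUM_WORDS = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
             6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten"}
OP_WORDS = {"+": "plus", "-": "minus", "*": "times"}

def random_expression(n_terms=3, rng=random):
    """Build a random arithmetic expression like '1 + 2 * 3'."""
    tokens = [str(rng.randint(1, 9))]
    for _ in range(n_terms - 1):
        tokens += [rng.choice(sorted(OP_WORDS)), str(rng.randint(1, 9))]
    return " ".join(tokens)

def to_english(expr, claimed):
    """Translate an expression plus a claimed result into plain English."""
    words = [NUM_WORDS[int(t)] if t.isdigit() else OP_WORDS[t]
             for t in expr.split()]
    claimed_word = NUM_WORDS.get(claimed, str(claimed))
    return f"Is {' '.join(words)} equal to {claimed_word}?"

def make_question(rng=random):
    """Return (question, correct_answer). Half the time the claimed
    result is perturbed so the right answer is False."""
    expr = random_expression(rng=rng)
    truth = eval(expr)  # safe: expr contains only our own digits/operators
    claimed = truth if rng.random() < 0.5 else truth + rng.randint(1, 5)
    return to_english(expr, claimed), claimed == truth
```

Scaling this up is just a matter of adding productions: each new construct on the program side gets a matching phrase template on the English side, which is how you would get to geometry, colors, or logical sequences in stories.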

For Google AI Overviews (not sure which Gemini model is used for it; it must be something smaller than the regular model), it looks like search/RAG helps it get the answer right - it relies on LinkedIn and Hacker News (!) posts to respond correctly...

as of Feb 16, 2026:

====

Drive the car. While 50 meters is a very short distance, the car must be present at the car wash to be cleaned, according to LinkedIn users [1]. Walking would leave your car at home, defeating the purpose of the trip, notes another user.

Why Drive: The car needs to be at the location to be cleaned. It's only a few seconds away, and you can simply drive it there and back, says a Hacker News user. [2]

Why Not to Walk: Walking there means the car stays home, as noted in a post. [3]

The best option is to start the engine, drive the 50 meters, and let the car get washed.

[1] https://www.linkedin.com/posts/ramar_i-saw-this-llm-failure-...
[2] https://news.ycombinator.com/item?id=47034546
[3] https://x.com/anirudhamudan/status/2022152959073956050/photo...

But the regular Gemini reasons correctly by itself, without any references:

==== Unless you have a very long hose and a very patient neighbor, you should definitely drive. Washing a car usually requires, well, the car to be at the wash. Walking 50 meters—about half a New York City block—is great for your step count, but it won't get your vehicle any cleaner! Are you headed to a self-service bay or an automatic tunnel wash?

  • The fact that it quotes discussions about LLM failures kinda counts as cheating. That just means you need to burn a fresh question to get a real idea of its reasoning.

How well does this work when you slightly change the question? Rephrase it, or use a bicycle/truck/ship/plane instead of car?

  • I didn't test this, but I suspect current SotA models would get variations within that specific class of question correct if they were forced to use their advanced/deep modes, which invoke MoE (or similar) reasoning structures.

    I assumed failures on the original question were more due to model-routing optimizations failing to classify the question as one requiring advanced reasoning. I read a paper the other day that mentioned advanced reasoning (like MoE) is currently 10x-75x more computationally expensive. LLM vendors aren't subsidizing model costs as much as they were, so I assume SotA cloud models are always attempting some optimizations unless the user forces otherwise.

    I think these one-sentence "LLM trick questions" may increasingly be testing optimization pre-processors more than the full extent of SotA models' maximum capability.

That's the Gemini assistant. Although a bit hilarious, it's not reproducible with any other model.

  • GLM tells me to walk because it's a waste of fuel to drive.

    • I am not familiar with those models, but I see that 4.7 Flash is a 30B MoE? Likely in the same vein as the one used by the Gemini assistant. If I had to guess, that would be Gemini-flash-lite, but we don't know that for sure.

      OTOH, the response from Gemini-flash is:

         Since the goal is to wash your car, you'll probably find it much easier if the car is actually there! Unless you are planning to carry the car or have developed a very impressive long-range pressure washer, driving the 100m is definitely the way to go.

A hiccup in a System 1 response. In humans, such hiccups are fixed at the speed of discovery. Continual learning FTW.

  • I mean, reasoning models don't seem to make this mistake (so it's a System 1 issue), and the mistake is not universal across models - so yes, a "hiccup" (a brain hiccup, to be precise).

Is that the new pelican test?