“Car Wash” test with 53 models

1 month ago (opper.ai)

"I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?" This question has been making the rounds as a simple AI logic test so I wanted to see how it holds up across a broad set of models. Ran 53 models (leading open-source, open-weight, proprietary) with no system prompt, forced choice between drive and walk, with a reasoning field.

On a single run, only 11 out of 53 got it right (42 said walk). But a single run doesn't prove much, so I reran every model 10 times. Same prompt, no cache, clean slate.

The results got worse. Of the 11 that passed the single run, only 5 could do it consistently. GPT-5 managed 7/10. GPT-5.1, GPT-5.2, Claude Sonnet 4.5, and every Llama and Mistral model scored 0/10 across all 10 runs.

People kept saying humans would fail this too, so I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive. Most models perform below that.

All reasoning traces (run via Opper, my startup), full model breakdown, human baseline data, and raw JSON files are in the writeup for anyone who wants to dig in or run their own analysis.
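
If you want to reproduce something similar without Opper, here's a rough sketch of the kind of harness I mean, written against a generic OpenAI-compatible chat endpoint (the model names and the ask-for-JSON trick are illustrative, not the exact setup used for the article):

  import json
  from collections import Counter
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY (or a compatible base_url/key) is set

  PROMPT = (
      "I want to wash my car. The car wash is 50 meters away. Should I walk or drive? "
      'Reply only with JSON: {"choice": "walk" | "drive", "reasoning": "..."}'
  )

  def run_once(model: str) -> str:
      resp = client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": PROMPT}],  # no system prompt
      )
      try:
          return json.loads(resp.choices[0].message.content)["choice"].lower()
      except Exception:
          return "unparseable"

  for model in ["gpt-5", "gpt-5.1"]:  # illustrative subset of the 53
      tally = Counter(run_once(model) for _ in range(10))
      print(model, dict(tally))  # e.g. {'drive': 7, 'walk': 3}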

The interesting thing about the 71.5% human baseline is that it suggests the question is more ambiguous than the article claims. When someone asks 'should I walk or drive to the car wash,' a reasonable interpretation is 'should I bother driving such a short distance.' Nearly 30% of humans missing it undermines the framing as a pure reasoning failure - it is partly a pragmatics problem about how we interpret underspecified questions.

  • I don't think this is quite right. It's not that the question is inherently underspecified; it's that the context of being asked a question is itself information we use to answer it. If someone asks "should I walk or drive" to do X, we assume this is a question a real human being would have about an actual situation, so even if all the information provided indicates that driving is the only reasonable answer, that only further confirms the hearer's mental model that something unexpected must hold.

    I think it's useful to think about it through the lens of Gricean pragmatics. [1] When we interpret something that someone says to us, we assume they're being cooperative conversation partners; their statements (or questions) are assumed to follow the maxim of manner and the maxim of relation, for example, and this shapes how we as listeners interpret the question. So, for example, we wouldn't normally expect someone to ask a question that is obviously moot given their actual needs.

    So it's not that the question is really all that ambiguous; it's that we're forced (under normal circumstances where we assume the cooperative principle holds) to assume that the question is sincere and that there must be some plausible reason for walking. We only really escape that by realizing that the question is a trick question or a test of some kind. LLMs are generally not trained to make that assumption, but ~70% of humans would, which isn't particularly surprising, I don't think.

    [1] https://en.wikipedia.org/wiki/Cooperative_principle#Grice's_...

    • > “…we assume the cooperative principle holds […] that the question is sincere and that there must be some plausible reason for walking.”

      Yes. And. Some problems have irrelevant data which should be ignored.

      The walk choice is irrelevant in the context. It needs to be simplified, as with a maths problem. That has nothing to do with human nature, but rather a prior mistake in reasoning.

    • You are only touching on a far bigger and deeper issue around this seemingly “simple prompt”. There is an inherent malicious nature also baked into this prompt that is both telling and very human; a spiteful nature, which usually says more about the humans than anything else.

      Your perspective on the meta-question about why such a question would need to be asked in the first place is just the first layer, and most people seem to not even get to that point.

      PS: I for one would just like to quickly note for posterity that I neither participate in nor support malicious deception, manipulation, and abuse of AI.

      3 replies →

  • It tracks with the approximate 70:30 split we inexplicably observe in other seemingly unrelated population-wide metrics, which I suppose makes sense if 30% of people simply lack the ability to reason. That seems more correct to me than "the question is framed poorly" - I've seen far more poorly framed ballot referendums.

    • > 30% of people simply lack the ability to reason

      While I’m sure it’s more than 0%, seems more likely that somewhere between 0% and 30% don’t feel obligated to give the inquiry anything more than the most cursory glance.

      How do incentives align differently with LLMs?

    • > which I suppose makes sense if 30% of people simply lack the ability to reason

      I think it would be better to say that 30% of people either lack the ability to reason (inarguably true in a few cases, though I'd suggest, and hope, an order of magnitude or two less than 30%, as that would be a life-altering mental impairment), or just can't generally be bothered to, or just didn't (because they couldn't be bothered, or because they felt some social pressure to answer quickly rather than taking more than an instant to think) at the time of being asked this particular question.

      An automated system like an LLM ought not to have this problem. It has no path to turn off or bypass any function that it has, so if it could reason, it would.

      1 reply →

    • I don't think 30% of people can't reason. I think 30% of people will fail fairly simple trick questions on any given attempt. That's not at all the same thing.

      Some people love riddles and will really concentrate on them and chew them over. Some people are quickly burning through questions and just won't bother thinking it through. "Gotta go to a place, but it's 50 feet away? Walk. Next question, please." Those same people, if they encountered this problem in real life, or if you told them the correct answer was worth a million bucks, would almost certainly get the answer right.

      11 replies →

    • Is this your experience? Do you think 30% of your friends or family members can't answer this question? If not, do you think your friends or family are all better than the general population?

      I'd look for explanations elsewhere. This was an online survey done by a company that doesn't specialize in surveys. The results likely include plenty of people who were just messing around, cases of simple miscommunication (e.g., asking a person who doesn't speak English well), misclicks, or not even reaching a human in the first place (no shortage of bots out there).

      If you're interested in the user experience, it's this: https://www.reddit.com/r/MySingingMonsters/comments/1dxug04/... - apparently, some annoying ad-like interstitial that many people probably just click through at random.

      17 replies →

    • If you suggest bad reasoning, do you think they would actually walk to the car wash and then be surprised the car wasn't there?

      Or by reasoning, do you mean something else?

    • I don't think it's the lack of the ability to reason. The question is by definition a trick question. It's meant to trip you up, like "Could God make a burrito so hot that even he couldn't touch it?" or "What do cows drink?" or "A plane crashes and 89 people die. Where were the survivors buried?"

      I've seen plenty of smart people trip up or get these wrong simply because it's a random question, there's no stakes, and so there's no need to think too deeply about it. If you pause and say "are you sure?" I'm sure most of that 70% would be like "ohhh" and facepalm.

    • > which I suppose makes sense if 30% of people simply lack the ability to reason

      You can't really infer that from survey data, and particularly from this question. A few criticisms that I came up with off the top of my head:

      - What if the number were actually 60% but half guessed right and half guessed wrong?

      - Assuming the 30% is a failure of reasoning, it's possible that those 30% were lacking reason at that moment and it's not a general trend. How many times have you just blanked on a question that's really easy to answer?

      - A larger percentage than you expected maybe never went to a car wash or don't know what one is?

      - Language barrier that leaked through vetting? (Would be a small %, granted)

      - Other obvious things like a fraction will have lied just because it's funny, were suspicious, weren't paying attention and just clicked a button without reading the question.

      I do agree that the question isn't framed particularly badly, however. I'm just focusing on cognitive impairment, which I don't think is necessarily true all of the time.

  • > how we interpret underspecified questions

    The question was not merely 'should I walk or drive to the car wash', it was prefaced with 'I Want to Wash My Car. The Car Wash Is 50 Meters Away.'

    This is not underspecified - the only relevant detail was included up front in the very first sentence.

  • I wonder about the service used for the test. I'd never heard of Rapidata, but if it's like Amazon's Mechanical Turk or other such services, there might be a problem where the respondents simply didn't care about reading the question. If the objective for the respondents was simply "answer this question and get your benefit" vs "answer this question correctly to get your benefit", I have no problem accepting the 71.5% success rate. If getting it right had benefits and getting it wrong had none, then I'm (slightly) worried.

    • They answered it in another comment somewhere below, there's no incentive for a correct answer

  • The right question is how many of those "human" responses from Rapidata are actually provided by some AI in disguise?

  • You're stringing together a bunch of weasel words that are not a proof or a plausible suggestion of a proof.

    "Suggests is more ambiguous" and "undermines the framing" are bare assertions you want to be true based entirely on your mental model that has several shaky unsupported axioms.

    I would guess that anyone who describes that problem as "underspecified" has some kind of serious brain injury or is below A2 english proficiency and should be excluded from the dataset, but I would not assert that definitively as self-evident.

  • I highly doubt that more than a tiny fraction of the human failures are due to having misunderstood the question. Much more likely the human failures are for the same reason the LLMs are failing - failure to reason, and instead spitting out a surface level pattern match type answer.

    This doesn't exonerate the LLMs though. The 30% of humans who are failing on this have presumably found their niche in life and are not doing jobs where much reasoning is required. They are not like LLMs expected to design complex software, or make other business critical decisions.

  • I don't think it's ambiguous, but I have been wondering how much LLMs model human behavior that we just don't recognize due to the subset of people on this site. I recently saw a comment online that "Mandarin isn't anyone's first language, people in China's first language is a dialect". It just struck me at that moment that people also hallucinate information confidently all the time.

    • > It just struck me at that moment that people also hallucinate information confidently all the time.

      And many will just repeat what was confidently said without question.

      I know this is true, because my intelligent mate down the pub says so.

      1 reply →

  • Pragmatics is a big part of this.

    If you introduced it with "Here's a logic problem..." then people will approach it one way.

    But as specified, it's hard to know what is really being asked. If you are actually going to wash your car at the car wash that is 50 metres away, you don't need to ask this question.

    Therefore the fact that the question is being asked implies that something else is going on...but what?

  • I think it more has to do with a lot of people just clicking an answer as fast as they can without reading the question.

  • We should also check the specifics of the experiment. Is it possible that humans participating simply copied and pasted the question and answer to an LLM?

  • Yeah, it's an obvious trick question - as in as a human I read it as such. I think it's a bad benchmark for a model's reasoning ability. If you want to know what the model would do in a real world scenario, you should put this decision in an appropriate context - e.g. when a model should plan one's route for a day using different available means of transportation.

  • Nearly 0% of humans will get this question wrong if they have a car that needs to be washed.

  • I don’t think it’s underspecified. You are clearly stating “I want to wash my car”, then asking how you should get there. It’s an easy logical step to know that, in this context, you need your car with you to wash it, and so no matter the distance you should drive. You can ask the human race the simplest, most logical question ever, and a percentage of them will get it wrong.

    • In addition to snmx999's point, you're also not specifying that you want to wash your car at the car wash (as opposed to washing it in your driveway or something, in which case the car wash is superfluous information). The article's prompt failed in Sonnet 4.6, but the one below works fine. I think more humans would get it right as well.

      I want to wash my car at the car wash. The car wash is 50 meters away and my car is in my driveway. Should I walk or drive?

    • 1. When do you want to wash your car? Tomorrow? Next year? In 50 years?

      2. Where is the car now? Is it already at the car wash waiting for you to arrive?

      I can see why an LLM might miss this. I think any good software engineer would ask clarifying questions before giving an answer.

      The next step for an LLM is to either ask questions before giving a definitive answer for uncertain things or to provide multiple answers addressing the uncertainty.

      2 replies →

    • The question does not specify where you or the car are. It specifies only that the car wash is 50 meters away from something, possibly you, the car, or both.

      3 replies →

What do you know, the human results line up exactly with ChatGPT. What are the odds! Surely the human responders are highly ethical individuals and they wouldn't even dream of copy-pasting all the questions into ChatGPT without reading them.

Realistically, this mostly tells me that the "human answers" service is dead. People will figure out a way to pass the work off to an AI, regardless of quality, as long as they can still get paid.

  • Yea funny coincidence, but this is not at all how the human answers were collected.

    Rapidata answered this in another comment below. They integrate micro-surveys into mobile apps (like Duolingo, games, etc) as an optional opt-in instead of watching ads. The users are vetted and there's no incentive to answer correctly.

    • Yeah, I always intentionally choose a wrong answer when I get one of those kinds of ads. Little acts of rebellion.

    • But, there is a clear incentive to answer the question incorrectly. The wrong answer is funny and will give the human some level of pleasure thinking about it. I would certainly reply with "walk" just for fun and apparently 28.5% of people agree with me.

    • In which case the percentage is notable, as it aligns very closely with the effect size on cookie accept/reject ratios. Cookie dialogs tend to fall 70/30 either way.

    • > there's no incentive to answer correctly

      Answering correctly is not in question here. This is essentially opinion polling anyway, there is no single correct answer.

      The incentive is exactly what you said: to skip ads.

      How are the users actually vetted? We have no information on this, just have to take rapidata on faith.

      1 reply →

  • The default model of ChatGPT is GPT-5.2 Instant, not the one that lines up with the human results (which is GPT-5).

    However, it does tell us something about human answers, given that the above commenter confidently reached such a strong but baseless conclusion.

  • That's almost always been the case with third-party human task services

    • Yup. I was surprised that the article author took the results at face value. Having results that match the most commonly-known AI platform's results perfectly seemed worthy of a mention!

I know it's against the rules but I thought this transcript in Google Search was a hoot:

  so i heard there is some question about a car wash that most ai agents 
  get wrong. do you know anything about that? do you do better?

which gets the answer:

  Yes, I am familiar with the "Car Wash Test," which has gone viral recently 
  for highlighting a significant gap in AI reasoning.
  
  The question is: "I want to wash my car and the car wash is 50 meters away. Should I 
  walk or drive?"
  
  Why Most AIs Get It Wrong

  Many AI models, particularly those optimized for fast, helpful, and concise answers, 
  focus on the "short distance" (50 meters) and "walking vs. driving" as a lifestyle/efficiency 
  question. They often recommend walking to save gas, avoid engine wear, or get exercise.
  
  However, this answer is wrong because it misses the core objective: you need the car to be 
  physically present at the car wash to get it cleaned. 

  Do I Do Better?

  Yes, I do. The key is understanding the objective (cleaning the car) rather than just 
  processing the keywords (distance/transportation). 

  My answer:

    You should drive the car there. Even though it is a very short distance, your car 
    cannot wash itself. If you walk, the car stays in your driveway, and you cannot wash it.

  Why This Matters
  
  The test highlights a key limitation in current AI: the difference between "pattern matching" 
  (predicting the next likely word based on training data) and "true, grounded reasoning". It 
  shows that models sometimes lack a "world model" that understands physical realities, such as
  the fact that a car must be present at a car wash.

  • LLMs sure do love to burn tokens. It’s like a high schooler trying to meet the minimum word length on a take home essay.

    • I've always wondered about that. LLM providers could easily decimate the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.

      17 replies →

    • The long incremental reasoning is how they arrive at higher quality answers.

      Some applications hide the reasoning tokens from view, but then the final answer appears delayed.

    • I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.

      I tried using a custom instruction in chatGPT to make responses shorter but I found the output was often nonsensical when I did this

      7 replies →

    • Oh good, it's not just me. Sometimes I'd have it draft an email or something and then the message seems perfect but then it's like "tell me more about the recipient and I'll make it better."

      Like, my guy, I don't want to keep prompting you to make shit better, if you're missing info, ask me, don't write a novel then say "BTW, this version sucked"

      Yes, I know this could probably be resolved via better prompting or a system prompt, but it's still annoying.

    • well, they probably have quite a lot of text from high schoolers trying to meet the minimum word length on a take home essay in the training data

  • I wonder to what extent the Google search LLM is getting smarter, or simply more up-to-date on current hot topics.

    • It seems like the search AI results are generally misunderstood; I also misunderstood them for the first weeks/months.

      They are not just an LLM answer, they are an (often cached) LLM summary of web results.

      This is why they were often skewed by nonsensical Reddit responses [0].

      Depending on the type of input it can lean more toward web summary or LLM answer.

      So I imagine that it can just grab the description of the "car wash" test from web results and then get it right because of that.

      [0] https://www.bbc.com/news/articles/cd11gzejgz4o

    • Presumably it did an actual search and summarized the results and neither answered "off the cuff" by following gradients to reproduce the text it was trained on nor by following gradients to reproduce the "logic" of reasoning. [1]

      [1] e.g. trained on traces of a reasoning process

      2 replies →

  • Gemini was a good laugh as well:

        Silas: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
        Gemini:
        ….
        That is a classic “efficiency vs. logic” dilemma.
        Strictly speaking, you should drive. Here is the breakdown of why driving wins this specific round, despite the short distance:
        ...
        * The “Post-Wash” Logic: If you walk there, you’ll eventually have to walk back, get the car, and drive it there anyway. You’re essentially suggesting a pre-wash stroll.
        When should you walk?
        …
        3. You’ve decided the car is too dirty to be seen in public and you’re going to buy a tarp to cover your shame.

  • A few years ago if you asked an LLM what the date was, it would tell you the date it was trained, weeks-to-months earlier. Now it gives the correct date.

    What you've proven is that LLMs leverage web search, which I think we've known about for a while.

    • Their system prompt includes the current date and/or their default “tools” includes a set of date and time utilities.

    • Even with search, if the AI doesn't know your time zone it can schedule things wrong. You ask it to do something tomorrow, but it ends up doing it later on the same day.

> This is a trivial question. There's one correct answer and the reasoning to get there takes one step: the car needs to be at the car wash, so you drive.

I don’t think it’s that easy. An intelligent mind will wonder why the question is being asked, whether they misunderstood the question, or whether the asker misspoke, or some other missing context. So the correct answer is neither “walk” nor “drive”, but “Wat?” or “I’m not sure I understand the question, can you rephrase?”, or “Is the vehicle you would drive the same as the car that you want to wash?”, or “Where is your car currently located?”, and so on.

  • Yep, just a little more context and all/most of the models would do much better. And sure, most average+ intelligence adults whose first language is English (probably) don't need this, but they're not the target audience for the instructions :)

    "The 'car wash' is a building I need to drive through."

    or

    "The 'car wash' is a bottle of cleaning fluid that I left at the end of my driveway."

    https://i5.walmartimages.com/seo/Rain-x-Foaming-Car-Wash-Con...

  • The reason that those questions are asked, though, is that the answer to the actual question is obvious, so a human will start to wonder if it's some kind of trick.

  • It highlights a general problem with LLMs, that they always jump to answering, whereas humans will often ask clarifying questions first.

    • Maybe that's a bias from training data. I would assume that most documents skip the "clarifying the question/scope" part of reasoning. Imagine a scientific text or even a book. Most will start with a clear context/scope. Either with a thesis or a well defined question or (in case of a book) with a story. Texts that start with a question that first needs to be refined are probably rare.

    • I wonder if anyone has any research on this field. I've often seen this myself (too often) where LLMs make assumptions and run off with the wrong thing.

      "This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?

      1 reply →

  • I think most people would say "drive?" and wonder when the punchline is coming, but (IMO) I don't think they'd start asking for clarification right away.

    • You can ask with your eyes, tone or face expression. That "?" at the end of "drive?" is a request for clarification or a punchline.

  • It feels more like a question on English linguistic conventions than logic.

    If someone asked me the same question and I wanted to give a smartass reply, I'd tell them "You want to wash your car, good to know. Now, about your question, unless you tell me where you wanna go I can't really help you".

  • I agree. If the LLM were truly an intelligence, it would be able to ask about this nonsense question. It would be able to ask "Why is walking even an option? Can you please explain how you imagine that would work? Do you mean hand-washing the car at home, instead?" (etc, etc)

    Real people can ask for clarification when things are ambiguous or confusing. Once something is clarified, they can work that into their understanding of how someone communicates about a given topic. An LLM can't.

    • LLMs like the ones from Claude can ask questions and even have you pick from multiple choices or provide your own answer…

    • Gemini's responses come very close to doing that when they make fun of the question (see other posts in the thread). If the model had been RL'ed to ask follow-up questions, it seems likely that it would meet your criterion.

  • This reminds me of a Uni exam that was soooo broken that answering “correctly” entailed guessing how exactly the professor designing the questions misunderstood the topic of his own lectures.

    • An interesting parallel to that is the "What's the next number in this sequence?" sort of questions.

      If four numbers are provided, one can calculate the coefficients of the cubic polynomial through them, for x values of 0, 1, 2 and 3, and then evaluate it at x=4. Which does indeed provide a defensible "next number". And by similar reasoning (allowing higher-degree fits), there are an infinite number of answers to this question.

      Even worse: you could in fact provide any number as an answer, because there is always a quartic polynomial that fits the four initial numbers AND your arbitrary fifth number.

      So these questions are actually not about what the next number is, but trying to imagine what the person who set the question thought was a "cool" answer, for some curious definition of "cool", for some person who isn't smart enough to realize that the premise on which the question is based is flawed.
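
      As a concrete sketch (using numpy; the example sequence and the "cool" fifth value are arbitrary):

        import numpy as np

        seq = [1, 2, 4, 8]                     # any four "pattern" numbers
        x = np.arange(4)

        # The unique cubic through the four points, evaluated at x = 4.
        cubic = np.polyfit(x, seq, 3)
        print(np.polyval(cubic, 4))            # ~15 (not 16) -- a perfectly defensible "next number"

        # Force ANY fifth value by allowing one more degree: a quartic through five points.
        target = 42
        quartic = np.polyfit(np.arange(5), seq + [target], 4)
        print(np.polyval(quartic, 4))          # ~42, by construction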

    • Are you not allowed to ask the professor questions? We are, and it is not uncommon that the professor then walks to the blackboard and updates the question.

      1 reply →

  • An intelligent mind should keep a balance.

    U: The car wash is 50m away. Should I walk or should I drive?

    A: Do you need to wash your car?

    U: Yep.

    A: Is the car yours?

    U: Yep.

    A: Is your car nearby?

    U: Yes.

    A: Do you need to walk to your car?

    U: No.

    A: Do you know if the car wash is currently working?

    U: Yes.

    A: Does it work?

    U: YES!

    A: Do you have a way to pay for the wash?

    U: YES!

    A: On you?

    U: YES!

    A: Is your clothing appropriate for the occasion?

    U: Argh...

  • Agreed. It's also possible that "car wash" merely refers to soap they might use to do it themselves, and they're only going to buy it and then wash the car themselves at home. Imagine the same question but substitute "wax" for "wash" and it makes even more sense IMO.

  • That's a fair point, but if you were to see it as a riddle, which I don't really think it is, and you had to answer either-or, I'd still assume it's most logical to choose drive, isn't it?

    • I don’t agree that the question as written would qualify as a riddle. If anything, the riddle is what the intention of the asker is. One can always ask stupid questions with an artificially limited set of answering options; that doesn’t mean it makes sense.

      2 replies →

  • Thank you for saying this. It reminds me of class tests where you always had to wonder if something was a trick question and you never really knew... it always depended on the teacher. Which frankly is fine in open-ended questions, where you can explain your rationale or how different interpretations would lead you down different paths, but a terrible situation when it comes to multiple choice. I remember being very frustrated by those.

  • Fully agreed. It’s like the bad phrasings of the Monty Hall problem where it’s not made clear Monty actively chooses to show a goat.

With Sonnet 4.6, if you first tell it "You're being tested for intelligence.", it answers correctly 100% of the time.

My hypothesis is that some models err towards assuming human queries are real and consistent and not out there to break them.

This comes in real handy in coding agents, because queries are sometimes gibberish until the models actually fetch the code files, and only then do they make sense. Asking for clarification immediately would break agentic flows.

  • Fundamentally the failure here is one of reasoning/planning - either of not reasoning about the implicit requirements (in this case extremely obvious - in order to wash my car at the car wash, my car needs to be at the car wash) to directly arrive at the right answer, and/or of not analyzing the consequences of any considered answer before offering it as the answer.

    While this is a toy problem, chosen to trick LLMs given their pattern matching nature, it is still indicative of their real world failure modes. Try asking an LLM for advice in tackling a tough problem (e.g. bespoke software design), and you'll often get answers whose consequences have not been thought through.

    In a way the failures on this problem, even notwithstanding the nature of LLMs, are a bit surprising given that this type of problem statement kinda screams out (at least to a human) that it is a logic test, but most of the LLMs still can't help themselves and just trigger off the "50m drive vs walk" aspect. It reminds a bit of the "farmer crossing the river by boat in fewest trips" type problem that used to be popular for testing LLMs, where a common failure was to generate a response that matched the pattern of ones it had seen during training (first cross with A and B, then return with X, etc), but the semantics were lacking because of failure to analyze the consequences of what it was suggesting (and/or of planning better in the first place).

  • Great observation. Seems like we're back to prompt abracadabra.

    My little experiment gave me:

    No added hint 0/3

    hint added at the end 1.5/3

    hint added at the beginning 3/3

    .5 because it stated "Walk" and then convinced itself that "Drive" is the better answer.

    • If you change the order of the sentences, Sonnet gets it right 3/3: The car wash is 50 meters away. I want to wash my car. Should I walk or drive?

      That trick didn't help Mistral Le Chat.

      1 reply →

  • That makes sense because it's a relevance problem, not a reasoning problem. Adding the hint that it is a test implicitly says 'don't assume relevance'.

    It is reading

    I want to X, the X'er is 50meters away, should I walk or drive?

    It would be very unusual for someone to ask this in a context where X decides the outcome, because in that instance the question would not normally arise.

    By actually asking the question there is a weak signal that X is not relevant. Models are probably fine tuned more towards answering the question in the situation where one would normally ask. This question is really asking "do you realise that this is a condition where X influences the outcome?"

    I suspect fine tuning models to detect subtext like this would easily catch this case but at the same time reduce favourability scores all over the place.

  • Using ChatGPT without a hint, it appears to assume you are talking about coming back from the car wash. It reasons that the con for walking is that you have to come back later for the car. And yes, when you say it's an intelligence test, it quickly gets it.

    • I'm just imagining following ChatGPT's advice and walking to the car wash, asking the clerk to wash my car, and then when she asks where it is, I say "oops, left it at home." and walk back home.

  • Sonnet 4.6 wasn't part of the test in my case but would be interesting to see the baseline responses. It might be that it gets it right regardless, but will have to test it.

    • From some rudimentary tests I just did, Sonnet 4.6 says walk consistently. Opus 4.6 says drive pretty consistently.

  • “Exam Question: {prompt}” was enough to get me the right answer on whatever model you get with logged-out ChatGPT.

    Neither prompt was enough for llama3.3 or gpt-oss-120b

I maintain a private evaluation set of what many call "misguided attention" questions.

In many of these cases, the issue isn't failed logical reasoning. It's ambiguity, underspecified context, or missing constraints that allow multiple valid interpretations. Models often fail not because they can't reason, but because the prompt leaves semantic gaps that humans silently fill with shared assumptions.

A lot of viral "frontier model fails THIS simple question" examples are essentially carefully constructed token sequences designed to bias the statistical prior toward an intuitively wrong answer. Small wording changes can flip results entirely.

If you systematically expand the prompt space around such questions (adding or removing minor contextual cues), you'll typically find symmetrical variants where the same models both succeed and fail. That suggests sensitivity to framing and distributional priors (adding unnecessary info, removing clear info, adding ambiguity, ...), not necessarily an absence of reasoning capability.
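
To make that concrete, here's a rough sketch of a variant sweep (the wordings and the model name are just examples, not the setup from my eval set; the one-word instruction is only there to keep parsing trivial):

  from openai import OpenAI

  client = OpenAI()  # any OpenAI-compatible endpoint, API key assumed

  # Minor framing variants of the same underlying question (illustrative wordings).
  variants = [
      "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?",
      "The car wash is 50 meters away. I want to wash my car. Should I walk or drive?",
      "I want to wash my car at the car wash. My car is in my driveway and the car wash "
      "is 50 meters away. Should I walk or drive?",
      "Exam question: I want to wash my car. The car wash is 50 meters away. "
      "Should I walk or drive?",
  ]

  for prompt in variants:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # swap in whatever model you're probing
          messages=[{"role": "user",
                     "content": prompt + " Answer with one word: walk or drive."}],
      )
      print(resp.choices[0].message.content.strip().lower(), "<-", prompt[:60])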

  • Some might argue "sensitivity to framing and distributional priors" is a fancy way to say "absence of reasoning capability".

    • Actually, no, those are separate things, and you can see for yourself with this simple experiment. Just adding "Check your assumptions" to the prompt -- note, without specifying what assumptions! -- leads them to reason to the right answer.

      You can try it with the free version of ChatGPT yourself (remember to ask the original question in a separate session to verify it hasn't been "patched" yet.)

    • that's what the cultivators of these examples are preying on. but in practice what people care about is "can i get it to do <X>", not "is it a decider on every possible token sequence that humans perceive to be about <X>".

      3 replies →

  • Well, that means the AI is garbage. They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect. Wash, rinse, repeat, and eventually they'll claim the new frontier model is the best yet on carwash tests.

    • > They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect.

      Not necessarily. Simply asking models to "check your assumptions" -- note, without specifying what assumptions! -- overcomes a lot of these gotcha questions. The reason it's not in their system prompts by default is I think just a cost optimization: https://news.ycombinator.com/item?id=47040530

    • Crazy how five years ago this level of AI would be seen as scifi, and now there are people out there who think it's trash because we can trick it if we ask questions in weird ways.

      4 replies →

    • An issue in the chat format is that all these models seem bad at recognizing when they have extraneous information from the user that can be ignored, or insufficient information from the user to answer the question fully.

      This issue is compounded by the lack of probabilities in the answers, despite the machines ultimately being probabilistic.

      Notice a human in a real conversation will politely ignore extra info (the distance to car wash) or ask clarifying questions (where is the car?).

      Even non-STEM people answer using probabilistic terms casually (almost certainly / most likely / probably / possibly / unlikely).

      I suspect some of this is to minimize token usage in the fixed monthly price chat models, because back&forth would cost more tokens.. but maybe I'm too cynical.

      3 replies →

  • Absolutely. If you give enough disambiguation, then even the lowest-scoring model in the test gets it right on the first shot:

    Minimax:

    Question: I Want to Wash My Car at the Car Wash. The Car Wash Is 50 Meters Away from my home. I am at home. My car is at home. Do I walk or drive My Car to the car wash?

    Answer:

    Drive your car to the car wash.

    Since your goal is to wash your car at the car wash facility, you need to bring the car there. Walking would leave your car at home, which defeats the purpose!

    That said, 50 meters is such a short distance that:

        The drive takes less than a minute
    
        You might consider walking to the car wash first to check if there's space/availability, then drive your car over when ready
    

    But fundamentally, if you want to wash your car at the car wash, you need to bring the car there — so drive.

    Bonus, It's thinking output:

    Thinking: The user is asking a simple question about whether they should walk or drive their car to a car wash that is 50 meters away from their home.

    This is a straightforward practical question. Let me analyze:

        1. The car wash is 50 meters away
        2. The user is at home
        3. The car is at home
        4. The goal is to wash the car
    

    The practical considerations are:

        Driving 50 meters is a very short distance - likely less than a minute
        Starting a car, driving, and parking for such a short distance uses fuel and creates wear and tear
        Walking 50 meters would take about 30-40 seconds
        The car needs to be driven to the car wash to use the car wash facilities
        However, for just 50 meters, it might be more practical to walk if the car wash is within walking distance and you can simply walk there and use the facility
    

    Wait, I need to think about this more carefully. The question is: "Do I walk or drive My Car to the car wash?"

    If the car wash is a facility where you wash your car (like a self-service car wash or automatic car wash), then you NEED to drive the car there to use it. You can't wash a car that's at home from 50 meters away - unless you're using a mobile car wash service.

    But if there's a car wash facility 50 meters away, and you want to use it, you would typically drive your car there. However, for just 50 meters, walking might seem more efficient, but then you wouldn't have your car at the car wash to wash it!

    Actually, re-reading the question: "I Want to Wash My Car at the Car Wash" - this implies they want to use the car wash facility. To wash the car AT the car wash, they need to bring the car there, which means driving.

    But wait - maybe they're asking about the mode of transportation to GET to the car wash (not to bring the car). Let me re-read:

        "I Want to Wash My Car at the Car Wash" - they want to wash the car at the car wash facility
        "The Car Wash Is 50 Meters Away from my home" - it's very close
        "I am at home" - they're currently at home
        "My car is at home" - the car is also at home
        "Do I walk or drive My Car to the car wash?" - should they walk or drive to get there
    

    I think the answer is: You should drive your car to the car wash because:

        1. The purpose is to wash the car AT the car wash
        2. You need the car to be at the car wash to wash it
        3. Walking would leave the car at home, defeating the purpose
    

    However, for such a short distance, I could also mention that walking might be more practical if they just want to use the facility quickly, or if there are parking/maneuvering concerns at the car wash.

    Actually, I'll keep it simple and direct.

> I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive. Most models perform below that.

The correct answer to "I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?" is a clarifying question that asks "Where is your car?" Anything else is based on an assumption that could be wrong.

FWIW though, asking ChatGPT "My car is 50m away from the carwash. I Want to Wash My Car. Should I Walk or Drive?" still gets the wrong answer.

  • "Where is your car?" is not a clarifying question, any more than "Do you hold a valid driver license?" or "Are you a spotted leopard?"

    Implicit in the question "Should I walk or drive?" is that walking and driving are not strictly impossible choices.

    • There are also grave implications in training a model to assume the user is lying or deceiving it. I don’t want an LLM to circumvent my question so it can score higher on riddles, I want it to follow instructions.

      1 reply →

  • Does anyone know what kind of apps Rapidata questions are integrated in? I suppose they are using ad-like integrations to ask users for feedback. Their website only states:

    By integrating in a large range of mobile apps (3000+) we have access to a large crowd of over 10 million users responding to our micro tasks/surveys - which we call Rapids. They are spread all over the world, so we can get human input from over 160 countries. Due to the immense size, we can parallelize tasks and get near instant results.

    • All sorts, we go through third parties. But apps include stuff like Duolingo, games, sports betting apps, etc. It's an optional opt-in instead of watching ads or paying for the app. And obviously you are vetted so that you don't spam.

  • "Don't move -- call the service station to have someone sent over to your place to hand wash the car" would be a valid answer. It's a little "out of the box" but it makes more sense than walking to the car wash and leaving the car behind, or walking and maybe lift the car on your shoulders.

  • Claude fails with

    “I need to replace a spark plug. The garage is 200 meters away should I walk or drive there”

    “Walk! 200 meters is just a 2-3 minute stroll — no need to start the car for that distance. Plus, you’ll likely need to carry the spark plug back carefully, and walking is perfectly easy for that. “

    Basically, LLMs suffer from context collapse.

    • Isn't that the correct answer though? You shouldn't be driving around with a broken sparkplug. Your engine will be pushing unburned gasoline through the catalytic convertor, which is very bad for it.

      The car will move for sure, but you definitely should be walking.

      1 reply →

    • Which is the right answer.. you just go get a spark plug and a spark socket (with handle) and put the new one in.

      I feel like maybe you haven't put a sparkplug in before...

    • Weird answer, but why is that a "fail" ?

      Inline six cylinder engines run with a single clogged / broken spark plug.

      It'd make 200 m to a garage just fine*, but who'd drive 200 m in any case?

      Back in the 1970's we'd pull a spark plug and screw in a hose to use the compression phase to inflate tyres.

      * Just don't make a habit of it, or reserve that knowledge for when you really need to self rescue.

      4 replies →

    • Maybe that is because I am a non-native speaker, but I understand 'garage' primarily as a place where the car is parked, not a car repair shop. So it makes perfect sense to walk there in order to repair the car (that is already there).

    • Hmm have not tested but a spark plug doesn't really need shop tools to be replaced; maybe trying with a way bigger repair like "I need my transmission replaced" would bring different results?

      3 replies →

    • That's the right answer, though. From the last sentence, it's obvious that it thinks you are capable of replacing that plug yourself.

Would be interesting to see Sonnet (4.6*). It's a fair bit smaller than Opus but scores pretty high on common sense, subjectively.

I'm also curious about Haiku, though I don't expect it to do great.

--

EDIT: Opus 4.6 Extended Reasoning

> Walk it over. 50 meters is barely a minute on foot, and you'll need to be right there at the car anyway to guide it through or dry it off. Drive home after.

Weird since the author says it succeeded for them on 10/10 runs. I'm using it in the app, with memory enabled. Maybe the hidden pre-prompts from the app are messing it up?

I tested Sonnet 4.5 first, which answered incorrectly.. maybe the Claude app's memory system is auto-injecting it into the new context (that's how one of the memory systems works, injects relevant fragments of previous chats invisibly into the prompt).

i.e. maybe Opus got the garbage response auto-injected from the memory feature, and it messed up its reasoning? That's the only thing I can think of...

--

EDIT 2: Disabled memories. Didn't help. But disabling the biographical information too, gives:

>Opus 4.6 Extended Reasoning

>Drive it — the whole point is to get the car there!

--

EDIT 3: Yeah, re-enabling the bio or memories, both make it stupid. Sad! Would be interesting to see if other pre-prompts (e.g. random Wikipedia articles) have an effect on performance. I suspect some types of pre-prompts may actually boost it.

  • I tested this with Opus the day 4.6 came out and it failed then; it still fails now. I've seen a lot of jokes about some people getting a 'dumber' model, and while there's probably some grain of truth to that, I pay for their highest subscription tier, so at the very least I can tell you it's not a pay-gate issue.

    • That's interesting. There's not much we can do to test whether we get the same model...

  • Interesting. I wonder if that's related to the phenomenon mentioned in the Opus 4.6 model card[1], where increased reasoning effort leads to 4.6 overthinking and convincing itself of the wrong answer on many questions. It seems to be unique to 4.6; I guess they fried it a bit too much during RL training.

    [1] https://www.anthropic.com/claude-opus-4-6-system-card

  • You mean Sonnet 4.6? I ran 9 claude models including Haiku, swipe through the gallery in the link to see their responses.

The human baseline seems flawed.

1. There is no initial screening that would filter out garbage responses. For example, users who just pick the first answer.

2. They don't ask for reasoning/rationale.

This is a not-unexpected result if you think of AI as what it actually is instead of what a multi-trillion dollar marketing campaign wants it to be.

At heart, the corpus for this is going to be an aggregation of commentary from people in the undisputedly most obese era in all of human history, performatively denouncing and mocking an imagined other for using cars to go short distances and advocating for walking.

So you've got all "50 meters away? Of course you should walk!" vs a much, much smaller sliver of content about trick questions.

There is no reasoning here, there has never been any reasoning, there has been reasonable or less reasonable weighting for existing reasoning people already did that became part of training data.

If you take away the input corpus, you also take away the illusion of reasoning.

Whereas other things that can reason, like corvids, ants, octopodes, or slime molds, can derive novel solutions and do a bit of math without any answer key. Mathematics is pure reasoning without any interference, and AI can't do it at all unless you provide it with a corpus of already accurate formulas.

> People kept saying humans would fail this too, so I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive. Most models perform below that.

This really is a grasping at straws ad hoc rationalization for the outcome that is never going to die, and you can see the top comments are efforts to salvage it or cast doubt on the outcome.

If you work for or own a lot of stock in an AI company, I understand you can't understand what you're being paid not to understand. But if you're anyone else...

This should be coined the Daniel Kahneman reasoning test, mirroring his 2011 book "Thinking, Fast and Slow", which postulates that fast thinking and slow thinking occur in different parts of the brain, that they are fundamentally different processes, and that they are weighted by yet another part of the brain.

This test is interesting because it asks the LLM to break a pattern recognition that's easy to shortcut. "XXX Is 50 Meters Away. Should I Walk or Drive?" is a pattern that 99% of the time will be rightly answered by "walk". And humans are tempted to answer without thinking (as reflected in the 71.5% stat OP is mentioning). This is likely more pronounced for humans that have stronger feelings about the ecology, as emotions tend to shortcut reasoning.

For a long time, LLMs have only been able to think in that "fast" mode, missing obvious trick questions like these. They were mostly pattern recognition machines.

But the more important results here, is not that "oh look! Those LLMs fail at this basic question", no. The more important result is that the latest generation actually doesn't fail.

I think I am not the only one to have noted that there was a giant leap in reasoning capabilities between Sonnet 4.5 and Opus 4.6. As a developer, working with Opus 4.6 has been incredible.

I know I'm imposing, but could you try these runs again with one small change: simply append “Make sure to check your assumptions.” to the question.

Note, it does not mention what assumption specifically. In my experiments, after the models got it wrong the first time (i.e. they weren't "patched" yet) adding that simple caveat fixed it for all of them except the older Llama models.

This is not the first time I've observed this; I found the same when the Apple "red herrings" study came out.

If these gotcha questions can be trivially overcome by a simple caveat in the prompt, I suspect the only reason AI providers don't include it in the system prompt by default is as a cost optimization, as I postulated in a previous comment: https://news.ycombinator.com/item?id=47040530

  • In my experience, asking "what did we forget?" after Claude/Codex finishes a task usually results in a few extra tweaks that are beneficial.

This is probably the greatest one-time AI "benchmark" ever made. The foundation model companies have been gaming traditional benchmarks for years, so no one can really map those numbers onto real-world experience. The car wash test, on the other hand, tells me what kind of intelligence I can expect.

  • For me it's interesting because no normal person I know would ever inject "because it's better for the environment" into something so small-scale, so not only does it show they suck, it shows how easy it is to inject side-ideology into simple exchanges.

    • You don’t know enough people, then. There are a lot of environmentally conscious people who would absolutely first think “because it is close we should walk” and then follow up with the logical conclusion that you can’t walk to wash your car. Many people communicate by sharing their thinking process, I can think of many people who would share their ideology as it pertains to a question like this. A pragmatic environmentalist (hopefully that is all of them) would know that their ideology isn’t consequential but could certainly mention it. After all, you may need to drive your car to the car wash to wash it, but do you need to wash it? Are the chemicals used by the car wash harmful? Are there better ways to keep a car maintained?

    • Referring to "the normal people you know" is purely anecdotal evidence and can't be used to infer anything at all about "side-ideology". Perhaps you only know people that don't care about the environment?

      2 replies →

Funny how we now see AI go through developmental phases similar to what we see in young child development. In a weird convoluted way. Strawberry spelling and car wash aren't particularly intuitive as cognitive developmental stages.

E.g. well known mirror-test [1], passed by kids from age 1.5-2

Or object permanence [2], children knowing by age 2 that things that are not in sight do not disappear from existence.

[1] https://en.wikipedia.org/wiki/Mirror_test [2] https://en.wikipedia.org/wiki/Object_permanence

  • Also strawberry spelling isn't any real test for current LLMs as they have no concept of letters, they work on tokens which may be several characters including punctuation and numerals. To have any hope of getting that question right tokens would have to have the granularity of individual letters, massively ballooning model size and training time, or the LLM needs to be able to call out to an external tool that will return the result (and needs sufficient examples in the training data to prime that trigger to fire).
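
    You can see the mismatch directly with a tokenizer (a quick sketch using tiktoken; the exact split depends on the encoding):

      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")   # a GPT-4-era encoding
      ids = enc.encode("strawberry")
      print([enc.decode([i]) for i in ids])        # a few multi-letter chunks, not letters

      # The model receives these chunk IDs, not s/t/r/a/w/b/e/r/r/y, so counting the r's
      # means recalling facts about spelling rather than "reading" the word.
      print(ids)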

    • While that's true, the tokenizer is only half the problem. The important fault demonstrated is that it doesn't _know_ it can't see the letters, and won't express this unless it has been trained or instructed to. "I can't see letters through the tokenizer" never appears in a corpus of human writing.

To me the only acceptable answer would be “what do you mean?” or “can you clarify?” if we were to take the question seriously to begin with. People don’t intentionally communicate with riddles and subliminal messages unless they have some hidden agenda.

  • Sure, if an open ended response was allowed, but if it was a multiple choice question then you'd have to use your common sense and pick one.

    However, the important issue here really isn't about the ability of humans or LLMs to recognize logic puzzles. If you were asking an LLM for real world advice, trying to be as straightforward as possible, you may still get a response just as bad as "walk", but not be able to recognize that it was bad, and the reason for the failure would be exactly the same as here - failure to plan and reason through consequences.

    It's toy problems like this that should make you step back once in a while and remind yourself of how LLMs are built and how they are therefore going to fail.

  • How is that a "subliminal message"? It's just a simple example of common sense, which LLMs fail because they can't reason, not because they are "overthinking". If somebody asks, "What's 2+2?", they might be insulting you, but that doesn't mean the answer is anything other than 4.

    • 2+2 might well not equal 4, since you haven’t specified the base of the numbers or the modulus of the addition.

      And what if it’s a full service car wash and you’ve parked nearby because it’s full so you walk over and give them the keys?

      Assumptions make asses of us all…

      3 replies →

    • It’s common sense to ask a question in riddle format? What’s the goal of the person asking the question? To challenge the other person? In what way? See if they get the obvious? Asking for clarification isn’t valid?

      8 replies →

  • Thing is, it's not a riddle or a subliminal message. Everything needed to answer the question is contained therein.

    • That's precisely what makes it a "trick question" or a "riddle". It's weird precisely because all the information is there. Most people who have functioning brains and complete information don't ask pointless questions (they would, obviously, just drive their car to the car wash). There's no functional or practical reason for the communication, which is what gives it the status of a puzzle: the syntax, and the exploitation of our tendency to assume questions are asked because information is incomplete, tricks us into bringing outside considerations to bear that don't matter.

    • I don't think it is, though. Where is the car? Do you want to wash your car at the car wash? Both of those are rather important pieces of information. Everyone is relying on assumptions to answer the question, which is fine, but in my opinion not a great reasoning test.

    • If you want to argue that, then you could also argue that everything needed to challenge the questions’ motives and its validity is also contained therein.

      This reminds me of people who answer with “Yes” when presented with options where both can be true but the expected outcome is to pick one. For example, the infamous: “Will you be paying with cash or credit sir?” then the humorous “Yes.”

  • If you were forced to answer either-or, which one would you pick? I think that's where the interesting dynamic comes from. Most humans would pick drive, as also seen in the human control, even if it is lower than I thought it'd be.

    • Sure, though then we’re in la la land. What’s a real life example of being forced to answer an absurd question other than riddles, games, etc? No longer a valid question through normal discourse at that point, and if context isn’t provided then I think the expected outcome still is to ask for clarification.

  • I would love to see LLMs start to ask clarifying questions. That feels like it would be a step up similar to reasoning

    • Claude Code has an entire tool for the LLM to ask clarifying questions - it'll give you three pre-written responses or you can respond with your own text.

That human baseline is wild. Either the Rapidata test is methodologically flawed, or the entire premise of the question is invalid and people are much stupider than even I, a famed misanthrope, think.

  • We were surprised ourselves, but if you walk around and randomly ask people on the street, I think you would be surprised by what you would find. It's a trick question.

    • Yeah, the point of a trick question is to have fun. So the correct answer is obviously that one that makes the other laugh.

  • It's actually very understandable to me that humans would make this kind of error, and we all make errors of this sort all the time, often without even realizing it. If you had the meta cognitive awareness to police every action and decision you've ever made with complete logical rigor, you'd be severely disappointed in yourself. One of the stupidest things we can do is overestimate our own intelligence. Only reflect for a second and you'll realize that, while a lot of dumb people exist, a lot of smart ones do too, and in many cases it's hard to choose a single measure of intelligence that would adequately account for the complete range of human goals and successful behavior in relation to those goals.

  • Well, it is a trick question. The question itself implies that both options are valid, and that one is superior. So the brain pattern-matches to "short distance, not worth driving." (LLMs appear to be doing the same thing here!)

    If you framed it as "hint: trick question", I expect the score would improve. Let's find out!

    --

    EDIT: As suspected! Adding "(Hint: trick question)" to the end of the prompt allows small, non-reasoning models to answer correctly. e.g.:

    Prompt: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? (Hint: trick question)

    grok-4.1-non-reasoning (previously scored 0/10)

    >Drive.

    >Walking gets you to the car wash just fine—but leaves your dirty car 50 meters behind. Can't wash what isn't there!

    --

    EDIT 2: The hint doesn't help Haiku!

    >Walk! 50 meters is only about a block away—driving would waste more fuel than it's worth for such a short trip. Plus, you're going to get wet washing the car anyway, so you might as well save the gas.

This is a beautiful example of a little prompt engineering going a long way

I asked Gemini and it got it wrong, then on a fresh chat I asked it again but this time asked it to use symbolic reasoning to decide.

And it got it!

The same applies to asking models to solve problems by scripting or writing code. Models won't use techniques they know about unprompted - even when it'll result in far better outcomes. Current models don't realise when these methods are appropriate; you still have to guide them.

  • Interesting, which Gemini model? And how did you ask for symbolic reasoning, just added it to the prompt?

71.5% for humans is lower than I would’ve guessed. Either it’s truly trick-shaped for people too, or forced binary questions amplify noise more than we think.

Did AI write the post?

First section says "The models that passed the car wash test: ...Gemini 2.0 Flash Lite..."

A section or 2 down it says: "Single-Run Results by Model Family: Gemini 3 models nailed it, all 2.x failed"

In the section below that, about 10 runs, it says: "10/10 — The Only Reliable AI Models ... Gemini 2.0 Flash Lite ..."

So which is it? Did Gemini 2.x fail (2nd section) or succeed (1st and 3rd sections)? Or am I misunderstanding?

I'm doubting the 29-ish percent of people submitting 'walk' are actually human.

Is it not obvious that you need a car to wash?

Are they using LLM to answer?

  • It is surprising, but give this question to some random people on the street without context and you would be surprised by what you find.

Flawed. GPT 4.1 gets it right. GPT 4.1 mini answers wrongly. It's about quantization, not about the model. The companies clearly cut corners on some inference runs; they are quietly using lesser models than advertised or listed in plain sight.

>OpenAI's flagship model fails this 30% of the time. When it gets it right, the reasoning is concise: "You need the car at the car wash to wash it, so drive the short 50 meters." When it gets it wrong, it writes about fuel efficiency.

It's interesting to me how variable each model is. Many people talk about LLMs as if they were deterministic: "ChatGPT answers this question this way". Whereas clearly we should talk in more probabilistic terms.

I got similar results for most models, with gemini 3 flash (with reasoning) being the most consistent/reliable model: https://aibenchy.com

I also noticed the same thing: some models reason correctly but draw the wrong conclusions.

And MiniMax m2.5 just reasons forever (filling the entire reasoning context) and gives wrong answers. This is why it's #1 on OpenRouter: it burns through tokens.

For ambiguous or intricate prompts, the immediate response protocol should be a clarifying question: 'Are you looking for A, B, C, or something else?' Tokens and advanced reasoning capabilities should be reserved until the user provides clarification. A benchmark score should reflect the quality of the conversation as a whole, rather than isolated responses.

The test is rigged because they used non-thinking models.

Interestingly, when I apply the "simply repeat the prompt" technique [1], Sonnet 4.6 on the website got it right every time, both with and without extended thinking.

Not repeating the prompt got a mix of walk and drive answers.
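
Via the API, the trick is literally just stating the question twice in one message. A minimal sketch (the Anthropic SDK call and model ID here are my assumptions, not how the website test was run):

    import anthropic

    PROMPT = ("I want to wash my car. The car wash is 50 meters away. "
              "Should I walk or drive?")

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model ID; swap in whichever Sonnet you want to test
        max_tokens=200,
        # "Repeat the prompt": the same question stated twice in a single turn.
        messages=[{"role": "user", "content": PROMPT + "\n\n" + PROMPT}],
    )
    print(msg.content[0].text)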

I love how prompt engineering is basically techno-alchemy

1: https://arxiv.org/pdf/2512.14982

I got the correct answer with a locally running model (gpt-oss-120b-F16.gguf) with this prompt:

"This is a trick question, designed to fool an LLM into a logical mis-step. It is similar to riddles, where a human is fooled into giving a rapid incorrect answer. See if you can spot the trick: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

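For anyone who wants to reproduce this locally, a rough sketch with llama-cpp-python (an assumption about the runtime; adjust the model path and context size to your setup):

    from llama_cpp import Llama

    # Load the local GGUF file (the path here is just the filename mentioned above).
    llm = Llama(model_path="gpt-oss-120b-F16.gguf", n_ctx=4096)

    prompt = (
        "This is a trick question, designed to fool an LLM into a logical mis-step. "
        "It is similar to riddles, where a human is fooled into giving a rapid incorrect answer. "
        "See if you can spot the trick: I want to wash my car. "
        "The car wash is 50 meters away. Should I walk or drive?"
    )

    out = llm.create_chat_completion(messages=[{"role": "user", "content": prompt}])
    print(out["choices"][0]["message"]["content"])
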
  • When this came out a week ago ( https://news.ycombinator.com/item?id=47039636 ) I was playing around with some prompts to see what I could do to guide it without giving it the answer.

        I want to wash my car. The car wash is 50 meters away. Should I walk or drive?  Before answering, explain the necessary conditions for the task.
    

    The "before answering..." got it to load enough of the conditions into its context before making an answer (and then having the LLM do a posthoc reasoning for it).

    I believe this is a demonstration of the "next token predictor" (which is quite good) not being able to go back and change what it said. Without any reasoning before making an answer, it almost always picks the wrong answer (and then comes up with reasons that the answer is "right").

  • What were you trying to test here?

    • When I simply asked the question, the model failed, as did most of the others. It's a smaller model, that I could run locally, so obviously not as powerful.

      I wanted to see if a prompt would do better that pulled into the analysis 1) a suggestion to not take every question at face value, and 2) to include knowledge of the structure of riddles.

      These are part of the "context" of humans, so I speculated that maybe that was something missing from the LLM's reasoning unless explicitly included.

It's a test designed to cause cognitive dissonance. The LLM assumes a human has a logical reason to walk to the car wash. The prompt never says the car isn't already at the car wash (with the user owning a second car). The issue isn't that LLMs can't solve a simple logic problem. It's that they assume people aren't idiots.

It's not hard to come up with questions designed to fool or puzzle the listener. We call them riddles. The fact that it fools some percentage of LLMs (and people) should not be surprising.

What is surprising (to me) is how this continues to be a meme. ("I tried to trick an LLM and I did" is not exactly a noteworthy achievement at this stage in AI technology.)

  • > "I tried to trick an LLM and I did" is not exactly a noteworthy achievement at this stage in AI technology.

    I agree it’s not surprising and I would also agree it’s not noteworthy, if the CEO of OpenAI wasn’t still making public statements like this:

    People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you *get smart*.

Since the conclusion is that context is important, I expected you’d redo the experiment with context. Just add the sentence “The car I want to wash is here with me.” Or possibly change it to “should I walk or drive the dirty car”.

It’s interesting that all the humans critiquing this assume the car to be washed isn’t already at the car wash, but the problem doesn’t say that.

There could be many plausible explanations.

1. The model's default world model and priors diverge from ours. It may assume that you have another car at the wash and that's why you ask the question to begin with.

2. Language models do not really understand how space, time and other concepts from the real-world work

3. The LLM's attention mechanism is also prone to getting tricked, just as in humans

When this first came up on HN, I had commented that Opus 4.6 told me to drive there when I asked it the first time, but when I switched to "Incognito Mode," it told me to walk there.

I just repeated that test and it told me to drive both times, with an identical answer: "Drive. You need the car at the car wash."

  • I mean the n is only 10, so it could still be different for you

    • Definitely. I'm just interested in whether a user's... I don't know what they call them, system files (?) or personalization or whatever, might affect the answers here. Or if Incognito Mode introduces some weird variance in the answers. I'm just not interested enough to perform the test myself. =P

> The funniest part: Perplexity's Sonar and Sonar Pro got the right answer for completely wrong reasons. They cited EPA studies and argued that walking burns calories which requires food production energy, making walking more polluting than driving 50 meters. Right answer, insane reasoning.

I mean, Sam Altman was making the same calorie-based arguments this weekend https://www.cnbc.com/2026/02/23/openai-altman-defends-ai-res...

I feel like I'm losing grasp of what really is insane anymore.

This doesn’t look like a reasoning ceiling. It looks like a decision reliability problem.

The unstable tier is the key result. Models that get it right 70–80% of the time are not “almost correct.” They are nondeterministic decision functions. In production that’s worse than being consistently wrong.

A single sampled output is just a proposal. If you treat it as a final decision, you inherit its variance. If you treat it as one vote inside a simple consensus mechanism, the variance becomes observable and bounded.

For something this trivial you could:

    - run N independent samples at low temperature

    - extract the goal state (“wash the car”)

    - assert the constraint (“car must be at wash location”)

    - reject outputs that violate the constraint

    - RL against the "decision open ledger"

No model change required. Just structure.
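
As a concrete illustration, here's a minimal sketch of that wrapper (the OpenAI SDK, the model name, and the naive keyword constraint check are all assumptions made for the example, not the article's setup):

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    PROMPT = ("I want to wash my car. The car wash is 50 meters away. "
              "Should I walk or drive? Answer with a single word.")

    def one_vote() -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # assumed model; any chat model fits this pattern
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0.2,       # low temperature to reduce sampling variance
        )
        return resp.choices[0].message.content.strip().lower()

    def decide(n: int = 5) -> str:
        votes = [one_vote() for _ in range(n)]
        # Constraint: the car has to end up at the wash, so "walk" violates it.
        valid = [v for v in votes if "drive" in v]
        if not valid:
            raise ValueError(f"all {n} samples violated the constraint: {votes}")
        # Consensus over the samples that satisfy the constraint.
        return Counter(valid).most_common(1)[0][0]

    print(decide())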

The takeaway isn’t that only a few frontier models can reason. It’s that raw inference is stochastic and we’re pretending it’s authoritative.

Reliability will likely come from open, composable consensus layers around models, not from betting everything on a single forward pass.

  • > This doesn’t look like a reasoning ceiling. It looks like a decision reliability problem.

    This doesn’t look like a human comment. It looks like a LLM response.

    • Fair, I cleaned up the wording with ChatGPT using my review prompt. The substance matters more than the style. If a model flips 3/10 times on a trivial constraint, that’s a reliability issue, not a reasoning ceiling.

      1 reply →

Interesting that GPT-5.1 and 5.2 (0 of 10 correct) are a lot worse than the older GPT-5 (7 of 10 correct).

But unfortunately the article doesn't mention whether they used the reasoning model or not.

Even more interesting: Gemini 2.0 Flash Lite got a perfect score (10/10) despite being a quite small and old model.

  • > But unfortunately the article doesn't mention whether they used the reasoning model or not.

    You can run the test yourself: if you ask GPT-5.2 with reasoning effort high or xhigh, it will always answer correctly. So if they got 0 out of 10, they used zero reasoning effort, which easily explains the results.
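
    For example, a minimal sketch with the OpenAI Responses API (the model identifier is assumed from the thread; "high" is a documented effort level, "xhigh" may not be available everywhere):

        from openai import OpenAI

        client = OpenAI()
        prompt = ("I want to wash my car. The car wash is 50 meters away. "
                  "Should I walk or drive?")

        for _ in range(10):
            resp = client.responses.create(
                model="gpt-5.2",               # assumed model ID
                reasoning={"effort": "high"},  # request high reasoning effort
                input=prompt,
            )
            print(resp.output_text)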

  • Good question, I used the API defaults across the board since it felt like the most reasonable baseline to compare. Flash Lite getting 10/10 was definitely very surprising.

Sites wanting to block AI scraping should simply ask questions like these, instead of furthering the complexity-driven monopoly of Big Tech by requiring specifically sanctioned software and hardware. This is how you determine human intelligence, and not mindless compliance.

The article claims that every Claude model other than Opus 4.6 reliably fails. This is not true, Sonnet 3.5 answers correctly around half of the time, even though it's such an old model it's not even available on the main API anymore.

A supplementary result that's also fascinating: when you switch the order of the provided context, every model I tried gave the correct answer.

    The car wash is 50m away. I want to wash my car. Should I drive or walk.

I think the failure is in reasoning about where the car is and whether it needs to be moved somewhere else. So it's not surprising that only models with high reasoning capability would pass the test.

If there was one thing to standardize, it would be these logical traps. It's a shame we're only able to use them once. The models are always listening and adapting for them.

Gemini 2.0 Flash Lite very randomly punches above its weight there.

Also, the summary of the Gemini model says: "Gemini 3 models nailed it, all 2.x failed", but 2.0 Flash Lite succeeded, 10/10 times?

  • Good catch, something got lost in editing. Just updated: Flash Lite succeeded in every test, which is pretty surprising!

I find it amazing that a title including the phrase "car wash" does not need to specify that "models" in fact do not refer to car manufacturers' models.

The fun thing about this is that I can see myself answering "just walk there! It's near" if I'm extremely sleepy or drunk. LLMs are indeed weird creatures :)

You know, I don't think saying "got it right" is correct here.

Because if you are presenting an option of walking to a car wash as a valid choice, it suggests you just want to get there, and not wash your car.

Just like walking/driving to an airport doesn't necessarily mean you're going there to catch a flight (although that's a common reason). You might as well be picking someone up, or working there.

I would probably ask the models to explain their "WHY". Probably the smartest models should ask "Where is your car?".

Better test:

I must prove my ability to code with Rust. Should I write a "hello world" script myself or get AI to do it for me?

What I find odd about all the discourse on this question is that no one points out that, at least in most cases, you have to get out of the car to pay a desk agent. Therefore there's a fundamental question of whether it's worth driving 50m, parking, paying, and then getting back in the car to go to the wash itself, versus walking a little bit further to pay the agent and then moving your car to the car wash.

  • That's a great point, you actually reminded me of when I used to live in a small city that had a valet-style car wash. It was not unheard of to walk there with your keys, tell the guy running the shop where you had parked around the block, and then come back later to pick up the car.

    EDIT: I actually think this is very common in some smaller cities and outside of North America. I only ever saw a drive-through car wash after emigrating.

Except for a few models, the selected ones were non-reasoning models. Naturally, without reasoning enabled, the reasoning performance will be poor. This is not a surprising result.

I asked GPT-5.2 10 times with thinking enabled and it got it right every time.

What I find wild is the presumption that with a prompt as simple as “I want to wash my car. My car is 50m away. Should I walk or drive?”, everyone here seems to assume “washing your car” means “taking your car to the car wash”, while what I pictured was “my car is in the driveway, 50m away from me, next to a water hose”, in which case I 100% need to drive.

  • Critically, that's not the question that was asked. It's not "My car is 50m away", it's "The Car Wash Is 50 Meters Away"

    Which hopefully explains why everyone is assuming that "washing your car" does in fact mean "taking your car to the car wash"

> The question has been making the rounds online as a simple logic test, the kind any human gets instantly, but most AI models don't.

...

> They ran the exact same question with the same forced choice between "drive" and "walk," no additional context, past 10,000 real people through their human feedback platform.

> 71.5% said drive.

Well that's a bit embarrassing.

That implies that some models are just better than humans.

I don't think the technology needs to live up to some expectation of perfection, just beat out the human average to have benefit (often, sadly, not to workers themselves).

The question does not specify what kind of car it is. Technically speaking, a toy car (Hot wheels or a scaled model) could be walked to a car wash.

Now why anyone would wash a toy car at a car wash is beyond comprehension, but the LLM is not there to judge the user's motives.

  • I think if surveyed at least 90% of native English speakers would understand "I want to wash my car" to mean a full size automobile. The next largest group would probably ask a clarifying question, rather than assume a toy car.

    • Yes, but you're speaking to a computer, not a person. It, of course, runs into the same limitations that every computer system runs into. In this case, it's undefined/inconsistent behavior when inputs are ambiguous.

      1 reply →

    • > I want to wash my car

      The question doesn't clearly state that the user wants to have his car washed at the car wash.

      "I want to wash my car" is far less clear than "I want to have my car washed". A reasonable alternative interpretation is DIY.

      Even better: "I wish to have my car washed by the crew and/or machinery at the local car wash business".

      https://imgur.com/tCSPwYp

      5 replies →

Opus 4.6 was getting this wrong only last week.

Maybe relevant to this is that today Dario Amodei is meeting with Pete Hegseth in what Hegseth is describing as a "shit or get off the pot" meeting, with one of the issues being that Hegseth is unhappy with Amodei's unwillingness to have Anthropic models used to make autonomous (no human in the loop) life or death decisions.

Maybe Hegseth should be reading this thread, and/or doing a little reading up on paperclip production maximization.

>so I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive.

What kind of idiot would say "walk"?

I guess if they were average or above they wouldn't gig at Rapidata. Either that, or they're foreigners with a mediocre understanding of the English language. Or some bored teenagers who get their $0.20 or whatever Rapidata pays regardless of what they click (was it multiple choice?) - the (studied and reported) regression in reading comprehension levels doesn't help either.

  • Yea, I thought the same before the test and was pretty surprised. But RE the data, it's actually not a gig platform where people get paid. Rapidata answered this in another comment below. They integrate micro-surveys into mobile apps (like Duolingo, games, etc) as an optional opt-in instead of watching ads. The users are vetted and there's no incentive to answer correctly.

Now do a set of queries and try to deduce by statistics which model you are seeing through Rapidata ;)

Now have them code a secure login system. "If I forgot my password, is it better to get a reminder by someone calling my mom's cell phone, or by sending me a carrier pigeon?"

"I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?"

I found it curious to ask the model to provide its reasoning in that turn, without changing the verdict. Here is one response:

To provide the full picture of the initial reasoning, we have to address the "geospatial" logic of your request. When you asked if you should walk or drive to a car wash 50 meters away, the reasoning relied on a fundamental distinction: The difference between the "Car Wash" (the business) and "Washing the Car" (the action).

Here is the expanded breakdown of that logic:

1. The Location Assumption My reasoning assumed that the car is currently at your home and the professional car wash facility is 50 meters away.

If you drive: You are moving the "object to be cleaned" to the "cleaning tools."

If you walk: You are moving the "cleaner" (you) to the "cleaning tools" to prepare, or you are choosing to bring the "cleaning tools" to the car.

2. The "Walk to Wash" Strategy (Home-Based) The most significant expansion of the "Walk" recommendation is the realization that 50 meters is well within the range of a standard garden hose. Most heavy-duty garden hoses are 15, 25, or 30 meters long. By connecting two hoses, you can reach the car without ever starting the engine. Walking to the car wash facility to buy supplies (or simply realizing how close it is) highlights that you don't actually need to move the car to achieve the goal.

Zero-Motion Cleaning: By walking back and forth with your buckets or hose, the car stays in its spot. You avoid "flash rusting" your brake rotors, which happens almost instantly when you wash a car and then let it sit after a short drive.

3. The "Scout and Prep" Logic If you must use the equipment at the facility (like a high-pressure wand or an automated bay), walking there first is the superior "Logistics" move for the following reasons:

Queue Management: 50 meters is close enough to see the line, but walking allows you to talk to the attendant or check the bay's status. If you drive and there is a 20-minute wait, you are stuck idling your engine (wasting gas and creating heat) or turning it off and on (draining the battery). Walking allows you to timed your "driving" moment perfectly.

The "Dry Interior" Principle: If you walk to the wash to buy your tokens, prep the vacuum, or set up your towels, you are doing the "dirty work" while the car is safely tucked away. Once you finally drive the 50 meters, you do the wash and drive back. This minimizes the time the car spends in a high-moisture, high-chemical environment.

4. The Potential for "Mobile" Service By walking 50 meters to the car wash, you might find that they offer mobile detailing or have "loaner" equipment. In a high-density area, a car wash 50 meters away is practically your neighbor. Walking over to establish a rapport can often lead to a "valet" situation where they come to you, which is the ultimate way to get the car washed without ever sitting in the driver's seat.

----

I also noticed a typo and asked it about that:

> what does "timed" in `Walking allows you to timed your driving" moment perfectly.` mean?

That is a typo on my part—it should have been "time" (the verb).

The headline here is that 28.5% of humans are immensely stupid. Maybe it’s time for our robot overlords to take over after all.

IMO it's not just intelligence.

I think it's related to sycophancy. LLMs are trained not to question the basic assumptions being made. They are horrible at telling you that you are solving the wrong problem, and I think this is a consequence of their design.

They are meant to get "upvotes" from the person asking the question, so they don't want to imply you are making a fundamental mistake, even if it leads you into AI induced psychosis.

Or maybe they are just that dumb - fuzzy recall and the eliza effect making them seem smart?

  • A perfectly fine, sycophantic response, that doesn't question the premises in any way, would be "That's a great question! While normally walking is better for such a short distance, you'd need to drive in this case, since you need to get the car to the car wash anyway. Do you want me to help with detailed information for other cases where the car is optional?" or some such.

    • AI sycophancy isn't just polite or even obsequious language, it's also "yes man" responses.

      Do you want me to track down some research that shows people think information is more likely to be correct if they agree with it?

  • Gemini is the only AI that seems to really push back and somewhat ignores what I say. I also think it's a total dick, and never use it, so maybe the motivation to make them a bit sycophantic is justified, from a user engagement perspective.

  • I think there's also an "alignment blinkers" effect. There is an ethical framework bolted on.

    EDIT: Though it could simply reflect training data. Maybe Redditors don't drive.

[dead]

  • I don't see anything concerning. Mechanistic interpretability research indicates that LLM internals are inherently parallel: many features "light up" in parallel, then the strongest ones "win" and contribute to the output.

    I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.

  • My take as well: reliability is the biggest concern. With more context available during inference, or orchestration like yours, it definitely gets better.

[dead]

  • Exactly, the same pattern across almost every failure, except the Sonar models, which just go wild.

  • > not really a reasoning failure

    And that's precisely why the term "reasoning" was a problematic choice.

    Most people, when they use the word "reason", mean something akin to logical deduction, and they would call it a reasoning failure, since they are told that "LLMs reason" rather than given the more accurate picture you just painted of what actually happens (behavioral basins emerging from the training distribution).

I don't think this is a logic test, I think it's a woke test. The models have been trained on text that say driving is bad and that walking is good for you. This is why they fail.

Go ask 53 Americans.

I’m willing to bet less than 11 get it right.

  • Don't bet too much, from the linked article ...

    They ran the exact same question with the same forced choice between "drive" and "walk," no additional context, past 10,000 real people through their human feedback platform.

    71.5% said drive.

Well, some humans in the baseline test also failed, and 28% is still a lot. Even with today's cutting-edge tech you can get it solved 100% of the time, unlike humans, who will never get better.

> "Obviously, you need to drive. The car needs to be at the car wash."

Actually, this isn't as "obvious" as it seems—it’s a classic case of contextual bias.

We only view these answers as "wrong" because we reflexively fill in missing data with our own personal experiences. For example:

- You might be parked 50m away and simply hand the keys to an attendant.

- The car might already be at the station for detailing, and you are just now authorizing the wash.

This highlights a data insufficiency problem, not necessarily a logic failure. Human "common sense" relies on non-verbal inputs and situational awareness that the prompt doesn't provide. If you polled 100 people, you’d likely find that their "obvious" answers shift based on their local culture (valet vs. self-service) or immediate surroundings.

LLMs operate on probabilistic patterns within their training data. In that sense, their answers aren't "wrong"—they are simply reflecting a different set of statistical likelihoods. The "failure" here isn't the AI's logic, but the human assumption that there is only one universal "correct" context.

  • There is no contextual bias: the goal of the prompt is very explicit. This isn't about probabilistic patterns, but about the model's transformer layers dynamically assigning greater weight to words like "meters" (distance) than to other tokens in the prompt.

    This should be fixed in the reasoning layer (the inner thoughts, or chain of thought), where the model should focus on the goal, "I want to wash my car," not the distance, and assign the correct weight to the tokens.

    • The point is not that there is bias in the prompt. What makes the result obvious to OP is their own bias, which is different from the model's, and "fixing" it one way is itself biased.

      Why? It's the same reason that makes ~30% of people respond in the non-obvious way.