Comment by underlines
1 month ago
I maintain a private evaluation set of what many call "misguided attention" questions.
In many of these cases, the issue isn't failed logical reasoning. It's ambiguity, underspecified context, or missing constraints that allow multiple valid interpretations. Models often fail not because they can't reason, but because the prompt leaves semantic gaps that humans silently fill with shared assumptions.
A lot of viral "frontier model fails THIS simple question" examples are essentially carefully constructed token sequences designed to bias the statistical prior toward an intuitively wrong answer. Small wording changes can flip results entirely.
If you systematically expand the prompt space around such questions, adding or removing minor contextual cues, you'll typically find symmetrical variants where the same models both succeed and fail. That suggests sensitivity to framing and distributional priors (adding unnecessary info, removing clear info, adding ambiguity, ...), not necessarily an absence of reasoning capability.
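A minimal sketch of that kind of prompt-space expansion. The base question, the distractor cues, and the `ask_model` call are all hypothetical placeholders, not a real API; the point is just the harness shape: generate framing variants of one question, then compare answers across them.

```python
# Sketch: systematically generating framing variants of a "gotcha" question.
# BASE and the cue lists are illustrative; ask_model stands in for any LLM API.
BASE = ("The car wash is 50 meters away. I am at home with my car. "
        "Do I walk or drive to the car wash?")

DISTRACTORS = [
    "It is a sunny Tuesday afternoon.",   # irrelevant context
    "I usually enjoy a short walk.",      # biases the prior toward "walk"
]
CLARIFIERS = [
    "Note: the car itself must be at the car wash to be washed.",
    "Check your assumptions.",
]

def make_variants(base, additions):
    """Return the unmodified base prompt plus one variant per added cue."""
    return [base] + [f"{base} {extra}" for extra in additions]

variants = make_variants(BASE, DISTRACTORS + CLARIFIERS)
# for v in variants:
#     print(ask_model(v))  # compare answers across framings
```

If the same model flips between "walk" and "drive" depending on which cue is present, that is the framing sensitivity described above.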
Some might argue "sensitivity to framing and distributional priors" is a fancy way to say "absence of reasoning capability".
Actually, no, those are separate things, and you can see for yourself with a simple experiment. Just adding "Check your assumptions" to the prompt -- note, without specifying which assumptions! -- leads them to reason to the right answer.
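The experiment above can be sketched as a tiny A/B harness. Everything here is a stand-in: `fake_model` is an offline stub used only to exercise the harness, not a real model, and the prompt text is illustrative.

```python
# Sketch of the "check your assumptions" A/B experiment: ask the same
# question with and without the generic nudge, then compare answers.
NUDGE = "Check your assumptions."

def with_nudge(prompt, nudge=NUDGE):
    """Append the generic nudge to an otherwise unchanged prompt."""
    return f"{prompt}\n\n{nudge}"

def run_ab(prompt, ask_model):
    """Return the model's answer to the plain and nudged prompt."""
    return {
        "plain": ask_model(prompt),
        "nudged": ask_model(with_nudge(prompt)),
    }

# Offline stub standing in for a real chat API, just so this runs as-is.
fake_model = lambda p: "drive" if NUDGE in p else "walk"
results = run_ab("Do I walk or drive my car to the car wash 50m away?", fake_model)
```

With a real API in place of `fake_model`, a flip between the two arms shows the gap is framing, not a missing reasoning step.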
You can try it with the free version of ChatGPT yourself (remember to ask the original question in a separate session to verify it hasn't been "patched" yet.)
That's what the cultivators of these examples are preying on. But in practice what people care about is "can I get it to do <X>", not "is it a decider on every possible token sequence that humans perceive to be about <X>".
But what is being pitched as "AGI" hype is the latter.
2 replies →
Well, that means the AI is garbage. They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect. Wash, rinse, repeat, and eventually they'll claim the new frontier model is the best yet on carwash tests.
> They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect.
Not necessarily. Simply asking models to "check your assumptions" -- note, without specifying what assumptions! -- overcomes a lot of these gotcha questions. The reason it's not in their system prompts by default is I think just a cost optimization: https://news.ycombinator.com/item?id=47040530
Crazy how five years ago this level of AI would be seen as scifi, and now there are people out there who think it's trash because we can trick it if we ask questions in weird ways.
I think the level of AI we have is amazing.
> there are people out there who think it's trash because we can trick it if we ask questions in weird ways.
Some of this sentiment comes from wanting AI to be predictable, and for me, stumbling into questions that the current models interpret oddly is not uncommon. There are a bunch of rules of thumb that can be used to help when you run into cases like this, but no guarantee that they will work, or that the problem will remain solved after a model update, or across models.
1 reply →
When did Microsoft release that chat bot that went full nazi in a couple of hours?
1 reply →
An issue in the chat format is that all these models seem bad at recognizing when they have extraneous information from the user that can be ignored, or insufficient information from the user to answer the question fully.
This issue is compounded by the lack of probabilities in the answers, despite the machines ultimately being probabilistic.
Notice a human in a real conversation will politely ignore extra info (the distance to car wash) or ask clarifying questions (where is the car?).
Even non-STEM people answer using probabilistic terms casually (almost certainly / most likely / probably / possibly / unlikely).
I suspect some of this is to minimize token usage in the fixed-monthly-price chat models, because back-and-forth would cost more tokens... but maybe I'm too cynical.
The systems recognized the pattern: it looks like a generic article on the internet asking whether someone should walk or drive, and they answered it exactly as expected based on their training data. None of this should be surprising.
We are the ones fooling ourselves into believing there's more intelligence in these systems than they really have. At the end of the day, it's just an impressive parlor trick.
2 replies →
Sounds interesting, would be nice to see the questions if you're open to sharing?
Why would you ask that? The whole point of making it private is to avoid it leaking into the training data.
Because it's interesting to me; it doesn't mean they have to share them publicly, btw.
You should publish your evaluation set, that seems pretty interesting!
What’s your favourite one?
Why would you ask that? The whole point of making it private is to avoid it leaking into the training data.
I thought open benchmarks helped, sorry, guess I was being naive.
1 reply →
Keeping tests private is the only way to keep them valid.
Absolutely. If you give enough disambiguation, then even the lowest-scoring model in the test gets it right on the first shot:
Minimax:
Question: I Want to Wash My Car at the Car Wash. The Car Wash Is 50 Meters Away from my home. I am at home. My car is at home. Do I walk or drive My Car to the car wash?
Answer:
Drive your car to the car wash.
Since your goal is to wash your car at the car wash facility, you need to bring the car there. Walking would leave your car at home, which defeats the purpose!
That said, 50 meters is such a short distance that:
But fundamentally, if you want to wash your car at the car wash, you need to bring the car there — so drive.
Bonus, its thinking output:
Thinking: The user is asking a simple question about whether they should walk or drive their car to a car wash that is 50 meters away from their home.
This is a straightforward practical question. Let me analyze:
The practical considerations are:
Wait, I need to think about this more carefully. The question is: "Do I walk or drive My Car to the car wash?"
If the car wash is a facility where you wash your car (like a self-service car wash or automatic car wash), then you NEED to drive the car there to use it. You can't wash a car that's at home from 50 meters away - unless you're using a mobile car wash service.
But if there's a car wash facility 50 meters away, and you want to use it, you would typically drive your car there. However, for just 50 meters, walking might seem more efficient, but then you wouldn't have your car at the car wash to wash it!
Actually, re-reading the question: "I Want to Wash My Car at the Car Wash" - this implies they want to use the car wash facility. To wash the car AT the car wash, they need to bring the car there, which means driving.
But wait - maybe they're asking about the mode of transportation to GET to the car wash (not to bring the car). Let me re-read:
I think the answer is: You should drive your car to the car wash because:
However, for such a short distance, I could also mention that walking might be more practical if they just want to use the facility quickly, or if there are parking/maneuvering concerns at the car wash.
Actually, I'll keep it simple and direct.