Comment by meetpateltech
5 months ago
When you ask: 'How many r's are in strawberry?'
Claude 3.7 Sonnet generates a fun, creative response: React code with a live preview in Artifacts.
Check out some examples:
[1]https://claude.ai/share/d565f5a8-136b-41a4-b365-bfb4f4400df5
[2]https://claude.ai/share/a817ac87-c98b-4ab0-8160-feefd7f798e8
A shame the underlying issue still persists:
> There is exactly 1 'r' in "blueberry" [0]
[0] https://claude.ai/share/9202007a-9d85-49e6-9883-a8d8305cd29f
This test has always been so stupid, since models work at the token level. Claude 3.5 already 5xs your frontend dev speed, but people still say "hurr durr it can't count strawberry" as if that's a useful problem.
The problem also comes down to LLMs being confidently wrong when they are wrong.
“Already 5xs”
Even AI marketing doesn’t claim this. Totally baseless claim given how many people report negative experiences trying to use AI.
Some people report negative experiences with any tool ever brought into existence.
This test isn't stupid. If it can't count the number of letters in a text, can you rely on it for more important calculations?
You can rely on it for anything that you can validate quickly. And it turns out there are a lot of problems whose solutions are trivial to validate but difficult to build.
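(A toy sketch of that asymmetry, with made-up names: checking a candidate answer can be a one-line scan even when producing it takes real work.)

    # Toy illustration: validating a solution is often far easier than building it.
    # Checking that a list is sorted is a one-line scan, while writing a correct,
    # efficient sort (or a UI, or a parser) is the hard part you'd hand to the model.
    def is_sorted(xs):
        return all(a <= b for a, b in zip(xs, xs[1:]))

    candidate = sorted([5, 2, 9, 1])   # imagine this came from an LLM
    assert is_sorted(candidate)        # trivial to validate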
Not on calculations that involve counting at a sub-token level. Otherwise, it depends.
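(For the curious, a minimal sketch of why sub-token counting is hard. This assumes the tiktoken library and its cl100k_base encoding; Claude's own tokenizer differs, but the idea is the same.)

    # Sketch: how a BPE tokenizer splits "strawberry" (requires: pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode_single_token_bytes(t) for t in tokens])
    # Prints something like [b'str', b'aw', b'berry'] -- the model sees a few
    # multi-character chunks, not individual letters, so there is no per-character
    # representation in which to count the r's.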
I'm guessing this is an easter egg, but this was a huge gripe I had with Artifacts, and I eventually disabled it (now impossible to disable, afaict): I'd ask a question completely unrelated to code, or clearly not want code as output, and I'd have to wait for it to write a program (which you can't really stop, afaict; stopping just ends the current artifact and then it starts a new one).
(Still, Claude Sonnet is my go-to and favorite model.)