Comment by ck2
2 months ago
What does the prompt "no thinking" actually imply to an LLM?
I mean, you can tell it "how" to "think":
> "if you break apart a word into an array of letters, how many times does the letter B appear in BLUEBERRY"
That's actually closer to how humans think, no?
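For concreteness, here is the literal procedure that prompt asks the model to emulate, as a minimal Python sketch (the function name and structure are mine, purely illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    # Break the word apart into an array of letters,
    # exactly as the prompt instructs.
    letters = list(word.upper())
    # Then count the matches one by one.
    return sum(1 for ch in letters if ch == letter.upper())

print(count_letter("BLUEBERRY", "B"))  # -> 2
```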
The problem lies in how an LLM attacks a problem: it should not be applying a dictionary to "blueberry", seeing blue-berry, and splitting that into two sub-problems to rejoin later.
But that's how it's meant to deal with HUGE tasks, so when it's applied to tiny tasks, it breaks.
And unless I am very mistaken, it's not even the breaking apart into sub-tasks that's the real problem; it's the re-assembly of the results (see the sketch below).
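To make that concrete, here is a toy model of the decompose-and-rejoin pattern being described. This is entirely hypothetical, a sketch of the suspected failure mode, not how any model actually computes:

```python
def count_by_decomposition(letter: str, parts: list[str]) -> int:
    # Hypothetical decompose-and-rejoin pipeline: count the letter
    # in each sub-problem, then re-assemble the results by summing.
    return sum(p.upper().count(letter.upper()) for p in parts)

# A clean partition rejoins correctly: the per-part counts just add up.
print(count_by_decomposition("b", ["blue", "berry"]))   # -> 2 (right)

# But if the sub-problems overlap at the seam and the re-assembly
# step fails to notice, the shared B gets double-counted:
print(count_by_decomposition("b", ["blueb", "berry"]))  # -> 3 (wrong)
```

When the split is a clean partition, plain addition cannot fail; an overshoot like the second call only appears when the rejoin step is sloppier than that.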
It's just the only way I know of to get GPT-5 not to emit any thinking traces into its context, or at least none of the user-facing ones.
With GPT-4.1 you don't have to include that part to get the same result, but that model is only available via the API now, AFAIK. I just want to see it spell the word without already having the word in its context to work from.