Comment by bhelkey
1 year ago
It's not a human. I imagine if you have a use case where counting characters is critical, it would be trivial to programmatically transform prompts into lists of letters.
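A rough sketch of that transformation in bash (the example prompt here is made up):

    # spell the prompt out one character per line before sending it to the model
    prompt="how many r's are in strawberry"
    fold -w1 <<< "$prompt"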
A token is roughly four characters [1], so a one-letter-per-token encoding would, among other probable regressions, shrink the effective context window roughly four-fold.
[1] https://help.openai.com/en/articles/4936856-what-are-tokens-...
This is the kind of task you'd just use a bash one-liner for, right? An LLM is simply the wrong tool for the job.
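For the classic "count the r's in strawberry" case, one such one-liner:

    # delete everything except 'r', then count the remaining characters
    tr -cd 'r' <<< "strawberry" | wc -c    # prints 3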