Comment by svachalek

1 year ago

No. In the model, tokens are just arbitrary numbers. But if you consider a sentence to be a sequence of words, you can say that LLMs are quite competent at reasoning about those sequences.

ChatGPT is able to spell the word "recognize" when asked.

So it is able to take a sequence of tokens like ["recogn", "ize"] and transform it into a sequence of tokens like [" R", " E", " C", " O", " G", " N", " I", " Z", " E"].
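
For anyone who wants to see this directly, here is a minimal sketch using OpenAI's tiktoken library. It assumes the cl100k_base encoding; the exact split into pieces like "recogn"/"ize" depends on which tokenizer a given model uses, so treat the comments as illustrative rather than exact.

```python
# Minimal sketch: the model sees integer token IDs, not letters.
# Assumes the cl100k_base encoding; exact splits vary by tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "recognize"
spelled = " R E C O G N I Z E"

# The whole word is covered by a small number of integer IDs.
ids = enc.encode(word)
print(ids)
print([enc.decode([t]) for t in ids])  # the sub-word pieces

# The spelled-out form typically tokenizes as one piece per " X",
# a completely different sequence of IDs for the "same" word.
spelled_ids = enc.encode(spelled)
print([enc.decode([t]) for t in spelled_ids])
```

The point of the demo: mapping one sequence of IDs to the other is a learned sequence-to-sequence transformation, not something the model can read off the characters, since the characters are never visible to it.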