Comment by ksynwa

1 year ago

> first introduced in the paper Large Language Models are Zero-Shot Reasoners in May 2022

What's a zero-shot reasoner? I googled it and all the results are this paper itself. There is a Wikipedia article on zero-shot learning, but I can't see how to apply that idea to LLMs.

It used to be that you had to give examples of solving similar problems to coax the LLM into solving the one you actually wanted, e.g. a prompt like `1 + 1 = 2 | 92 + 41 = 133 | 14 + 6 = 20 | 9 + 2 =` -- that would be 3-shot prompting: three worked examples before the real question.

With modern LLMs you still usually get a benefit from N-shot prompting, but you can now also do "0-shot", which is just asking the model the question you want answered with no worked examples at all.
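The difference is purely in how the prompt string is assembled. A minimal sketch (the helper name and example pairs are illustrative, not from the paper):

```python
from typing import Sequence, Tuple

def build_prompt(question: str, examples: Sequence[Tuple[str, str]] = ()) -> str:
    """Prepend N worked examples to a question.

    N examples -> an N-shot prompt; an empty sequence -> a 0-shot prompt.
    """
    lines = [f"{q} {a}" for q, a in examples]
    lines.append(question)
    return "\n".join(lines)

# Three solved problems, then the one we actually want answered (3-shot):
shots = [("1 + 1 =", "2"), ("92 + 41 =", "133"), ("14 + 6 =", "20")]
print(build_prompt("9 + 2 =", shots))

# 0-shot: just the bare question, no examples.
print(build_prompt("9 + 2 ="))
```

Either string would then be sent to the model as-is; the "shots" are nothing more than demonstrations pasted in front of the real question.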