Comment by mmooss

2 months ago

You don't trust it yet, like a new human assistant you might hire - will they be able to handle all the variables? Eventually, they earn your trust and you start offloading everything to their inbox.

No, not like a human assistant. A competent human will use logical reasoning, pick up non-digital signals like body language and audible cues, and know the limits of their own knowledge, so they are more likely to ask for missing input. Humans are also more predictable.

LLMs don’t learn. They’re static. You can try to fine-tune, or keep adding longer and longer context, but in the end you hit a wall.

  • You can provide them with a significant amount of guidance through prompting. The model itself won't "learn", but if given lessons in the prompt, which you can accumulate from mistakes, it can follow them. You will always hit a wall "in the end", but you can get pretty far — see the sketch below.
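
As a rough illustration of what "accumulating lessons in the prompt" can look like (a minimal sketch — the names `LessonBook`, `record_lesson`, and the stubbed `call_model` are made up here, not any particular library's API):

```python
# Sketch: the model never learns, but the prompt you wrap around it can.

class LessonBook:
    """Collects corrections from past mistakes and prepends them to every prompt."""

    def __init__(self, base_instructions: str):
        self.base_instructions = base_instructions
        self.lessons: list[str] = []

    def record_lesson(self, lesson: str) -> None:
        # Each time the assistant gets something wrong, distill the fix into a rule.
        self.lessons.append(lesson)

    def build_prompt(self, task: str) -> str:
        rules = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return (
            f"{self.base_instructions}\n\n"
            f"Lessons from past mistakes (follow these):\n{rules}\n\n"
            f"Task: {task}"
        )


def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM API you actually use.
    return f"(model response to {len(prompt)} chars of prompt)"


book = LessonBook("You are my scheduling assistant.")
book.record_lesson("Never book meetings before 9am; I missed one last week.")
book.record_lesson("Always state time zones explicitly.")
print(call_model(book.build_prompt("Schedule a call with the Berlin office.")))
```

The wall is still there, of course: the lesson list competes with everything else for context window, so at some point you have to prune or summarize it rather than keep appending.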