Comment by ErikBjare
2 months ago
You can provide them a significant amount of guidance through prompting. The model itself won't "learn", but if you give it lessons in the prompt, accumulated from its past mistakes, it can follow them. You will always hit a wall "in the end", but you can get pretty far!
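A minimal sketch of what this can look like in practice, assuming a hypothetical `complete()` wrapper around whatever LLM API you use; the file name and helper functions are illustrative, not something the comment prescribes:

```python
from pathlib import Path

LESSONS_FILE = Path("lessons.md")  # hypothetical location for accumulated lessons


def load_lessons() -> str:
    """Return previously recorded lessons, or an empty string if none exist yet."""
    return LESSONS_FILE.read_text() if LESSONS_FILE.exists() else ""


def add_lesson(lesson: str) -> None:
    """Append a lesson learned from a mistake so future prompts include it."""
    with LESSONS_FILE.open("a") as f:
        f.write(f"- {lesson}\n")


def build_prompt(task: str) -> str:
    """Prepend accumulated lessons to the task so the model can follow them."""
    lessons = load_lessons()
    preamble = f"Follow these lessons from past mistakes:\n{lessons}\n\n" if lessons else ""
    return preamble + task


# Example: record a lesson after spotting a mistake, then reuse it on the next task.
add_lesson("Always run the test suite before claiming a fix works.")
prompt = build_prompt("Fix the failing date-parsing test in utils.py.")
# response = complete(prompt)  # hypothetical LLM call
```

The model never updates its weights; the "learning" lives entirely in the lessons file you keep growing between runs.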