Comment by dragonwriter
2 days ago
This isn't receiving input, it's generating output competitive with models with task-specific training.
I'm guessing the iterative approach burns a lot of tokens, though that may not matter too much with 8B Llama as the LLM.