Comment by rldjbpin
2 months ago
An even more naive approach: club several classification requests into a single batched prompt. In practice this is not production-ready, since the LLM output does not always contain results for the same number of inputs (sometimes more than were provided!).
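A minimal sketch of that batching approach, with a guard for the count-mismatch problem the comment describes. `build_batch_prompt` and `parse_batch_response` are hypothetical helper names, and the actual LLM call is left out; this only shows packing the inputs and validating the output count.

```python
def build_batch_prompt(texts):
    """Pack several classification inputs into one numbered prompt."""
    lines = [
        "Classify each item as POSITIVE or NEGATIVE.",
        "Answer with exactly one label per line, in order.",
    ]
    for i, t in enumerate(texts, 1):
        lines.append(f"{i}. {t}")
    return "\n".join(lines)

def parse_batch_response(response, expected_n):
    """Reject responses whose label count doesn't match the input count,
    since the model may return more (or fewer) results than were sent."""
    labels = [ln.strip() for ln in response.splitlines() if ln.strip()]
    if len(labels) != expected_n:
        raise ValueError(f"expected {expected_n} labels, got {len(labels)}")
    return labels
```

On a mismatch you would typically retry the batch or fall back to one request per item.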