Comment by worik
15 hours ago
Yes.
This is the unfortunate thing about wrapping LLMs in API calls to provide services.
Unless you control the model absolutely (and perhaps even then), you can send a well-manicured prompt on Tuesday and get an answer - a block of text - and on Thursday, using the exact same prompt, get a different answer.
This is very hard to build good APIs around. If you do, expect rare corner-case errors that cannot be fixed.
Or reproduced.
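
A minimal sketch of what that looks like in practice, assuming an OpenAI-style chat-completions client; the model name, prompt, and seed below are illustrative only. Even with temperature pinned to 0 and a fixed seed, identical requests are not guaranteed to return byte-identical text, which is exactly what makes the downstream errors hard to reproduce.

```python
# Sketch only: assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment. Model, prompt, and seed are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

PROMPT = "Summarise the trade-offs of eventual consistency in two sentences."

def ask(prompt: str) -> str:
    """Send the same carefully manicured prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # minimise sampling variance
        seed=42,               # best-effort determinism only
    )
    return response.choices[0].message.content

tuesday_answer = ask(PROMPT)
thursday_answer = ask(PROMPT)

if tuesday_answer != thursday_answer:
    # This branch can fire in practice: same prompt, different block of text.
    print("Outputs differ between runs.")
```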