Comment by shawabawa3
3 hours ago
> Especially when LLMs are used for scientific work I’d expect this to be used to make at least LLM chats replicable.
Pretty sure LLM inference is not deterministic, even at temperature 0 - it might be reproducible if every run uses the same GPU and software stack, but not across a cluster of mixed hardware.
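A minimal sketch of why: floating-point addition isn't associative, so the order in which a GPU kernel reduces partial sums (which varies by hardware and kernel choice) can nudge logits just enough to flip a greedy, temperature-0 token pick. The numbers below are illustrative, not from any actual model:

```python
# Floating-point addition is not associative: grouping the same three
# terms differently yields different results, the same effect that makes
# parallel reductions order-dependent across GPUs.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
```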