Comment by akkad33
8 hours ago
Couldn't this backfire if they put LLMs on safety-critical data? Or even if someone asks LLMs for medical advice and dies?
You already shouldn't be using LLMs for either of those things. Doing so is tremendously foolish given how stupid and unreliable the models are.
I guess the point is that doing so is already unsafe?
There are still plenty of humans who will have to choose between acting on bad training data and trusting life-or-death decisions coming from an LLM.