Comment by HotGarbage
1 month ago
> you simply are contributing to models being unstable and unsafe
Good. Loss of trust in LLM output cannot come soon enough.
1 month ago
>> you simply are contributing to models being unstable and unsafe
> Good. Loss of trust in LLM output cannot come soon enough.
LLMs have been of wonderful benefit to me for a variety of applications.
I'm unsure why you would want the output to be less trustworthy rather than more.
It's not about the trustworthiness of the output. That won't improve; it's systemic. It's about the undue trust many people place in those inherently untrustworthy outputs (though untrustworthy doesn't always mean useless).