Comment by neuralkoi
1 month ago
> The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking.
If current LLMs are ever deployed in systems with access to the big red button, they WILL most definitely find a way to press it.
The US MIC is already planning to integrate fucking Grok into military systems. No comment.
Including classified systems. What could possibly go wrong?
The US is going to stop the Chinese by mass-producing illegal pornography?
fwiw, the same is true for humans, which is why there's a whole lot of process and red tape around that button. We know how to manage risk. We can choose to do that for LLM usage, too.
If instead we believe in fantasies of a single all-knowing machine god that is 100% correct at all times, then we really have only ourselves to blame. Might as well have spammed that button by hand.