Comment by vannevar

6 months ago

I agree that the dice analogy is an oversimplification. He actually touches on the problem earlier in the article, observing that "the paths generated by these mappings look a lot like strange attractors in dynamical systems". It isn't that the dice "conspire against you"; it's that the inputs you give the model are often intertwined, path-wise, with very negative outcomes: the LLM equivalent of a fine line between love and hate. Interacting with an AI about critical security infrastructure sits much closer to the 'attractor' of an LLM-generated hack than, say, discussing late 17th-century French poetry with it. The very utility of our interactions with AI is thus what makes those interactions potentially dangerous.
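
To make the attractor intuition concrete, here's a toy sketch (my own illustration, not from the article) using the logistic map, a textbook chaotic system: two inputs that differ by one part in a billion track each other at first, then end up nowhere near each other. That's the "fine line" in miniature: closeness of inputs says nothing about closeness of outcomes.

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map at r = 4 (a standard chaotic regime).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning the full path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # perturb the starting point by one part in a billion

# Early on the two runs are nearly indistinguishable...
print("gap after 5 steps: ", abs(a[5] - b[5]))
# ...but after enough iterations they bear no resemblance to each other.
print("gap after 50 steps:", abs(a[50] - b[50]))
```

Of course an LLM isn't literally a logistic map, but the comment's point is structural: if the mapping from prompts to outputs behaves anything like this, then "safe-looking input" is not a guarantee of a safe output, especially when the topic itself lives near a bad attractor.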