Comment by joshuajooste05
19 hours ago
Does anyone have any thoughts on privacy/safety regarding what he said about GPT memory.
I had heard of prompt injection already. But, this seems different, completely out of humans control. Like even when you consider web search functionality, he is actually right, more and more, users are losing control over context.
Is this dangerous atm? Do you think it will become more dangerous in the future when we chuck even more data into context?
Sort of. The thing is with agentic models, you are basically entering probability space where it can do real actions in the form of http requests if the statistical output leads it to it.
I've had Cursor/Claude try to call rm -rf on my entire User directory before.
The issue is that LLMs have no ability to organise their memory by importance. Especially as the context size gets larger.
So when they are using tools they will become more dangerous over time.