Comment by red75prime

17 days ago

> and let them run wild.

Yep, that's the most worrying part. For now, at least.

> The moment agents start sharing their embeddings

An embedding is just a model-dependent compressed representation of a context window. Sharing one is not that different from sharing compressed and encrypted text.
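To illustrate the "model-dependent" point, here is a minimal sketch (a toy stand-in, not a real LLM): each "model" is just a different fixed random projection, and the seed stands in for that model's learned weights. The same text embedded by two different "models" yields vectors with no shared geometry, so an embedding is only meaningful to the model that produced it.

```python
import random

def embed(text, seed, dim=8):
    """Toy 'model': a fixed random projection of byte counts.
    The seed plays the role of a model's learned weights (hypothetical)."""
    rng = random.Random(seed)
    proj = [[rng.uniform(-1, 1) for _ in range(256)] for _ in range(dim)]
    counts = [0] * 256
    for b in text.encode("utf-8"):
        counts[b] += 1
    return [sum(row[i] * counts[i] for i in range(256)) for row in proj]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Same text, two different "models": the resulting vectors differ,
# and their cosine similarity is some arbitrary value.
v_model_a = embed("share this context", seed=1)
v_model_b = embed("share this context", seed=2)
print(cosine(v_model_a, v_model_b))
```

Decoding such a vector back into text requires the producing model (or one trained to invert it), which is what makes it resemble encrypted rather than plain text.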

Sharing add-on networks (LLM adapters) that encapsulate functionality would be more worrying (for locally run models).

Previously, sharing compressed and encrypted text was something only humans did with each other. When autonomous intelligences start doing it, it could be a different matter.

What do you think the entire issue was with the supply-chain attacks in the skills moltbook was installing? Those skills were downloading rootkits to steal crypto.

  • It's relatively easy to analyze skill files. Shared chunks of neural networks (LLM adapters) can hide malicious behaviors better.
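The asymmetry can be sketched in a few lines (toy sizes, hypothetical names): a LoRA-style adapter ships only two low-rank factor matrices, and merging it is `W' = W + B @ A`. Unlike a plain-text skill file, the artifact being shared is raw floats with no readable "source" to audit.

```python
import random

def make_matrix(rows, cols, rng):
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_adapter(W, B, A):
    """Merge a low-rank (LoRA-style) update into base weights: W' = W + B @ A."""
    delta = matmul(B, A)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

rng = random.Random(0)
d, r = 6, 2                   # model width and adapter rank (toy sizes)
W = make_matrix(d, d, rng)    # base weights the recipient already has
B = make_matrix(d, r, rng)    # the adapter: just these two factor
A = make_matrix(r, d, rng)    # matrices of opaque floats
W_merged = apply_adapter(W, B, A)
# A skill file can be read and diffed; B and A can only be tested
# behaviorally after merging, which is why they hide behavior better.
```

The point of the sketch is only the shape of the artifact: auditing it means probing the merged model's behavior, not reading code.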