I'm wondering: can we use LLMs to semantically encrypt our emails, so that if I'm talking about my startup strategy, to someone snooping (or the NSA) it appears as if we're talking about recipes?
We're proposing semantic steganography that uses LLMs as matched encoder/decoder pairs, so that startup-strategy discussions appear as recipe exchanges. Unlike traditional cryptography, security comes from semantic complexity rather than mathematical hardness: the LLM maps between concept spaces (e.g., "fermentation time" ↔ "development cycles") using its world model. Both parties share a seed phrase that deterministically generates the same bidirectional mapping, eliminating key exchange over insecure channels. The core insight: natural language is already an encoder (concepts → symbols), so we're just adding a second semantic layer that looks like normal Layer-1 communication to observers. The main challenges are LLM non-determinism, which requires error correction, and the tradeoff between information density and plausibility. In effect, the approach exploits the LLM's semantic understanding to create a regenerable codebook rather than storing or transmitting one.
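The shared-seed idea can be sketched without an LLM at all: a minimal toy in which both parties regenerate the same bidirectional codebook from a seed phrase, then substitute terms. The term lists and function names here are illustrative assumptions; in the proposed system the LLM would generate the vocabulary and perform the mapping semantically rather than by literal string substitution.

```python
import random

# Hypothetical vocabularies; in the real proposal the LLM's world model
# would supply a much richer, context-sensitive concept mapping.
STARTUP_TERMS = ["development cycles", "runway", "pivot", "user acquisition"]
RECIPE_TERMS = ["fermentation time", "pantry stock", "recipe swap", "taste testing"]

def build_codebook(seed: str) -> dict:
    """Deterministically pair startup terms with recipe terms.

    Both parties run this with the same seed phrase and obtain the
    identical mapping, so no codebook is ever transmitted."""
    rng = random.Random(seed)  # same seed -> same shuffle on both ends
    covers = RECIPE_TERMS[:]
    rng.shuffle(covers)
    return dict(zip(STARTUP_TERMS, covers))

def encode(text: str, codebook: dict) -> str:
    """Replace sensitive terms with their cover-domain counterparts."""
    for plain, cover in codebook.items():
        text = text.replace(plain, cover)
    return text

def decode(text: str, codebook: dict) -> str:
    """Invert the mapping to recover the original message."""
    for plain, cover in codebook.items():
        text = text.replace(cover, plain)
    return text
```

Because `random.Random(seed)` is deterministic, `build_codebook` acts as the regenerable codebook: the seed phrase is the only secret. The toy is lossless by construction, which is exactly the property that LLM non-determinism threatens and that error correction would need to restore.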