Comment by ThrowawayR2

1 day ago

Humans are already investigating whether LLMs might work more efficiently if they operate directly on latent space representations for the entirety of the calculation: https://news.ycombinator.com/item?id=43744809. It doesn't seem unlikely that two LLM instances using the same underlying model could communicate directly in latent space representations and, from there, it's not much of a stretch to imagine that two LLMs with different underlying models could do the same, as long as some sort of conceptual mapping between the two models' latent spaces could be computed.
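
To make the "conceptual mapping" idea concrete, here's a minimal sketch of one possible form it could take: a small learned projection between the two models' hidden-state spaces. Everything here is hypothetical and illustrative (the dimensions, the `LatentBridge` name, the idea of training on paired hidden states and injecting the result in place of token embeddings); it's a sketch of the concept, not a known working recipe.

    import torch
    import torch.nn as nn

    # Hypothetical hidden sizes for two different models (illustrative only).
    DIM_A, DIM_B = 4096, 3072

    class LatentBridge(nn.Module):
        """Learned linear map from model A's latent space to model B's.

        Imagined as trained on paired hidden states (e.g., both models
        encoding the same text), so activations from A could be fed to B
        directly, skipping the decode-to-tokens / re-encode round trip.
        """
        def __init__(self, dim_a: int, dim_b: int):
            super().__init__()
            self.proj = nn.Linear(dim_a, dim_b)

        def forward(self, h_a: torch.Tensor) -> torch.Tensor:
            return self.proj(h_a)

    bridge = LatentBridge(DIM_A, DIM_B)

    # h_a: final-layer hidden states from model A for one sequence.
    h_a = torch.randn(1, 128, DIM_A)  # (batch, seq_len, dim_a)
    h_b = bridge(h_a)                 # now shaped for model B's latent space

    # In a real setup, h_b would be injected into model B in place of its
    # token embeddings rather than being decoded back to text first.

Whether a simple linear map is expressive enough is an open question; the point is just that the mapping is, in principle, a learnable object rather than something that has to exist by construction.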