Comment by x312

14 hours ago

This paper has a major issue that they are not surfacing: the activations can simply be correlated through a common latent. For example, both the original activation and the explanation could share a broad latent like "this is an adversarial scenario". That could make the reconstruction loss look good without showing that the explanation was the actual cause of the LLM's response.
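The confound is easy to demonstrate with a toy simulation (this is an illustration of the statistical point only, not the paper's actual setup; the latent, the coefficients, and the variable names are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical shared latent (e.g. "this is an adversarial scenario")
# that drives BOTH signals, with neither causing the other.
z = rng.normal(size=n)
activation = 2.0 * z + 0.1 * rng.normal(size=n)    # stand-in for the model activation
explanation = -1.5 * z + 0.1 * rng.normal(size=n)  # stand-in for the explanation embedding

# Linear "reconstruction" of the explanation from the activation.
w = (activation @ explanation) / (activation @ activation)
recon = w * activation
ss_res = np.sum((explanation - recon) ** 2)
ss_tot = np.sum((explanation - explanation.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"reconstruction R^2 = {r2:.3f}")  # high, despite zero causal link
```

A high R^2 here reflects only the shared latent, which is exactly why reconstruction loss alone can't establish that the explanation was the cause of the response.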

I find this rather disturbing. Anthropic has quite a habit of overclaiming on questionable research results when they surely know better. For example, their linked circuits blog post ("The Biology of LLMs") was released after these methods were known to have major credibility issues in the field (e.g., see this from DeepMind: https://www.lesswrong.com/posts/4uXCAJNuPKtKBsi28/negative-r...). Similarly, this new blog post is heavily based on another academic paper (LatentQA), where the correlation/causation issue is already known.

Shoddy methodology is whatever, but it feels like this has always been done intentionally, with the goal of humanizing LLMs or overhyping their similarities to biological entities. What is the agenda here?

Didn't they show proper causation by changing "rabbit" to "mouse" in the rhyming example, with the generation changing accordingly?