Circuit Tracing: Revealing Computational Graphs in Language Models (Anthropic)

3 days ago (transformer-circuits.pub)

> Deep learning models produce their outputs using a series of transformations distributed across many computational units (artificial “neurons”). The field of mechanistic interpretability seeks to describe these transformations in human-understandable language.

This is the central reason I find techniques like genetic programming so compelling. You get interpretability by default. The second-order effect of this seems to be that you can generalize using substantially less training data. The humans developing the model can look inside the box and set breakpoints, inspect memory, snapshot/restore state, follow the rabbit, etc.

The biggest tradeoff here is that the search space over computer programs tends to be substantially more rugged. You can't use math tricks to cheat the computation. You have to run every damn program end-to-end and measure the performance of each directly. However, you can execute linear program tapes very, very quickly on modern x86 CPUs. You can search through a billion programs with a high degree of statistical certainty in a few minutes. I believe we are at a point where some of the ideas from the 20th century are viable again.
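To make that concrete, here is a minimal sketch of what I mean by a linear program tape: a flat array of fixed-width instructions interpreted over a small register file. The opcodes and layout are illustrative only, not my exact setup.

```c
/* Minimal linear program tape: a flat array of fixed-width instructions
   interpreted over a small register file. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 8

enum { OP_ADD, OP_SUB, OP_MUL, OP_XOR, NUM_OPS };

typedef struct { uint8_t op, dst, a, b; } Instr;

/* One linear pass over the tape: no allocation, no pointer chasing,
   so a modern CPU can evaluate millions of short tapes per second. */
static void run_tape(const Instr *tape, int len, uint64_t *regs) {
    for (int i = 0; i < len; i++) {
        uint64_t a = regs[tape[i].a % NUM_REGS];
        uint64_t b = regs[tape[i].b % NUM_REGS];
        uint64_t r;
        switch (tape[i].op % NUM_OPS) {
            case OP_ADD: r = a + b; break;
            case OP_SUB: r = a - b; break;
            case OP_MUL: r = a * b; break;
            default:     r = a ^ b; break;
        }
        regs[tape[i].dst % NUM_REGS] = r;
    }
}

int main(void) {
    Instr tape[] = {
        { OP_ADD, 2, 0, 1 },  /* r2 = r0 + r1 */
        { OP_MUL, 3, 2, 2 },  /* r3 = r2 * r2 */
    };
    uint64_t regs[NUM_REGS] = { 3, 4 };  /* program input in r0, r1 */
    run_tape(tape, 2, regs);
    printf("r3 = %llu\n", (unsigned long long)regs[3]);  /* prints r3 = 49 */
    return 0;
}
```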

  • For a complex enough problem (like next word prediction on arbitrary text), I really have my doubts that any such method will result in an "interpretable" solution. More likely you end up with a giant stack of indecipherable if statements, gotos, and random multiplications. And that's assuming no matrices are involved; introduce those and you've just got a non-differentiable, non-parallelizable neural network.

  • Interpretability is nice, I guess, but what if the underlying latent model for a real-world system is not human-understandable? If a system provides interpretability by default, does it fail to build a model for a system that can't be interpreted? Personally, I think the answer is that it still builds a model, but produces an interpretation that can't be understood by people.

  • Where do the features come from, feature engineering? That's the method that failed the bitter lesson. Why would you use genetic programming when you can do gradient descent?

    • > Where do the features come from, feature engineering? That's the method that failed the bitter lesson.

      That would be the whole point of genetic programming. You don't have to do feature engineering at all.

      Genetic programming is a more robust interpretation of the bitter lesson than the transformer architecture and DNNs. There are fewer clever tricks you need to apply to get the job done. It is more about unmitigated raw compute than anything else out there.

      In my experiments, there is absolutely zero transformation, feature engineering, normalization, tokenization, etc. It is literally:

      1. Copy input byte sequence to program data region

      2. Execute program

      3. Copy output byte sequence from program data region

      Half of this problem is about how you search for the programs. The other half is about how you measure them. There isn't much else to worry about beyond how many CPUs you have on hand. A rough sketch of the loop follows.
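      (The byte-machine, the output convention, and the random search in this sketch are placeholders to show the shape of the loop, not my actual implementation.)

      ```c
      /* Rough sketch: raw bytes in, raw bytes out, no feature engineering.
         Fitness = how closely the output bytes match a target. */
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define DATA_SIZE 64
      #define TAPE_LEN  32

      /* Each instruction reads two cells of the data region and writes one. */
      typedef struct { uint8_t op, dst, a, b; } Instr;

      static void execute(const Instr *tape, uint8_t *data) {
          for (int i = 0; i < TAPE_LEN; i++) {
              uint8_t a = data[tape[i].a % DATA_SIZE];
              uint8_t b = data[tape[i].b % DATA_SIZE];
              uint8_t r;
              switch (tape[i].op % 4) {
                  case 0:  r = (uint8_t)(a + b);        break;
                  case 1:  r = a ^ b;                   break;
                  case 2:  r = a & b;                   break;
                  default: r = (uint8_t)(a << (b & 7)); break;
              }
              data[tape[i].dst % DATA_SIZE] = r;
          }
      }

      /* 1. copy input bytes in, 2. execute the program, 3. read output bytes out.
         Output convention here: the last out_len bytes of the data region. */
      static int evaluate(const Instr *tape,
                          const uint8_t *input,  size_t in_len,
                          const uint8_t *target, size_t out_len) {
          uint8_t data[DATA_SIZE] = { 0 };
          memcpy(data, input, in_len);
          execute(tape, data);
          int score = 0;
          for (size_t i = 0; i < out_len; i++)
              if (data[DATA_SIZE - out_len + i] == target[i]) score++;
          return score;
      }

      int main(void) {
          const uint8_t input[]  = "hello world";
          const uint8_t target[] = "HELLO WORLD";
          srand(1);
          int best = -1;
          /* Dumb random search standing in for a real GP loop
             (mutation, crossover, selection). */
          for (int trial = 0; trial < 100000; trial++) {
              Instr cand[TAPE_LEN];
              for (int i = 0; i < TAPE_LEN; i++) {
                  cand[i].op  = (uint8_t)rand();
                  cand[i].dst = (uint8_t)rand();
                  cand[i].a   = (uint8_t)rand();
                  cand[i].b   = (uint8_t)rand();
              }
              int s = evaluate(cand, input, sizeof input - 1,
                               target, sizeof target - 1);
              if (s > best) best = s;
          }
          printf("best score: %d / %zu\n", best, sizeof target - 1);
          return 0;
      }
      ```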

      13 replies →

  • Seems interesting; do you have any place to read more?

    I took a look at DEAP, but it seems to be more tree-based, whereas you seem to be talking about "linear program tapes", which I know nothing about.

    Also, it seems like the examples of genetic programming I find online are mostly discrete optimization, sometimes policy search. The only classification problem that DEAP gave as an example was spambase, which uses pre-computed features (word frequencies) as the dataset (rather than the raw emails).

    Can you describe linear program tapes a bit? And give an example of a machine learning task, closer to where DNNs are used, that would be amenable to GP without feature engineering?

  • I’m also intrigued by genetic programming. One of the benefits, if I understand correctly, is that it is more resistant to getting stuck in local maxima.

    • Overparameterized neural networks don't have that problem because there are no local maxima; there are many roads to Rome.

Is the PDF available somewhere?

  • The Transformer Circuits Thread is an HTML-only journal. Of course you can convert the content to PDF, but then you lose the interactive elements.

    • That's kind of worrying for longevity. I was hoping some export would be available by default, even without the interactions. I don't care that much about the interactions; I care more about the content. Web technologies come and go, and are subject to change and breakage.

      2 replies →