Were RNNs all we needed?

1 year ago (arxiv.org)

> Transformers required ~2.5x more training steps to achieve comparable performance, overfitting eventually.

> RNNs are particularly suitable for sequence modelling settings such as those involving time series, natural language processing, and other sequential tasks where context from previous steps informs the current prediction.

I would like to draw an analogy to digital signal processing. If you think of the recurrent-style architectures as IIR filters and feedforward-only architectures as FIR filters, you will likely find many parallels.

The most obvious to me being that IIR filters typically require far fewer elements to produce the same response as an equivalent FIR filter. Granted, the FIR filter is often easier to implement/control/measure in practical terms (fixed-point arithmetic hardware == ML architectures that can run on GPUs).

I don't think we get to the exponential scary part of AI without some fundamentally recurrent architecture. I think things like LSTM are kind of an in-between hack in this DSP analogy - You could look at it as FIR with dynamic coefficients. Neuromorphic approaches seem like the best long term bet to me in terms of efficiency.

  • Again from signal processing: depending on the position of the poles in the z-transformed transfer function, an IIR filter has a narrow stability region that is typically carefully designed for. Otherwise IIR filters either exponentially decay to zero or exponentially grow to infinity. RNN cells like LSTM are "decaying filters" with non-linear gates introduced to stop decay and to "remember" things.

    FIR filters are way simpler to design and can capture memory without hacks.
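
    To make the stability point concrete, here is a minimal sketch (an illustration, not from the paper): a one-pole IIR filter next to an FIR moving average. The IIR impulse response decays or blows up depending on whether the pole magnitude a is below or above 1, while the FIR response is always finite.

      # One-pole IIR: y[n] = a * y[n-1] + x[n]; stable only for |a| < 1.
      def iir_one_pole(x, a):
          y, out = 0.0, []
          for xn in x:
              y = a * y + xn
              out.append(round(y, 4))
          return out

      # FIR moving average with k taps: always stable, but it needs k coefficients
      # to approximate a memory the IIR gets from a single one.
      def fir_moving_average(x, k):
          return [round(sum(x[max(0, n - k + 1):n + 1]) / k, 4) for n in range(len(x))]

      impulse = [1.0] + [0.0] * 11
      print(iir_one_pole(impulse, 0.9))      # geometric decay (stable pole)
      print(iir_one_pole(impulse, 1.1))      # unbounded growth (unstable pole)
      print(fir_moving_average(impulse, 4))  # finite response, zero after 4 taps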

  • ELI5: Could you explain what neuromorphic approaches mean, and how they contribute to AI/AGI? My first impression as a layperson (probably wrong) is that this approach resembles ideas from the book "The Society of Mind", where the system isn't just simulating neurons but involves a variety of methods and interactions across "agents" or sub-systems.

    • Neuromorphic mostly just means "like how the brain works". It encompasses a variety of software & hardware approaches.

      The most compelling and obvious one to me is hardware purpose-built to simulate spiking neural networks. In the happy case, SNNs are extremely efficient, consuming basically no energy. You could fool yourself into thinking we can just do this on the CPU due to the sparsity of activations. I think there is even a set of problems this works well for. But in the unhappy cases, SNNs are impossible to simulate on existing hardware. Neuronal avalanches follow a power-law distribution, and meaningfully large ones would require very clever techniques to simulate with any reasonable fidelity.
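
      For a sense of what "spiking" means computationally, here is a toy leaky integrate-and-fire neuron (a sketch of the general idea only, not any particular neuromorphic platform): the membrane potential leaks toward zero, integrates its input, and only emits an event when it crosses a threshold, which is why activity can be so sparse in the happy case.

        def lif_neuron(inputs, leak=0.9, threshold=1.0):
            """Toy leaky integrate-and-fire neuron; returns a 0/1 spike train."""
            v, spikes = 0.0, []
            for s in inputs:
                v = leak * v + s              # leak toward zero, integrate input
                if v >= threshold:            # fire and reset on threshold crossing
                    spikes.append(1)
                    v = 0.0
                else:
                    spikes.append(0)
            return spikes

        print(lif_neuron([0, 0.6, 0, 0.7, 0, 0, 0.2]))  # mostly silence, one spike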

      > the system isn't just simulating neurons but involves a variety of methods and interactions across "agents" or sub-systems.

      I think the line between "neuron" and "agent" starts to get blurry in this arena.

      7 replies →

    • Neuromorphic has been an ongoing failure (for general purpose processors or even AI accelerators), ever since Carver Mead introduced (and quickly abandoned) them nearly half a century ago. Bill Dally (Nvidia chief scientist) concurs: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From: Hardware for Deep Learning, HotChips 2023 keynote.

      We have NO idea how the brain produces intelligence, and as long as that doesn't change, "neuromorphic" is merely a marketing term, like Neurotypical, Neurodivergent, Neurodiverse, Neuroethics, Neuroeconomics, Neuromarketing, Neurolaw, Neurosecurity, Neurotheology, Neuro-Linguistic Programming: the "neuro-" prefix is suggesting a deep scientific insight to fool the audience. There is no hope of us cracking the question of how the human brain produces high-level intelligence in the next decade or so.

      Neuromorphic does work for some special purpose applications.

      2 replies →

    • I love this book and have it sitting on my shelf right now! Read it when I was a kid and was amazed at the ideas in it, nowadays it's clearer to me that the author only had a grasp of how things like that would be built but still cool nonetheless.

      I would highly recommend it to people who love a good "near future" scifi book.

      1 reply →

  • > I don’t think we get to the exponential scary part of AI without some fundamentally recurrent architecture

    I’ve been thinking the same for a while, but I’m starting to wonder if giant context windows are good enough to get us there. I think recurrency is more neuromorphic, and possibly important in the longer run, but maybe not required for SI.

    I’m also just a layman with just a surface level understanding of these things, so I may be completely ignorant and wrong.

  • Can we even implement IIR filters with good performance and scaling at large scale on current architectures like GPUs?

I find the entire field lacking when it comes to long-horizon problems. Our current, widely used solution is to scale, but we're nowhere near achieving the horizon scales even small mammal brains can handle. Our models can have trillions of parameters, yet a mouse brain would still outperform them on long-horizon tasks and efficiency. It's something small, simple, and elegant—an incredible search algorithm that not only finds near-optimal routes but also continuously learns on a fixed computational budget.

I'm honestly a bit envious of future engineers who will be tackling these kinds of problems with a 100-line Jupyter notebook on a laptop years from now. If we discovered the right method or algorithm for these long-horizon problems, a 2B-parameter model might even outperform current models on everything except short, extreme reasoning problems.

The only solution I've ever considered for this is expanding a model's dimensionality over time, rather than focusing on perfect weights. The higher dimensionality you can provide to a model, the greater its theoretical storage capacity. This could resemble a two-layer model—one layer acting as a superposition of multiple ideal points, and the other layer knowing how to use them.

When you think about the loss landscape, imagine it with many minima for a given task. If we could create a method that navigates these minima by reconfiguring the model when needed, we could theoretically develop a single model with near-infinite local minima—and therefore, higher-dimensional memory. This may sound wild, but consider the fact that the human brain potentially creates and disconnects thousands of new connections in a single day. Could it be that these connections steer our internal loss landscape between different minima we need throughout the day?

  • Yes... The field lacks the HOLY GRAIL (long-horizon problems). But we don't need a mouse-brain to sort spam emails. The Hail Mary 2B+ parameter models and above are still niche uses of these algorithms (too heavy to run practically). There is plenty of room for clever and small models running on limited hardware and datasets to solve useful problems and nothing more.

    Models that change size as needed have been experimented with, but they are either too inefficient or difficult to optimize at a limited power budget. However, I agree that they are likely what is needed if we want to continue to scale upward in size.

    I suspect the real bottleneck is a breakthrough in training itself. Backpropagation on a simple loss is too simplistic to optimize our current models perfectly, let alone future larger ones. But there is no guarantee a better alternative exists, which may impose a hard limit on current ML approaches.

    • I'm not proposing a change in model size; rather, I'm suggesting a higher dimensionality within the current structure. There’s an interesting paper on LLM explainability, which found that individual neurons often represent a superposition of various data elements.

      What I’m advocating is a substantial increase in this aspect—keeping model size the same while expanding dimensionality. The "curse of dimensionality" illustrates how a modest increase in dimensions leads to a significantly larger volume.

      While I agree that backpropagation isn’t a complete solution, it’s ultimately just a stochastic search method. The key point here is that expanding the dimensionality of a model’s space is likely the only viable long-term direction. To achieve this, backpropagation needs to work within an increasingly multidimensional space.

      A useful analogy is training a small model on random versus structured data. With structured data, we can learn an extensive amount, but with random data, we hit a hard limit imposed by the network. Why is that?

It's a curse and a blessing that discussion of topics happens in so many different places. I found this comment on Twitter/X interesting: https://x.com/fchollet/status/1841902521717293273

"Interesting work on reviving RNNs. https://arxiv.org/abs/2410.01201 -- in general the fact that there are many recent architectures coming from different directions that roughly match Transformers is proof that architectures aren't fundamentally important in the curve-fitting paradigm (aka deep learning)

Curve-fitting is about embedding a dataset on a curve. The critical factor is the dataset, not the specific hard-coded bells and whistles that constrain the curve's shape. As long as your curve is sufficiently expressive all architectures will converge to the same performance in the large-data regime."

  • > The critical factor is the dataset, not the specific hard-coded bells and whistles that constrain the curve's shape

    I have almost the opposite take. We've had a lot of datasets for ages, but all the progress in the last decade has come from advances in how curves are architected and fit to the dataset (including applying more computing power).

    Maybe there's some theoretical sense in which older models could have solved newer problems just as well if only we applied 1000000x the computing power, so the new models are 'just' an optimisation, but that's like dismissing the importance of complexity analysis in algorithm design, and thus insisting that bogosort and quicksort are equivalent.

    When you start layering in normalisation techniques to minimise overfitting, and especially once you start thinking about more agentic architectures (eg. Deep Q Learning, some of the search space design going into OpenAI's o1), then I don't think the just-an-optimisation perspective can hold much water at all - more computing power simply couldn't solve those problems with older architectures.

    • I see what you are saying, and I made a similar comment.

      However, it's still an interesting observation that many architectures can arrive at the same performance (even though the training requirements are different).

      Naively, you wouldn't expect e.g. 'x -> a * x + b' and 'x -> a * sin x + b' to fit the same data about equally well. But that's an observation from low dimensions. It seems that once you add enough parameters, the exact model doesn't matter too much for practical expressiveness.
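
      To make that concrete in low dimensions, here is a quick sketch (illustration only, not from the paper): fit both families by least squares to data generated from a sine and compare the errors.

        import torch

        torch.manual_seed(0)
        x = torch.linspace(-3, 3, 200)
        y = 2.0 * torch.sin(x) + 0.5 + 0.1 * torch.randn(200)    # data from a sine

        def fit_mse(features):
            # Least-squares fit of y ~ features @ w; return the mean squared error.
            w = torch.linalg.lstsq(features, y.unsqueeze(1)).solution
            return ((features @ w).squeeze(1) - y).pow(2).mean().item()

        ones = torch.ones_like(x)
        print(fit_mse(torch.stack([x, ones], dim=1)))             # a * x + b: poor fit
        print(fit_mse(torch.stack([torch.sin(x), ones], dim=1)))  # a * sin x + b: good fit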

      I'm faintly reminded of the Church-Turing Thesis; the differences between different computing architectures are both 'real' but also 'just an optimisation'.

      > When you start layering in normalisation techniques to minimise overfitting, and especially once you start thinking about more agentic architectures (eg. Deep Q Learning, some of the search space design going into OpenAI's o1), then I don't think the just-an-optimisation perspective can hold much water at all - more computing power simply couldn't solve those problems with older architectures.

      You are right, these normalisation techniques help you economise on training data, not just on compute. Some of these techniques can be done independent of the model, eg augmenting your training data with noise. But some others are very model dependent.

      I'm not sure how the 'agentic' approaches fit here.

      10 replies →

    • Wait! We certainly did NOT have huge datasets (like the current internet) for ages. Not even decades. I've seen a lecture by an MIT professor (which I cannot find now) where he asserted categorically that the advances in AI are mostly because of the huge data that we have now and didn't before. And that was an old video.

      1 reply →

    • I think by far the biggest advances are related to compute power. The amount of processing needed to run training algorithms on the amounts of data needed for the latest models was just not possible even five years ago, and definitely not ten years ago.

      I'm sure there are optimizations from the model shape as well, but I don't think that running the best algorithms we have today with hardware from five-ten years ago would have worked in any reasonable amount of time/money.

      2 replies →

    • Isn't the transformer the bogosort and the proposed modified RNN the quicksort here (175x faster training for sequences of length 512)?

  • > "As long as your curve is sufficiently expressive all architectures will converge to the same performance in the large-data regime."

    I haven't fully ingested the paper yet, but it looks like it's focused more on compute optimization than the size of the dataset:

    > ... and (2) are fully parallelizable during training (175x faster for a sequence of length 512

    Even if many types of architectures converge to the same loss over time, finding the one that converges the fastest is quite valuable given the cost of running GPU's at scale.

    • > Even if many types of architectures converge to the same loss over time, finding the one that converges the fastest is quite valuable given the cost of running GPU's at scale.

      This! Not just fastest but with the lowest resources in total.

      Fully connected neural networks are universal function approximators. Technically we don't need anything but an FNN, but the memory requirements and speed would be abysmal, far beyond the realm of practicality.

      10 replies →

    • > finding the one that converges the fastest is quite valuable given the cost of running GPU's at scale

      Not to him, he runs the ARC challenge. He wants a new approach entirely. Something capable of few-shot learning of out-of-distribution patterns... somehow.

  • One big thing that bells and whistles do is limit the training space.

    For example, when CNNs took over computer vision, that wasn't because they were doing something dense networks couldn't do. It was because they removed a lot of edges that didn't really matter, allowing us to spend our training budget on deeper networks. Similarly, transformers are great because they allow us to train gigantic networks somewhat efficiently. And this paper finds that if we make RNNs a lot faster to train, they are actually pretty good. Training speed and efficiency remain the big bottleneck, not the actual expressiveness of the architecture.
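
    A rough way to see how much the convolutional prior "removes edges" (a sketch, assuming a 224x224 RGB input): compare a 3x3 conv layer with the dense layer that would map the same input to the same-sized output.

      import torch.nn as nn

      # 3x3 conv, 3 -> 64 channels: the same small filter is reused at every position.
      conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
      print(sum(p.numel() for p in conv.parameters()))    # 1,792 parameters

      # A dense layer from a 224*224*3 input to a 224*224*64 output would need
      # (224*224*3) * (224*224*64) weights: roughly 4.8e11 parameters.
      print((224 * 224 * 3) * (224 * 224 * 64))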

    • This is true. This is why, in many of our experiments, we find that a new algorithm, KESieve, finds the planes much faster than traditional deep learning training approaches. The premise: a neural network builds planes that separate the data and adjusts these planes through an iterative learning process. What if we could find a non-iterative method that draws these same planes? We have been trying this, and so far we have been able to replace most network layers using this approach. We haven't tried it for transformers yet, though.

      Some links if interested:

      [1] https://gpt3experiments.substack.com/p/understanding-neural-...

      [2] https://gpt3experiments.substack.com/p/building-a-vector-dat...

  • I figured this was pretty obvious given that MLPs are universal function approximators. A giant MLP could achieve the same results as a transformer. The problem is the scale - we can’t train a big enough MLP. Transformers are a performance optimization, and that’s why they’re useful.

  • What it will come down to is computational efficiencies. We don’t want to retrain once a month - we want to retrain continuously. We don’t want one agent talking to 5 LLMs. We want thousands of LLMs all working in concert.

  • I remember one of the initial transformer people saying in an interview that they didn't think this was the "one true architecture" but a lot of the performance came from people rallying around it and pushing in the one direction.

    On the other hand, while "As long as your curve is sufficiently expressive all architectures will converge to the same performance in the large-data regime." is true, a sufficiently expressive mechanism may not be computationally or memory efficient. As both are constraints on what you can actually build, it's not whether the architecture can produce the result, but whether a feasible/practical instantiation of that architecture can produce the result.

    • > I remember one of the initial transformer people saying in an interview that they didn't think this was the "one true architecture" but a lot of the performance came from people rallying around it and pushing in the one direction.

      You may be referring to Aidan Gomez (CEO of Cohere and contributor to the transformer architecture) during his Machine Learning Street Talk podcast interview. I agree, if as much attention had been put towards the RNN during the initial transformer hype, we may have very well seen these advancements earlier.

  • > is proof that architectures aren't fundamentally important in the curve-fitting paradigm (aka deep learning)

    (Somewhat) fun and (somewhat) related fact: there's a whole cottage industry of "is all you need" papers https://arxiv.org/search/?query=%22is+all+you+need%22&search...

  • Architecture matters because while deep learning can conceivably fit a curve with a single, huge layer (in theory... Universal approximation theorem), the amount of compute and data needed to get there is prohibitive. Having a good architecture means the theoretical possibility of deep learning finding the right N dimensional curve becomes a practical reality.

    Another thing about the architecture is we inherently bias it with the way we structure the data. For instance, take a dataset of (car) traffic patterns. If you only track the date as a feature, you miss that some events follow not just the day-of-year pattern but also holiday patterns. You could learn this with deep learning with enough data, but if we bake it into the dataset, you can build a model on it _much_ simpler and faster.

    So, architecture matters. Data/feature representation matters.
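
    As a toy version of the traffic example (hypothetical column names, sketch only): baking the holiday signal into the features takes a couple of lines, versus hoping the model rediscovers the holiday calendar from raw dates.

      import pandas as pd

      # Hypothetical traffic data with only a raw date column.
      df = pd.DataFrame({
          "date": pd.to_datetime(["2024-07-03", "2024-07-04", "2024-07-05"]),
          "traffic": [980, 410, 1020],
      })
      holidays = {pd.Timestamp("2024-07-04")}        # known holiday calendar

      df["day_of_year"] = df["date"].dt.dayofyear    # day-of-year (seasonal) pattern
      df["is_holiday"] = df["date"].isin(holidays)   # holiday pattern, baked in
      print(df)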

    • > can conceivably fit a curve with a single, huge layer

      I think you need a hidden layer. I’ve never seen a universal approximation theorem for a single layer network.

      2 replies →

  • Well, you also need an approach to 'curve fitting' where it's actually computationally feasible to fit the curve. The approach of mixing layers of matrix multiplication with a simple non-linearity like max(0, x) (ReLU) works really well for that. Earlier on they tried more complicated non-linearities, like sigmoids; and if you tried an arbitrary curve that's not split into layers at all, you would probably find it harder still. (But I'm fairly sure you might end up in the same place in the end, just after lots more computation spent on fitting.)

  • If you've spent some time actually training networks, you know that's not true; that's why batch norm, dropout, and regularization are so successful. They don't increase the network's capacity (parameter count), but they increase its ability to learn.

  • Well, yes, but actually no, I guess: the transformers' benefit at the time was that they were more stable while learning, enabling larger and larger networks and datasets to be learned.

  • I mean, transformer-based LLMs are RNNs, just really really really big ones with very wide inputs that maintain large amounts of context.

    • No. An RNN has an arbitrarily-long path from old inputs to new outputs, even if in practice it can't exploit that path. Transformers have fixed-size input windows.

      8 replies →

  • [flagged]

  • Chollet is just a philosopher. He also thinks that Keras and TensorFlow are important, when nobody uses those. And he published false data about their usage.

  • Most LLMs aren't even using a "curve" yet at all, right? All they're using is a series of linear equations because the model weights are a simple multiply and add (i.e. basic NN Perceptron). Sure there's a squashing function on the output to keep it in a range from 0 to 1 but that's done BECAUSE we're just adding up stuff.

    I think future NNs will probably be more adaptive than this, with some perceptrons perhaps using sine wave functions or other kinds of math functions, beyond just the linear "y=mx+b".

    It's astounding that we DID get the emergent intelligence from just doing this "curve fitting" onto "lines" rather than actual "curves".

    • The "squashing function" necessarily is nonlinear in multilayer nueral networks. A single layer of a neural network can be quite simply written a weight matrix, times an input vector, equalling an output vector, like so

      Ax = y

      Adding another layer is just multiplying a different set of weights times the output of the first, so

      B(Ax)= y

      If you remember your linear algebra course, you might see the problem: that can be simplified

      (BA)x = y

      Cx = y

      Completely indistinguishable from a single layer, thus only capable of modeling linear relationships.

      To prevent this collapse, a non-linear function must be introduced between each layer.
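
      A numerical version of that argument (a small sketch): two stacked linear layers are exactly reproducible by one layer, and inserting a ReLU between them is precisely what breaks the equivalence.

        import torch

        torch.manual_seed(0)
        A, B = torch.randn(5, 8), torch.randn(3, 5)   # two layers' weight matrices
        x = torch.randn(8)

        two_layers = B @ (A @ x)
        collapsed = (B @ A) @ x                       # single equivalent layer C = BA
        print(torch.allclose(two_layers, collapsed, atol=1e-5))   # True

        with_relu = B @ torch.relu(A @ x)             # nonlinearity blocks the collapse
        print(torch.allclose(with_relu, collapsed, atol=1e-5))    # False in general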

      30 replies →

    • > It's astounding that we DID get the emergent intelligence from just doing this "curve fitting" onto "lines" rather than actual "curves".

      In Ye Olden days (the 90's) we used to approximate non-linear models using splines or separate-slopes models, fit by hand. They were still linear, but with the right choice of splines you could approximate a non-linear model to whatever degree of accuracy you wanted.

      Neural networks “just” do this automatically, and faster.
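
      In that spirit, here is a tiny sketch of the hand-built version: an ordinary linear least-squares fit on a fixed hinge (piecewise-linear spline) basis, which is essentially what a one-hidden-layer ReLU network learns to place on its own.

        import torch

        x = torch.linspace(-3, 3, 200)
        y = torch.sin(x)                                  # a non-linear target

        knots = torch.linspace(-3, 3, 8)
        # Hinge basis max(0, x - k) at each knot, plus an intercept column.
        # The model is still linear in its coefficients.
        basis = torch.cat([torch.clamp(x.unsqueeze(1) - knots, min=0.0),
                           torch.ones(200, 1)], dim=1)

        coef = torch.linalg.lstsq(basis, y.unsqueeze(1)).solution
        print(((basis @ coef).squeeze(1) - y).abs().max())  # small error over [-3, 3]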

      2 replies →

My feeling is that the answer is "no", in the sense that these RNNs wouldn't be able to universally replace Transformers in LLMs, even though they might be good enough in some cases and beat them in others.

Here's why.

A user of an LLM might give the model some long text and then say "Translate this into German please". A Transformer can look back at its whole history. But what is an RNN to do? While the length of its context is unlimited, the amount of information the model retains about it is bounded by whatever is in its hidden state at any given time.

Relevant: https://arxiv.org/abs/2402.01032

  • > the amount of information the model retains about it is bounded by whatever is in its hidden state

    This is no different than a transformer, which, after all, is bound by a finite state, just organized in a different manner.

    • > This is no different than a transformer, which, after all, is bound by a finite state, just organized in a different manner.

      It's not just a matter of organizing things differently. Suppose your network dimension and sequence length are both X.

      Then your memory usage (per layer) will be O(X^2), while your training update cost will be O(X^3). That's for both Transformers and RNNs.

      However, at the end of the sequence, a Transformer layer can look back see O(X^2) numbers, while an RNN can only see O(X) numbers.

      7 replies →

  • That problem has plagued RNNs since the 90s: there's an information precision problem (how many bits do you need older states to carry), a decay problem (the oldest information is the weakest) and a mixing problem (it tends to mix/sum representations).

  • The counterargument here is that you can just scale the size of the hidden state sufficiently such that it can hold compressed representations of whatever-length sequence you like. Ultimately, what I care about is whether RNNs could compete with transformers if FLOPs are held constant—something TFA doesn't really investigate.

    • Well, that's what the Transformer already does... One problem with the scaling you're describing is that there would be a massive amount of redundant information stored in hidden activations while training the RNN. The hidden state at each time step t in the sequence would need to contain all info that (i) could be useful for predicting the token at time t and (ii) could be useful for predicting tokens at times >t. (i) is obvious, and (ii) holds since all information about the past is transferred to future predictions through the current hidden state. In principle, Transformers can avoid storing redundant info in multiple hidden states at the cost of having to maintain and access (via attention) a larger hidden state at test/eval time.

      3 replies →

  • >> A user of an LLM might give the model some long text and then say "Translate this into German please". A Transformer can look back at its whole history.

    Which isn't necessary if you say "translate the following to German" instead. Then all it needs is to remember the task at hand and a much smaller amount of recent input. Well, that and the ability to output in parallel with processing input.

    • It's necessary for arbitrary information processing if you can forget and have no way to "unforget".

      A model can decide to forget something that turns out to be important for some future prediction. A human can go back and re-read/listen etc. A transformer is always re-reading, but an RNN can't and is fucked.

      5 replies →

    • Also, a lightweight network could do a first pass to identify tasks, instructions, constraints etc, and then a second pass could use the RNN.

      Consider the flood fill algorithm or union-find algorithm, which feels magical upon first exposure.

      https://en.wikipedia.org/wiki/Hoshen%E2%80%93Kopelman_algori...

      Having 2 passes can enable so much more than a single pass.

      Another alternative could be to have a first pass make notes in a separate buffer while parsing the input. The bandwidth of the note taking and reading can be much much lower than that required for fetching the billions of parameters.

I remember that, the way I understood it, Transformers solved two major "issues" of RNNs that enabled the later boom: Vanishing gradients limiting the context (and model?) size and difficulty in parallelisation limiting the size of the training data.

Do we have solutions for these two problems now?

  • Transformers can also fetch at any moment any previous information that becomes useful.

    RNNs are constantly updating and overwriting their memory. It means they need to be able to predict what is going to be useful in order to store it for later.

    This is a massive advantage for Transformers in interactive use cases like in ChatGPT. You give it context and ask questions in multiple turns. Which part of the context was important for a given question only becomes known later in the token sequence.

    To be more precise, I should say it's an advantage of Attention-based models, because there are also hybrid models successfully mixing both approaches, like Jamba.

    • You could theoretically run the input twice, allowing the model to correlate later tokens with previous ones. It would fix the problem with not knowing what information to retain. A more complicated approach would train the RNN to request replaying some earlier data when needed.

      A great thing about RNNs is they can easily fork the state and generate trees, it would be possible to backtrack and work on combinatorial search problems.

      Also easier to cache demonstrations for free in the initial state, a model that has seen lots of data is not using more memory than a model starting from scratch.
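
      A sketch of the state-forking point (using a stock GRU cell as a stand-in for whichever recurrent model you actually use): the whole "memory" is one fixed-size tensor, so branching a search tree is just cloning that tensor and continuing each branch independently.

        import torch
        import torch.nn as nn

        cell = nn.GRUCell(input_size=16, hidden_size=32)
        h = torch.zeros(1, 32)                 # state before any input

        for _ in range(5):                     # consume a shared prefix
            h = cell(torch.randn(1, 16), h)

        # Fork: each branch copies the fixed-size state and diverges from here.
        h_a, h_b = h.clone(), h.clone()
        h_a = cell(torch.randn(1, 16), h_a)
        h_b = cell(torch.randn(1, 16), h_b)
        # A Transformer would instead have to duplicate (or share and extend)
        # the whole KV cache for every branch of the tree.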

      2 replies →

  • Vanishing (or exploding) gradients affected all deep architectures, not just RNNs. They were solved by LSTMs first proposed in 1997. See:

    https://www.semanticscholar.org/paper/Long-Short-Term-Memory...

    I find it interesting that this knowledge seems to be all but forgotten now. Back in the day, ca. 2014, LSTMs were all the rage, e.g. see:

    https://karpathy.github.io/2015/05/21/rnn-effectiveness/

    https://colah.github.io/posts/2015-08-Understanding-LSTMs/

    • > They were solved by LSTMs first proposed in 1997.

      I see this stuff everywhere online and it's often taught this way so I don't blame folks for repeating it, but I think it's likely promulgated by folks who don't train LSTMs with long contexts.

      LSTMs do add something like a "skip-connection" (before that term was a thing) which helps deal with the catastrophic vanishing gradients you get from e.g. Jordan RNNs right from the jump.

      However (!), while this stops us from seeing vanishing gradients after e.g. 10s or 100s of time-steps, once you get into multiple 1000s of tokens, the wheels start falling off. I saw this in my own research: training on amino acid sequences of length 3,000 led to a huge amount of instability. It was only after tokenizing the amino acid sequences (which was uncommon at the time), which got us down to ~1,500 timesteps on average, that we started seeing stable losses in training. Check out the ablation at [0].

      You can think of ResNets by analogy. ResNets didn't "solve" vanishing gradients, there's a practical limit of the depth of networks, but it did go a long way towards dealing with it.

      EDIT: I wanted to add, while I was trying to troubleshoot this for myself, it was super hard to find evidence online of why I was seeing instability. Everything pertaining to "vanishing gradients" and LSTMs were blog posts and pre-prints which just merrily repeated "LSTMs solve the problem of vanishing gradients". That made it hard for me, a junior PhD at the time, to suss out the fact that LSTMs do demonstrably and reliably suffer from vanishing gradients at longer contexts.

      [0] https://academic.oup.com/bioinformatics/article/38/16/3958/6...

      4 replies →

    • Agreed, Ilya Sutskever himself has spent a long time with lstm and published papers like this one while working at Google. http://proceedings.mlr.press/v37/jozefowicz15.pdf

      Recent comments from him have said that any architecture can achieve transformer accuracy and recall, but we have devoted energy to refining transformers, due to the early successes.

    • LSTM and GRU did not quite solve the issue, but they made it less bad. Overall, recurrent units are notoriously prone to vanishing and exploding gradients.

      I don't want to downplay the value of these models. Some people seem to be under the perception that transformers replaced or made them obsolete, which is far from the truth.

  • From my (admittedly loose) reading of the paper, this paper particularly targets parallelization and fast training, not "vanishing gradients." However, by simplifying the recurrent units, they managed to improve both!

    This is very clever and very interesting. The paper repeatedly calls it a "decade-old architecture," but in practice it's still used massively, thanks to its simplicity in adapting to different domains. Framing it as a "competitor" to transformers is also not entirely fair, as transformers and RNNs are not mutually exclusive, and there are many methods that merge them.

    Improvement in RNNs is an improvement in a lot of other surprising places. A very interesting read.

I strongly enjoy the simplicity of their "minGRU" architecture. It's basically just:

  import torch
  import torch.nn as nn

  class MinGRU(nn.Module):
    def __init__(self, token_size, hidden_size):
      super().__init__()
      # Both projections depend only on the current token, never on the hidden state.
      self.token_to_proposal = nn.Linear(token_size, hidden_size)
      self.token_to_mix_factors = nn.Linear(token_size, hidden_size)

    def forward(self, previous_hidden_state, current_token):
      proposed_hidden_state = self.token_to_proposal(current_token)
      mix_factors = torch.sigmoid(self.token_to_mix_factors(current_token))
      # Elementwise lerp: (1 - mix_factors) * proposed + mix_factors * previous.
      return torch.lerp(proposed_hidden_state, previous_hidden_state, mix_factors)

And since the proposed hidden states and mix factors for each layer are both only dependent on the current token, you can compute all of them in parallel if you know the whole sequence ahead of time (like during training), and then combine them in linear time using parallel scan.
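
To sketch why the whole-sequence case parallelizes (an illustration only, not the paper's actual implementation, which reportedly works in log-space as discussed below): the update above is a linear recurrence h_t = mix_t * h_{t-1} + (1 - mix_t) * proposed_t, i.e. h_t = a_t * h_{t-1} + b_t, and a linear recurrence has a closed form built from cumulative products and sums, with no step-by-step Python loop required.

  import torch

  def scan_linear_recurrence(a, b, h0):
      # h_t = a_t * h_{t-1} + b_t for t = 1..T, with a and b of shape [T, H].
      # Closed form: h_t = a_cum_t * h0 + a_cum_t * sum_{i<=t} b_i / a_cum_i,
      # where a_cum_t = prod_{i<=t} a_i; cumprod and cumsum both admit parallel scans.
      a_cum = torch.cumprod(a, dim=0)
      return a_cum * h0 + a_cum * torch.cumsum(b / a_cum, dim=0)

  # Sanity check against the sequential loop.
  T, H = 6, 4
  a = torch.rand(T, H) * 0.45 + 0.5            # gate-like values in (0.5, 0.95)
  b, h0 = torch.randn(T, H), torch.randn(H)
  hs, h = [], h0
  for t in range(T):
      h = a[t] * h + b[t]
      hs.append(h)
  print(torch.allclose(scan_linear_recurrence(a, b, h0), torch.stack(hs), atol=1e-5))

  # The b / a_cum division underflows quickly when the a's are sigmoid outputs over
  # long sequences, which is presumably part of why the paper works in log-space.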

The fact that this is competitive with transformers and state-space models in their small-scale experiments is gratifying to the "best PRs are the ones that delete code" side of me. That said, we won't know for sure if this is a capital-B Breakthrough until someone tries scaling it up to parameter and data counts comparable to SOTA models.

One detail I found really interesting is that they seem to do all their calculations in log-space, according to the Appendix. They say it's for numerical stability, which is curious to me—I'm not sure I have a good intuition for why running everything in log-space makes the model more stable. Is it because they removed the tanh from the output, making it possible for values to explode if calculations are done in linear space?

EDIT: Another thought—it's kind of fascinating that this sort of sequence modeling works at all. It's like if I gave you all the pages of a book individually torn out and in a random order, and asked you to try to make a vector representation for each page as well as instructions for how to mix that vector with the vector representing all previous pages — except you have zero knowledge of those previous pages. Then, I take all your page vectors, sequentially mix them together in-order, and grade you based on how good of a whole-book summary the final vector represents. Wild stuff.

FURTHER EDIT: Yet another thought—right now, they're just using two dense linear layers to transform the token into the proposed hidden state and the lerp mix factors. I'm curious what would happen if you made those transforms MLPs instead of singular linear layers.

  • This architecture, on the surface, seems to preclude the basic function of recognizing sequences of tokens. At the very least, it seems like it should suffer from something like the pumping lemma: if [the ][cat ][is ][black ] results in the output getting close to a certain vector, [the ][cat ][is ][black ][the ][cat ][is ][black ][the ][cat ][is ][black ] should get even closer to that vector and nowhere close to a "why did you just repeat the same sentence three times" vector? Without non-linear mixing between input token and hidden state, there will be a lot of linear similarities between similar token sequences...

    • Counterpoint: the hidden state at the beginning of ([the][cat][is][black]) x 3 is (probably) initialized to all zeros, but after seeing those first 4 tokens, it will not be all zeros. Thus, going into the second repetition of the sentence, the model has a different initial hidden state, and should exhibit different behavior. I think this makes it possible for the model to learn to recognize repeated sequences and avoid your proposed pitfall.

      1 reply →

  • I don't think it's a capital-B Breakthrough, but recurrent networks are everywhere, and a simplification that improves training and performance clears the stage to build complexity back up again to even higher heights.

  • Log space is important if the token probabilities span a large range of values (powers). There is a reason that maximum likelihood fitting is always performed with log likelihoods.

I made a RNN for a college project because I was interested in obsolete historical technology and I thought I needed to seize the opportunity while it lasted, because once I was out of school, I'd never hear about neural networks ever again.

Mine worked, but it was very simple and dog slow, running on my old laptop. Nothing was ever going to run fast on that thing, but I remember my RNN being substantially slower than a feed-forward network would have been.

I was so confident that this was dead technology -- an academic curiosity from the 1980s and 1990s. It was bizarre to see how quickly that changed.

  • I feel old. I did my master's thesis on RNNs for learning dynamic systems, e.g. for control purposes (quite a novelty at the time, around 2000). We wrote the backprop in C++ and ran it overnight. Yes, it was slow as hell with the tiny gradients. The network architectures were e.g. 5 or 10 neurons in a single hidden layer. NNs were a tiny subject that you were lucky to find courses in. Then I closed my eyes for two seconds and looked at the subject again in 2015. Wow.

To their credit, the authors (Y. Bengio among them) end the paper with the question, not suggesting they know the answer. These models are very small even by academic standards so any finding would not necessarily extend to current LLM scales. The main conclusion is that RNN class networks can be trained as efficiently as modern alternatives but the resulting performance is only competitive at small scale.

  • >> These models are very small even by academic standards so any finding would not necessarily extend to current LLM scales.

    Emphasis on not necessarily.

    >> The main conclusion is that RNN class networks can be trained as efficiently as modern alternatives but the resulting performance is only competitive at small scale.

    Shouldn't the conclusion be "the resulting competitive performance has only been confirmed at small scale"?

    • yes, that is clearer indeed. However S4 and Mamba class models have also performed well at small scale and started lagging with larger models and larger context sizes, or at particular tasks.

The model in the paper isn't a "real" RNN due to making it parallelizable, for the same reasons described in https://arxiv.org/abs/2404.08819 , and hence is theoretically less powerful than a "real" RNN (it struggles at some classes of problems that RNNs traditionally excel at). On the other hand, https://arxiv.org/abs/2405.04517 contains a "real" RNN component, which demonstrates a significant improvement on the kind of state-tracking problems that transformers struggle with.

  • These are real RNNs, they still depend upon the prior hidden state, it’s just that the gating does not. The basic RNN equation can be parallelized with parallel prefix scan algorithms.

I haven’t gone through the paper in detail yet but maybe someone can answer. If you remove the hidden state from an rnn as they say they’ve done, what’s left? An mlp predicting from a single token?

  • They didn't remove the hidden state entirely, they just removed it from the input, forget and update gates. I haven't digested the paper either, but I think that in the case of a GRU this means that the hidden state update masking (z_t and r_t in the paper's formulas) only depends on the new input, not the input plus the prior hidden state.

  • It doesn't completely remove it, it removes certain dependencies on it so that it can be computed by parallel scan, there is still a hidden state. It bears some similarity to what was done with Mamba.

  • I only had a quick look, but it looks like they tweaked the state update so the model can be run with parallel scan instead of having to do it sequentially.

  • The trick is to make sure the recursive dependency stays linear, that's how you enable parallel training.

Everyone wants to use less compute to fit more in, but (obviously?) the solution will be to use more compute and fit less. Attention isn't (topologically) attentive enough. All these RNN-lite approaches are doomed, beyond saving costs, they're going to get cooked by some other arch—even more expensive than transformers.

  • Would you mind expanding upon your thesis? If that compute and all those parameters aren't "fitting" the training examples, what is it that the model is learning, and how should that be analyzed?

    • I think there are two distinct areas. One is the building of the representations, which is achieved by fitting. The other area is loosely defined as "computing" which is some kind of searching for a path through representation space. All of that is wrapped in a translation layer that can turn those representations into stuff we humans can understand and interact with. All of that is achieved to some extent by current transformer architectures, but I guess some believe that they are not quite as effective at the "computation/search" stage.

      2 replies →

In 2016 & 2017 my team at Capital One built several >1B parameter models combining LSTMs with a few other tricks.

We were able to build generators that could replicate any dataset they were trained on, and would produce unique deviations, but match the statistical underpinnings of the original datasets.

https://medium.com/capital-one-tech/why-you-dont-necessarily...

We built several text generators for bots that similarly had very good results. The introduction of the transformer improved the speed and reduced the training / data requirements, but honestly the change in accuracy was minimal.

I still find it remarkable how we need such an extreme amount of electrical energy to power large modern AI models.

Compare with one human brain. Far more sophisticated, even beyond our knowledge. What does it take to power it for a day? Some vegetables and rice. Still fine for a while if you supply pure junk food -- it'll still perform.

Clearly we have a long, long way to go in terms of the energy efficiency of AI approaches. Our so-called neural nets clearly don't resemble the energy efficiency of actual biological neurons.

  • Food is extremely dense in energy. 1 food calorie is about 1.1 Watt-hours. A hamburger is about 490 Wh. An AI model requires 0.047 kWh = 47 Wh to generate 1000 text responses.[1] If an LLM could convert hamburgers to energy, it could generate over 10000 prompt completions on a single hamburger.

    Based on my own experience, I would struggle to generate that much text without fries and a drink.

    [1] https://www.theverge.com/24066646/ai-electricity-energy-watt...
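
    Spelling out the arithmetic with the figures quoted above:

      hamburger_wh = 490            # ~445 kcal at ~1.1 Wh per food calorie
      wh_per_1000_responses = 47    # 0.047 kWh per 1000 text responses

      print(hamburger_wh / wh_per_1000_responses * 1000)   # ~10,400 completions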

  • It's even less! A lot of those vegetables and rice go into powering your heart, muscles, organs, etc. and only a fraction is used for the brain.

    Maybe the future of AI is in organic neurons?

  • This is more likely to be a hardware issue than an algorithms issue. The brain physically is a neural network, as opposed to a software simulation of one.

It’d be nice to see more of how this compares to Mamba. Looks like, in performance, they’re not leagues apart and it’s just a different architecture, not necessarily better or worse?

  • Look at the memory consumption diagram on page 6. It looks like you're basically getting the same running time for less memory usage.

The only strength of transformers is that they can run once for each token and they can pass to themselves intermediate state as they solve your problems. They have to conceal it in tokens that look to humans like a part of the response.

It's obvious why the newest toy from OpenAI can solve problems better mostly by just being allowed to "talk to itself" for a moment before starting the answer that the human sees.

Given that, modern incarnation of RNN can be vastly cheaper than transformers provided that they can be trained.

Convolutional neural networks get more visual understanding by "reusing" their capacity across the area of the image. RNNs and transformers can get a better understanding of a given problem by "reusing" their capacity to learn and infer across time (really, across steps of an iterative process).

When it comes to the transformer architecture, the attention is a red herring. It's just a more or less arbitrary way to partition the network so it can be parallelized. The only bit of potential magic is the "shortcut" links between non-adjacent layers that help propagate learning back through many layers.

Basically the optimal network is deep and dense (every neuron connects to all neurons in all preceding layers) and is run in some form of recurrence.

But we don't have enough compute to train that. So we need to arbitrarily sever some connections so the whole thing is easier to parallelize. It really doesn't matter which, unless we do it in some obviously stupid way.

Actual inventive magic part of LLMs possibly happens in token and positional encoders.

R == Recurrent

In theory, the answer to the question should be "yes": they are Turing complete.

The real question is about how to train them, and the paper is about that.

  • Why aren't AI researchers automating the search for efficient architectures?

    • There has been some work, but the problem is that it's such a massive search space. Philosophically speaking, if you look at how humans came into existence, you could make an argument that the process of evolution from basic lifeforms can be represented as one giant compute step per minute across all of earth, where genetic selection happens and computation proceeds to the next minute. That's a fuckload of compute.

      In more practical terms, you would imagine that an advanced model contains some semblance of a CPU to be able to truly reason. Given that CPUs can be all NAND gates (which take 2 neurons to represent), and are structured in a recurrent way, you fundamentally have to rethink how to train such a network, because backprop obviously won't work to capture things like binary decision points.

      6 replies →

    • Program synthesis is a generalization of this. I’m not sure that many ML researchers have thought about the connections yet.

    • The search space is far too wide, difficult to parameterize, and there is a wide gap between effective and ineffective architectures - i.e., a very small change can make a network effectively DOA.

      1 reply →

  • What are you saying is Turing-complete?

I finally got around to reading this. Nice paper, but it fails to address a key question about RNNs:

Can RNNs be as good as Transformers at recalling information from previous tokens in a sequence?

Transformers excel at recalling info, likely because they keep all previous context around in an ever-growing KV cache.

Unless proponents of RNNs conclusively demonstrate that RNNs can recall info from previous context at least as well as Transformers, I'll stick with the latter.

Yes, all machine learning can be interpreted in terms of approximating the partition function.

This is obvious when one considers the connections between Transformers, RNNs, Hopfield networks and the Ising model, a model from statistical mechanics which is solved by calculating the partition function.

This interpretation provides us with some very powerful tools that are commonplace in math and physics but which are not talked about in CS & ML.

I'm working on a startup http://traceoid.ai which takes this exact view. Our approach enables faster training and inference, interpretability and also scalable energy-based models, the Holy Grail of machine learning.

Join the discord https://discord.com/invite/mr9TAhpyBW or follow me on twitter https://twitter.com/adamnemecek1

The name of the paper is a play on the title of the paper that spawned the Transformer architecture, which itself is a reference to the song "All You Need Is Love" by the Beatles. https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

This is such an interesting paper. Sadly they don't have big models; I'd like to see a model trained on TinyStories or even C4, since it should be faster than the transformer variant, and see how it compares.

RNNs always had better scaling law curves than transformers.

BPTT was their problem

Note Yoshua Bengio in the author list. This shouldn't be taken lightly.

  • And this is where science breaks down.

    • Not really, because

      1) Yoshua's reputation would take a hit if this paper were bullshit, so he has extrinsic motivation to make it good. 2) Yoshua has enough experience to know what is going on in the field; you don't have to ask whether he forgot about a certain architecture or the work of a certain research group that would contradict his findings -- if such work exists and is credible, it is very likely to be discussed in the paper. 3) The paper addresses something a leader in the field thinks is important enough to work on, else he wouldn't be involved.

      Also note, the poster said the paper shouldn't be taken lightly. That doesn't mean we need to take it blindly. It only means we cannot dismiss it out of hand, if we have a different view we would need substantive arguments to defend our view.

      I've overturned the field leader several times in science, but that's only because I acknowledged what they got right and that they were indeed the person who got it right.

      5 replies →

Excited to see more people working on RNNs but wish their citations were better.

In 2016 my team from Salesforce Research published our work on the Quasi-Recurrent Neural Network[1] (QRNN). The QRNN variants we describe are near identical (minGRU) or highly similar (minLSTM) to the work here.

The QRNN was used, many years ago now, in the first version of Baidu's speech recognition system (Deep Voice [6]) and as part of Google's handwriting recognition system in Gboard[5] (2019).

Even if there are expressivity trade-offs when using parallelizable RNNs they've shown historically they can work well and are low resource and incredibly fast. Very few of the possibilities regarding distillation, hardware optimization, etc, have been explored.

Even if you need "exact" recall, various works have shown that even a single layer of attention with a parallelizable RNN can yield strong results. Distillation down to such a model is quite promising.

Other recent fast RNN variants such as the RWKV, S4, Mamba et al. include citations to QRNN (2016) and SRU (2017) for a richer history + better context.

The SRU work has also had additions in recent years (SRU++), doing well in speech recognition and LM tasks where they found similar speed benefits over Transformers.

I note this primarily because the more data points there are, especially strongly relevant ones, the better positioned the research is. A number of the "new" findings in this paper have been explored previously - and they certainly do show promise! Better citations make sure we're asking new questions with new insights (with all the benefit of the additional research from ~8 years ago) rather than missing that earlier work.

[1] QRNN paper: https://arxiv.org/abs/1611.01576

[2] SRU paper: https://arxiv.org/abs/1709.02755

[3]: SRU++ for speech recognition: https://arxiv.org/abs/2110.05571

[4]: SRU++ for language modeling: https://arxiv.org/abs/2102.12459

[5]: https://research.google/blog/rnn-based-handwriting-recogniti...

[6]: https://arxiv.org/abs/1702.07825

Yes, and it’s hardly surprising, since the Chinese room thought experiment is completely wrong; that is in fact exactly how you learn something.

We really need a [preprint] flag for unreviewed papers.

  • IMHO reviews are almost indistinguishable from noise at the AI conferences I'm familiar with these days anyway, so I don't see much of a value add.

    • Sad state of affairs, people are incentivized to get more papers and citations at all costs, and quality be damned.

      An AI Winter is not a great idea, but an AI Autumn may be beneficial.

      Just have no major AI conferences for ‘25, perhaps only accept really high tier literature reviews.

"Was all along a scheme by Google to sell more tensor processing units that didn't run RNNs well?"

Betteridge's law of headlines?

  • For paper titles, the law is that the answer is always "yes"

    • Not always, I think?

      Opinions probably differ, for example, on John Backus's paper "Can programming be liberated from the Von Neumann style?" Many fans of functional programming would say the answer is yes, but Backus himself expressed less enthusiasm in interviews later in his life.

      I think the important point, though, is that academic papers and newspaper articles are not the same, and titles in the form of questions function differently in the two domains. Journalists tend to use titles like these to dissemble and sensationalize. When academics use these kinds of titles for peer-reviewed articles, it's because they really are asking an honest question. Backus was doing it in his paper. The authors of this paper are doing the same. They end the paper by re-iterating the question before launching into a discussion of the limitations that prevent them from reaching any firm conclusions on the answer to this question.

    • More like "we aren't sure, but we have good reasons not to exclude the possibility"

Guys, I’m gonna stop this before it gets out of hand: All we need is love and a shit ton of compute.

Everything else is just details.

To me this is further evidence that these LLMs only learn to speak English, and that there is no reasoning at all in them. If you can simplify the architecture a lot and obtain the same results, when we know how complex the brain is, that suggests something much simpler than reasoning is going on.

  • Every LLM expert on the planet agrees LLMs are doing "reasoning". No one says they have feelings or qualia, but we all know there's definitely genuinely artificial reasoning happening.

    What LLMs have shown both Neuroscience and Computer Science is that reasoning is a mechanical process (or can be simulated by mechanical processes) and is not purely associated only with consciousness.