Comment by bob1029
10 hours ago
I've been playing around with interpreted variants of brainfuck for genetic programming experiments. The intended audience of the language is the evolutionary algorithm, not a human. The central goals are to minimize the size of the search space while still providing enough expressivity to avoid the Turing tarpit scenario (i.e., where we need an infeasible number of cycles to calculate a result).
I've recently found that moving from a linear memory model to a stack-based model yields a dramatic improvement in performance. The program tape is still linear, but the memory is a stack interface. It seems the search space is made prohibitively large by pointer-based memory access. A stack-based model makes it much easier to splice arbitrary segments of programs together and get meaningful outcomes. Crossover of linear program tapes does not seem practical without constraining the memory in some way like this.
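To make the idea concrete, here's a minimal sketch of what such a setup could look like. The opcodes (`^`, `d`, `v`, etc.), the bracket-tolerance rule, and the crossover scheme are all my own assumptions for illustration, not the commenter's actual design:

```python
import random

# Hypothetical stack-based Brainfuck-style VM (opcode names are my own):
#   ^ push 0    + inc top    - dec top    d dup top    v pop
#   [ / ] loop while top of stack is nonzero    . emit top to output
def run(program, max_steps=10_000):
    stack, out = [0], []
    # Precompute matching brackets; unmatched brackets are treated as
    # no-ops, so arbitrary crossover products still execute.
    jumps, opens = {}, []
    for i, op in enumerate(program):
        if op == '[':
            opens.append(i)
        elif op == ']' and opens:
            j = opens.pop()
            jumps[i], jumps[j] = j, i
    pc = steps = 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        if op == '^': stack.append(0)
        elif op == '+': stack[-1] = (stack[-1] + 1) % 256
        elif op == '-': stack[-1] = (stack[-1] - 1) % 256
        elif op == 'd': stack.append(stack[-1])
        elif op == 'v' and len(stack) > 1: stack.pop()  # never empty the stack
        elif op == '.': out.append(stack[-1])
        elif op == '[' and pc in jumps and stack[-1] == 0: pc = jumps[pc]
        elif op == ']' and pc in jumps and stack[-1] != 0: pc = jumps[pc]
        pc += 1
        steps += 1
    return out

def crossover(a, b, rng):
    # One-point crossover on linear tapes. Because all memory access is
    # relative to the stack top (no absolute pointers) and stray brackets
    # are tolerated, any cut point yields a program that runs to completion.
    i, j = rng.randrange(len(a) + 1), rng.randrange(len(b) + 1)
    return a[:i] + b[j:]

print(run('+++.'))   # increment the initial cell three times, emit it
```

The point of the sketch is the last property: since every instruction addresses the top of the stack rather than an absolute cell, splicing two tapes at random points can't produce dangling pointer references, which is one plausible reading of why crossover behaves so much better here than on a linear memory model.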
Hey! Have you come across the recent(ish) paper from Google researchers about self-replicators? In one of their experiments they used a self-modifying (metaprogrammable) variant of Brainfuck that I've found very interesting for EAs. I haven't fully replicated their findings, as I've been experimenting with better ways to observe the evolution progress, but it might be interesting for your work as well.