Comment by gabriel666smith

2 days ago

I've just added the model generation code! I hope that's helpful.

1. Yes, that's exactly right: it counts occurrences. If "Hacker" appeared 2 places before "News" multiple times, the counts would reflect that.

Later, when generating, these counts are turned into probabilities by normalising (dividing each count by the total count for that seed word and distance).
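Roughly, the counting-and-normalising step looks something like this. It's a simplified sketch, not the actual script; the distance list and function names here are just illustrative:

```python
from collections import defaultdict

FIB_DISTANCES = [1, 2, 3, 5, 8]  # illustrative; the real script's list may differ

def build_forward_model(words):
    # counts[(seed, d)][next_word] = times next_word appeared d positions after seed
    counts = defaultdict(lambda: defaultdict(int))
    for i, seed in enumerate(words):
        for d in FIB_DISTANCES:
            if i + d < len(words):
                counts[(seed, d)][words[i + d]] += 1
    # normalise each table of counts into a probability distribution
    model = {}
    for key, table in counts.items():
        total = sum(table.values())
        model[key] = {w: c / total for w, c in table.items()}
    return model

words = "show hn hacker news is a social news site".split()
model = build_forward_model(words)
print(model[("hacker", 2)])  # P_forward(B | "hacker") at distance 2 -> {'is': 1.0}
```

The backward model would be built the same way, just counting the word that appears fib_distance *behind* each position instead.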

2. So I think this part is a Fibonacci-structured Markov-like model (not a neural network, as far as I can tell).

3.

> When you say a word is only chosen if it's probable in both the forward and backward direction, what does that mean?

This is potentially the key part.

When generating, the script does this:

- Forward model: “Given seed word A, what words appear fib_distance ahead?” → gives P_forward(B | A)
- Backward model: “Given candidate word B, what words appear fib_distance behind?” → gives P_backward(A | B)

It then checks both directions.

If word B is predicted in both the forward and backward direction, it multiplies the two probabilities together. If a word only shows up in the forward direction but never appears in the backward training data (or vice versa), it gets discarded.

It’s a kind of bidirectional filter to avoid predictions that only hold in one direction.
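In sketch form, the filter looks something like this. Again, this is simplified; the model shapes and function names are just illustrative, not the script's actual structure:

```python
def score_candidates(seed, fib_distance, forward_model, backward_model):
    # P_forward(B | seed): candidates seen fib_distance ahead of seed
    forward = forward_model.get((seed, fib_distance), {})
    scores = {}
    for candidate, p_fwd in forward.items():
        # P_backward(seed | candidate): how often seed was seen
        # fib_distance behind this candidate in the training data
        p_bwd = backward_model.get((candidate, fib_distance), {}).get(seed, 0.0)
        if p_bwd > 0.0:
            # the candidate holds in both directions: multiply the probabilities
            scores[candidate] = p_fwd * p_bwd
        # otherwise the candidate is discarded (it only holds one way)
    return scores
```

A candidate that never appears in the backward table gets a backward probability of zero and is dropped, which is the "discarded" step above.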

I'm learning a lot of these words as I go, so questions like these are really helpful for me - thanks.