Comment by Ygg2

18 days ago

"Brute force" here is about the amount of data you're ingesting. It's no Alpha Zero, which learns from scratch.

What? Either option requires sufficient data. "Brute force" implies iterating over all combinations until you find the best weights; backprop is an optimization technique.

  • In the context of the grandparent's post:

    > You determine the weights via brute force. Simply running a large amount of data where you have the input as well as the correct output

    Brute force just means guessing all possible combinations. A dataset containing most human knowledge is about as brute force as you can get.

    I'm fairly sure that Alpha Zero's training data is generated by Alpha Zero itself, via self-play. But it's not an LLM.

    • No, a large dataset does not make something brute force. Rather than backprop, an example of brute force would be taking a single input/output pair, then blindly sampling the model's parameter space until you find a sufficiently close match.

      The sampling stage of Evolution Strategies at least bears a resemblance, but even that uses the samples to estimate a gradient and descend it strategically. Backprop, meanwhile, is about as far from brute force as you can get.
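
      The contrast can be sketched with a toy example (a hypothetical one-weight model; all names and numbers here are mine, not from the thread). Brute force blindly samples the parameter space until the loss happens to be small; gradient descent follows the derivative that backprop would compute:

      ```python
      import random

      # Toy problem: learn w such that w * x matches y_true for one input/output pair.
      x, y_true = 3.0, 6.0
      loss = lambda w: (w * x - y_true) ** 2

      # Brute force: sample parameters at random until one happens to fit.
      random.seed(0)
      w_bf = None
      for _ in range(100_000):
          cand = random.uniform(-10, 10)
          if loss(cand) < 1e-4:
              w_bf = cand
              break

      # Gradient descent: follow the analytic derivative dL/dw = 2*(w*x - y)*x.
      w_gd = 0.0
      for _ in range(100):
          grad = 2 * (w_gd * x - y_true) * x
          w_gd -= 0.05 * grad

      print(w_bf, w_gd)  # both land near 2.0
      ```

      The brute-force loop only works here because the space is one-dimensional; with billions of parameters, blind sampling never terminates, which is the point being made about backprop.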