Comment by woah

17 days ago

Getting "bitter lesson" vibes from this post

The bitter lesson is very little of the sort.

If we had unlimited memory, compute and data we'd use a rank N tensor for an input of length N and call it a day.

Unfortunately N^N grows rather fast and we have to do all sorts of interesting engineering to make ML calculations complete before the heat death of the universe.

  • > Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.

  • > To solve mnist without mathematical tricks like convolutions or attention heads you would need 2.5e42 weights. Assuming that you're using 16-bit weights, that's 5e42 bytes. A yottabyte is 1e24 bytes.

      That is, you'd need 5 exa-yottabytes to solve it.

      Currently the whole world has around 200 zettabytes of storage.

      In short, for the next ~120 years mnist will need mathematical tricks to be solved.
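      The arithmetic above can be sanity-checked in a few lines; the weight count, the ~200 ZB global-storage figure, and the ~2-year storage-doubling assumption are all taken on faith from the comment, not measured:

      ```python
      import math

      weights = 2.5e42              # claimed weight count for a "no tricks" MNIST model
      bytes_needed = weights * 2    # 16-bit weights = 2 bytes each -> 5e42 bytes
      yottabyte = 1e24

      # 5e42 bytes / 1e24 bytes per YB = 5e18 yottabytes, i.e. "5 exa-yottabytes"
      print(bytes_needed / yottabyte)

      world_storage = 200e21        # ~200 zettabytes, rough global total
      # How many doublings of world storage until the model fits?
      doublings = math.log2(bytes_needed / world_storage)   # ~64 doublings
      # At one doubling roughly every 2 years, that's on the order of 130 years,
      # which is where a "next ~120 years" estimate comes from.
      print(doublings * 2)
      ```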


  • I think you are being pedantic here; business decisions aren't made purely on cost, but on brittleness, maintenance, and time to market.

    You are assuming you can match Gemini's performance and Google's engineering resources, and that costs stay constant into the future.

    • >You are assuming you can match Gemini's performance

      I'm not assuming. We already did, 18 months ago with better performance than the current generation of Gemini for our use case.

      You're falling into the usual trap of thinking that because big tech spends big money it gets big results. It doesn't. To quote a friend who was a manager at google "If only I could get my team of 100 to be as productive as my first team of three.".