Comment by jomohke

2 days ago

You're validly critiquing where it is now.

The hype people are excited because they're guessing where it's going.

This is notable because it's a milestone that was not previously possible: a driver that works, from someone who spent ~zero effort learning the hardware or driver programming themselves.

It's not production ready, but neither is the first working version of anything. Do you see any reason that progress will stop abruptly here?

Not a huge fan of @sama, but he is quoted as saying: this is the worst these models will ever be!

Puts all criticism in a new perspective.

  • That's like Bill Gates saying XP is the worst Windows will ever be

    • Not Windows: Operating systems. We did get more capable operating systems. The point of the quote is "this is the worst the SOTA will ever be".

      If Windows XP were fully supported today I still wouldn't use it, personally, despite having respect for it in its era. The core technologies of newer OSes (how the sandboxing, security, memory, and driver stacks are implemented, for example) have vastly improved.

>> Do you see any reason that progress will stop abruptly here?

I do. When someone thinks they are building next-generation super software for $20 a month using AI, they conveniently forget that someone else is paying the remaining $19,980 for compute power and electricity.

People extrapolate from new leaps in invention far too early, though, believing each leap is about to become the standard. Look at cars, airplanes, phones, etc.

After we landed on the moon people were hyped for casual space living within 50 years.

The reality is that it often takes much, much longer, because an invention isn't isolated to itself: it requires integration into the real world and all the complexities it meets there.

Even more so: we may get AI models that can do anything perfectly, but they will require so much compute that only the richest of the rich can use them, and they will effectively not exist for most people.

> Do you see any reason progress will stop abruptly here?

Yeah: money and energy, and the fundamental limitations of LLMs. I'm obviously guessing as well, because I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯

I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.

  • At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.

    I do agree that exponential progress to AGI is speculation.

  • You think all the AI companies will never release a better model, days after they all released better models?

    That is a position to take.

  • I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.

    I think of it as just another leap in human-computer interface for programming, and a welcome one at that.

    • If you imagine it just keeps improving, the end point would be some sort of AGI, though. Logically, once you have something better at making software than humans are, you can ask it to make a better AI than we were able to make.
