Comment by marcyb5st

6 days ago

Even if it pops, I think the cat is already out of the bag.

About LLMs: Maybe there won't be much training going on, as we may be starting to hit diminishing returns with the current model architectures/approaches, but I believe inference is here to stay. Scaffolding code, trivial functions, ... are things that LLMs excel at, and once you get used to offloading those to the LLM it is really hard to go back to doing it manually.

About image/video generation: Here I believe there is even more to explore, and I consider it separate from LLMs in the context of AI winters/bubbles, especially the video generation part.

I am not in the field, but I believe the appeal of being able to create movies without needing xM$ for actors' salaries is huge. And if you can also replace the VFX with video generation, then it is a three-birds-with-one-stone scenario (you pay for AI compute and in return replace actors, VFX specialists, and the compute costs of render farms).

So, I don't believe there is a scenario in which electricity prices plummet as you describe. Maybe locally, around datacenters that were built primarily for LLM training, but not globally.

> Scaffolding code, trivial functions, ... are things that LLMs excel at, and once you get used to offloading those to the LLM it is really hard to go back to doing it manually.

True, but local models already cover that use case very well and consume little power doing so.
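
As a minimal sketch of what that workflow looks like, assuming a locally running Ollama server with a code model already pulled (the model name and prompt here are illustrative):

```python
import requests

# Ask a locally hosted model, via Ollama's REST API on its default port,
# to scaffold a trivial function. The request never leaves localhost,
# so no cloud inference is involved.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",  # assumption: any locally pulled code model
        "prompt": "Write a Python function that parses an ISO 8601 date string.",
        "stream": False,  # return one complete response instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated scaffolding code
```

Because the model only runs for a short burst per request on local hardware, the marginal energy cost of this kind of scaffolding work is small.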

  • That is a fair point. Honestly, for programmers/technical people I believe you are right, and it is already trivial to get started. However, for non-technical people I believe it is just easier to open the Claude/Gemini/OpenAI web interfaces and chat away, especially if they already have great integrations (say, Google Drive) out of the box.