If the industry doesn't fear open-weight models, it should. There is no moat.
The moat is building GPU infrastructure to serve inference at scale.
Not just building GPU infrastructure to serve inference at scale, but also constraining GPU supply and pricing it out of reach for individuals who want to run open models.
I'm not entirely sure what to do about it, but the fact that some providers still release open models gives me hope.
Why? Almost all of the open-source models end up sucking for general-purpose use, and they require a decent bit of effort to make even remotely usable for specialized use cases.
From what I could tell (Corridor Digital mentioned this as well), actual pros use local models through ComfyUI rather than prompting these proprietary models.
Control, consistency, and support for specialized workflows are more important, even if that comes at the expense of model quality.
Because from what I understand, the whole premise of billionaires pouring all their money into AI is that either:
1. AI succeeds, where success is defined as being able to raise prices really high, or
2. AI fails and gets a bailout.
But if there are alternatives I can run locally, they can't raise prices that high, because at some point I'd rather run stuff locally even if it's not state of the art. Personally, USD 200 a month is already too expensive.
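The local alternative effectively caps what they can charge: once the subscription costs more than amortizing your own hardware, switching wins. A back-of-the-envelope break-even, with entirely hypothetical prices (none of these figures come from any real vendor):

```python
# Break-even sketch for local inference vs. a hosted subscription.
# All numbers are illustrative assumptions, not real quotes.
subscription_per_month = 200   # USD, hosted top-tier plan (assumed)
local_rig_cost = 2_400         # USD, one-time GPU + parts (assumed)
local_power_per_month = 30     # USD, electricity estimate (assumed)

# Months until the one-time rig cost is recovered by the monthly savings.
break_even_months = local_rig_cost / (subscription_per_month - local_power_per_month)
print(round(break_even_months, 1))  # -> 14.1
```

Under these assumed numbers the rig pays for itself in a bit over a year, which is why a provider can't push subscription prices arbitrarily high while decent open-weight models exist.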
And the prices can only go up, right?
There are two scenarios going forward that I can see.
Either there's a massive reduction in inference and training costs thanks to new discoveries, in which case the big hardware hoarders have no moat anymore; or nothing new is discovered, and they won't be able to scale any further.
Either way it doesn't sound good for them.
You're saying it needs at most minor tweaks when used for something specific, like a product support channel?
For the long-term value of a horribly overpriced, debt-ridden corporation, it matters quite a lot whether open source catches up to "adequate" in less than a decade, rather than taking several decades for some uses and even longer for others.
At the moment. AI company management is not so myopic and can see the risk that this could change.
FYI, there are no open-source models, only open-weight ones. Open source means you get the source code.