Comment by miki123211
2 days ago
I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless; there's a lot you can do with AI already. But many use cases that are obvious even now, not only in retrospect, will only become possible once the technology matures.
Some people even figured it out in the 80's. Sears founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Today I learned that Sears founded Prodigy!
Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.
4 replies →
My favorite anecdote about Sears is from Starbucks' current HQ: the building used to be a Sears warehouse. Before renovation, the first-floor walls next to the elevators displayed Sears' "commitment to customers" (or something like that).
To me it read like something written by Amazon decades earlier: Sears promised that customers would be 100% satisfied with their purchase, and if for whatever reason they weren't, they could return it and Sears would pay the return transportation charges.
11 replies →
:-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block.
There were quite a few small ISPs in the 1990s. Even Bill Gothard[0] had one.
[0]https://web.archive.org/web/19990208003742/http://characterl...
3 replies →
Blame short-sighted investors asking Sears to "focus".
11 replies →
For more on this -- and how Sears had everything it needed (and more) to be what Amazon became -- see this comment from a 2007 MetaFilter thread: https://www.metafilter.com/62394/The-Record-Industrys-Declin...
1 reply →
This is a great example that I hadn't heard of. It reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.
A16Z once talked about how the scars of being too early cause investors and companies to become fixated on the idea that something will never work. Then some younger people who never got burned try the same idea, and it works.
Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies: they tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.
Reminds me of Elon not taking no for an answer. He did it twice, with massive success.
A true shame to see how he's completely lost his way with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.
And now he's run out of tricks and, more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.
3 replies →
On the flip side, they didn't actually learn the lesson that it was a matter of immature tech with relatively limited reach. By the time the mid-90s came around, "the internet is just a fad" was pretty much the sentiment from Sears' leadership.
They literally killed their catalog sales right when they should have been ramping them up and putting them online. They could easily have beaten Amazon at everything other than books.
My cousin used to tell me that things work because they were the right thing at the right time. I think he used Amazon as his example.
But I guess in startup culture, one has to die trying to find the right time. Sure, you can do surveys to get a feel for it, but the only way to ever find out whether it's the right time is user feedback once it's launched, and over time.
The Newton at Apple is another great one, though they of course got there eventually.
They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”.
The problem is that ISPs became a utility, not some fountain of unlimited growth.
What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.
I agree, though: it's fundamentally a utility, which means there's more value in proper government authority than in private interests.
Sears started Prodigy to become Amazon, not Comcast.
1 reply →
> We're clearly seeing what AI will eventually be able to do
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem: there is an infinite number of new things that need to be put into the dataset, and piling them on just reduces the quality of responses.
For example, the "there are three 'b's in blueberry" problem was caused by all the training data generated in response to "there are two 'r's in strawberry". It's a systemic issue; no amount of data will solve it, because LLMs will -never- be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.
Exactly. Books are still being translated by human translators.
I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.
GPT-5 output for example:
Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days:

Welgelegen
Buitenrust
Nooitgedacht
Rustenburg

Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.
5 replies →
By definition, transformers can never exceed average.
That is the thing, and what companies pushing LLMs don't seem to realize yet.
2 replies →
> Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.
I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.
Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it than most humans, and far faster.
LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB and it seems to have pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.
Merely being able to understand language or having a good memory is not sufficient, on its own, to code or do much else. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
> it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
That's just it. LLMs are a component: they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say the chip is sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.
> If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.
The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
Microsoft claims that they have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen...
"MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."
Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.
If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care...
I think x is already higher than y for me.
1 reply →
Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.
So newer chips will not be exponentially better, only incrementally improved, and so unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
The reason the internet, smartphones, and computers saw exponential growth from the 90s is the underlying increase in computing power. I personally used a 50 MHz 486 in the 90s and now use an 8c/16t 5 GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
> Scaling AI will require an exponential increase in compute and processing power,
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that, of course, but I take the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs.
If we suppose that ANNs are more or less accurate models of real neural networks, the reason they're so inefficient is not algorithmic but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms, if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, since it's dead silicon, it could not be changed and iterated on.
15 replies →
DeepMind was experimenting with this a few years ago: https://github.com/google-deepmind/lab
Having AI agents learn to see, navigate, and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become AGI (if that is possible).
They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.
> Scaling AI will require an exponential increase in compute and processing power,
I think there is something more happening with AI scaling: the cost of serving each additional user is a lot higher. Compare to the big initial internet companies: you added one server and you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue covers that it's hard to break even, even with an actual paid subscription.
I don't fully get why; inference costs are way lower than training costs, no?
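To make that concrete, here is a purely illustrative back-of-envelope sketch in Python; every number in it is an assumption picked to show the shape of the problem, not a real figure from any provider:

    # Purely illustrative: hypothetical per-user monthly serving cost for an
    # ad-funded web service vs. a heavy LLM chat user. All numbers are assumptions.

    # Ad-funded web service (assumed figures)
    page_views_per_month = 300
    serving_cost_per_1k_views = 0.01   # assumed dollars of compute/bandwidth per 1k views
    web_cost = page_views_per_month / 1000 * serving_cost_per_1k_views

    # Heavy LLM user (assumed figures)
    tokens_per_day = 200_000           # assumed long chat / agent sessions
    cost_per_million_tokens = 5.00     # assumed blended inference price in dollars
    llm_cost = tokens_per_day * 30 / 1_000_000 * cost_per_million_tokens

    print(f"web serving cost per user/month:   ${web_cost:.4f}")
    print(f"LLM inference cost per user/month: ${llm_cost:.2f}")
    # With these made-up numbers the LLM user costs thousands of times more to
    # serve, which is the gap a flat subscription or ad revenue has to cover.

The point is only the shape of the comparison: inference may be cheaper than training, but it still scales roughly linearly with usage in a way that static web serving never did.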
We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI.
> We are already at the limit of how small we can scale chips
I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
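For anyone unfamiliar with what distillation looks like in practice, here is a minimal sketch (toy models and random stand-in data, purely illustrative): a small "student" is trained to match the softened output distribution of a larger "teacher", so when the teacher improves, the student can be re-distilled to inherit much of that improvement at a fraction of the inference cost.

    # Minimal knowledge-distillation sketch with toy MLPs and random data.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
    student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0  # soften the distributions so relative preferences carry over

    for step in range(100):
        x = torch.randn(64, 32)  # stand-in for real training inputs
        with torch.no_grad():
            teacher_logits = teacher(x)
        student_logits = student(x)
        # KL divergence between softened teacher and student distributions
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Real distillation pipelines add a loss term against ground-truth data and use actual corpora, but the mechanism is the same.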
> so unless the price of electricity comes down exponentially
This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.
> Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
"We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].
[1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...
> doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
An implementation of inference for some specific ANN in fixed-function analog hardware can probably pretty easily beat a commodity GPU by a couple of orders of magnitude in perf per watt, too.
> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.
> The groundwork has been laid, and it's not too hard to see the shape of things to come.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
Is it still giving people headaches and making them nauseous?
Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.
Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.
1 reply →
As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available.
Oh, like RealPlayer in the late 90's (buffering... buffering...)
RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records.
2 replies →
> We're clearly seeing what AI will eventually be able to do
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
> I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.
It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic.
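A quick sketch of that shape (the parameter values are arbitrary, just for illustration): the logistic curve K / (1 + e^(-r(t - t0))) is nearly indistinguishable from an exponential early on, then flattens out as it approaches the carrying capacity K.

    # Illustrative only: logistic vs. exponential growth with arbitrary parameters.
    import math

    K, r, t0 = 1000.0, 0.5, 20.0   # carrying capacity, growth rate, midpoint (assumed)

    def logistic(t):
        return K / (1.0 + math.exp(-r * (t - t0)))

    def exponential(t):
        # exponential matched to the logistic curve's early-phase behaviour
        return logistic(0) * math.exp(r * t)

    for t in (0, 5, 10, 15, 20, 25, 30):
        print(f"t={t:2d}  logistic={logistic(t):8.1f}  exponential={exponential(t):12.1f}")
    # Early on the two track each other closely; later the logistic curve
    # saturates near K while the exponential keeps compounding.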
True, but adoption of AI has certainly seen exponential growth.
Improvement of models may not continue to be exponential.
But models might be good enough; at this point it seems more like they need integration and context.
I could be wrong :)
4 replies →
Speaking of Netflix -
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
Fwiw, LLMs are also revolutionary. There's currently more anti-AI hype than AI hype, imho. As in, there are literally people claiming they're completely useless and not going to change a thing. Which is crazy.
4 replies →
You're right, and I also think LLMs have an impact.
The issue is that, the way the market is investing, it is looking for massive growth, in the multiples.
That growth can't really come from cutting costs. It has to come from creating new demand for new things.
I think that's what hasn't happened yet.
Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Are they going to lead to the creation of a whole new consumption medium?
2 replies →
I see the point at the moment on “low-quality advertising”, but we are still far from high-quality video generated by AI.
It's the equivalent of those cheap digital effects: they look bad in a Hollywood movie, but they let students shoot their action home movies.
4 replies →
It's quite incredible how fast the generative media stuff is moving.
The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable last year, when OpenAI released Sora (closed/hosted).
As long as you do not make ads with four-fingered hands, like those clowns ... :)
https://www.lapresse.ca/arts/chroniques/2025-07-08/polemique...
8 replies →
> I did think the same thing about the 8bit era of video games.
Can you elaborate? That sounds interesting.
Too soon to get it to market, though it obviously all sold perfectly well; people were sufficiently wowed by it.
> Netflix over DialUp
https://en.wikipedia.org/wiki/RealNetworks
There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.
There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.
> Progress in AI has always been a step function.
There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.
> There's also no evidence that it won't
There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter spent in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we ran short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique came along yesterday to dislodge LLMs, we are in for a new winter.
1 reply →
What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.
We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask "well, what is the evidence they won't?", but it's a silly question; the evidence is our knowledge of how things work.
Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.
I don’t follow. We have benchmarks that have survived decades and illustrate the steps.
What do you call GPT 3.5?
rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.
Uh, there have been multiple repeated step-ups in the last 15 years. The trend line is up, up, up.
The innovation here is that the step function didn't traditionally go down
Is some potential AGI breakthrough in the future going to be from LLMs or will they plateau in terms of capabilities?
It's hard for me to imagine Skynet growing from ChatGPT.
The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.
I'm starting to agree with this viewpoint. As the technology solidifies around roughly what we can do now, the aspirations are going to have to be cut back until there are a couple more breakthroughs.
I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.
What will it mean if the cutting-edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be DigitalOcean.