Comment by alsetmusic
2 days ago
It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.
I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.
I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not useless; there's already a lot you can do with AI. But many use cases that seem obvious, and not only in retrospect, will only become possible once it matures.
Some people even figured it out in the 80's. Sears founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Today I learned that Sears founded Prodigy!
Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
36 replies →
This is a great example that I hadn't heard of, and it reminds me of Nintendo trying to become an ISP when it built the Family Computer Network System in 1988.
A16Z once talked about how the scars of being too early cause investors and companies to become fixated on the idea that something will never work. Then some new, younger people who never got burned try the same idea, and it works.
Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.
4 replies →
On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90s came around, "the internet is just a fad" was pretty much the sentiment from Sears' leadership.
They literally killed their catalog sales right when they should have been ramping them up and putting them online. They could easily have beaten Amazon at everything other than books.
My cousin used to tell me that things work because they were the right thing at the right time. I think Amazon was the only example he gave.
But I guess in startup culture one has to die trying to find the right time. Sure, you can do surveys to get a feel for it, but the only way to find out whether it's the right time is user feedback once it's launched, and over time.
The Newton at Apple is another great one, though they of course got there eventually.
1 reply →
The problem is that ISPs became a utility, not some fountain of unlimited growth.
What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.
I agree though, it's fundamentally a utility, which means there's more value in proper government authority than in private interests.
2 replies →
> We're clearly seeing what AI will eventually be able to do
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.
For example: the "there are three b's in blueberry" problem was caused by so much training data responding to "there are two r's in strawberry". It's a systemic issue; no amount of data will solve it, because LLMs will *never* be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
10 replies →
> Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.
I consider myself an LLM skeptic, but gee, saying they are a "dead end" seems harsh.
Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it, and far faster, than most humans.
LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in gigabytes and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could manage. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.
Merely being able to understand language or having a good memory is not sufficient to code, or do much else, on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
1 reply →
> If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.
The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
3 replies →
Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even current LLMs take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.
So newer chips will not be exponentially better, just incrementally so; unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
The reason the internet, smartphones, and computers saw exponential growth from the 90s onward is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
> Scaling AI will require an exponential increase in compute and processing power,
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advance "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that, of course, but I take the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs.
18 replies →
> Scaling AI will require an exponential increase in compute and processing power,
I think there is something more happening with AI scaling; the cost per user is a lot higher. Compare with the big initial internet companies: you added one server and you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they cost so much more than ad revenue brings in that it's hard to break even, even with an actual paid subscription.
1 reply →
We know for a fact that human-level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere between 20 and 100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
1 reply →
> We are already at the limit of how small we can scale chips
I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
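To make that concrete, here's a minimal sketch of the classic distillation loss (after Hinton et al., 2015), assuming PyTorch; teacher_logits and student_logits are hypothetical stand-ins for the outputs of a big and a small model on the same batch:

    # Minimal knowledge-distillation loss sketch (assumes PyTorch).
    # teacher_logits / student_logits are hypothetical placeholders.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions with a temperature, then push the
        # student's log-distribution toward the teacher's distribution.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence; the T^2 factor keeps gradient scale comparable.
        return F.kl_div(log_student, soft_teacher,
                        reduction="batchmean") * temperature ** 2

The student trains against the teacher's softened output distribution rather than just the raw labels, which is why a better big model tends to lift the smaller models distilled from it.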
> so unless the price of electricity comes down exponentially
This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.
> Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
"We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].
[1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...
2 replies →
[dead]
> The groundwork has been laid, and it's not too hard to see the shape of things to come.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
Is it still giving people headaches and making them nauseous?
2 replies →
As someone who was a customer of Netflix from the dialup world to the broadband one, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" phase of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s, when the tech for widespread broadband was just fundamentally not available.
Oh, like RealPlayer in the late 90's (buffering... buffering...)
3 replies →
> We're clearly seeing what AI will eventually be able to do
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
> I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.
It's a logical fallacy to assume that because some technology experienced a period of exponential growth, all technology will always experience constant exponential growth.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
6 replies →
Speaking of Netflix -
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
23 replies →
> I did think the same thing about the 8bit era of video games.
Can you elaborate? That sounds interesting.
1 reply →
There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.
There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.
> Progress in AI has always been a step function.
There's decidedly no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially for such a broad statement.
5 replies →
The rodent -> Homo sapiens brain scaled up just fine? It's tenuous evidence, but not zero.
Uh, there have been multiple repeated step-ups in the last 15 years. The trend line is up, up, up.
The innovation here is that the step function didn't traditionally go down
> Netflix over DialUp
https://en.wikipedia.org/wiki/RealNetworks
Is some future AGI breakthrough going to come from LLMs, or will they plateau in terms of capabilities?
It's hard for me to imagine Skynet growing out of ChatGPT.
The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.
I'm starting to agree with this viewpoint. As the technology solidifies to roughly what we can do now, the aspirations are going to have to get cut back until there are a couple more breakthroughs.
I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.
What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.
> A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure)
If you had actually invested in pure-play AI companies and Nvidia, the shovel seller, a couple of years ago and were selling today, you would have made a pretty penny.
The hard thing with potential bubbles is not avoiding them entirely; it's being there early enough and not being the one left holding the bag at the end.
Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.
Hard to know what the OP asked for, but if they asked about AI specifically, the advice does not need to be holistic.
Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.
Personal opinion: I'm bearish on the shovel seller long term because the companies training AI are likely to build their own hardware. Google already does this. It seems like only a matter of time before the rest of the Mag 7 join. The rest of the buyers aren't growing enough to offset that loss, imo.
1 reply →
What's the old Rockefeller quip? When your shoe shiner is giving you stock advice, it's time to sell (you may have heard the taxicab driver version).
It depends on how risk averse you are and how much money you have in there.
If you're happy with those returns, sell. FOMO is dumb. You can't time the market; the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in the hand is worth two in the bush, right? That money isn't worth anything until it is realized[0].
Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.
If you're a little risk averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.
If you wanna YOLO, then YOLO.
My advice? Don't let hindsight get in the way of foresight.
[0] I had some Nvidia stocks at 450 and sold at 900 (before the split, so would be $90 today). I definitely would have made more money if I kept them. Almost double if I sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having this debt paid off is still a better decision in my mind because I can't predict the future. I could have sold 2 weeks later and made less! Or even in April of this year and made the same amount of money.
I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.
I'm just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That's even more diversification than buying Microsoft.
It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).
It's a cliche but people really underestimate and try to downplay the role of luck[0].
[0] https://www.scientificamerican.com/blog/beautiful-minds/the-...
Luck. And capturing a strong network effect.
The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both sides' access.
People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.
2 replies →
Success happens when luck meets hard work.
1 reply →
Ability vastly increases your luck surface area. A single poker hand involves a lot of luck, and even a single game does, but over long periods ability starts to strongly differentiate people's results.
5 replies →
[flagged]
12 replies →
Every billionaire could have died from childhood cancer.
Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be one founder of Facebook.
Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either.
5 replies →
I view success as the product of three factors, luck, skill and hard work.
If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.
10 replies →
You should read Careless People if this boggles your mind.
You're thinking of Ordinary People by John Lennon
1 reply →
Giving a 1.5 million salary is nothing for these people.
It shouldn't be mind-boggling. They see revolutionary technology that has the potential to change the world and is changing the world already. Making a gamble like that is worth it, because losing is trivial compared to the upside of success.
You are where you are, and not where they are, because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.
Obviously Mark is where he is partly because of luck. But he's not an idiot, and clearly it's not all luck.
But how is it worth it for Meta, since they won't really monetize it?
At least the others can kinda bundle it as a service.
After spending tens of billions on AI, how has it added a single dollar to Meta's revenue?
3 replies →
When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.
It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.
> meritocracy is a comforting lie.
Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.
Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.
Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.
The answer is fairly straightforward. It's fraud, and lots of it.
An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.
An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.
An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, which is illegal both within the US and internationally as "dumping".
Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If the bubble takes too long to burst, if Zuckerberg gets too much time to shit up Facebook and advertisers get too much time to wise up to how many of their impressions are bots, they might collapse entirely.
The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.
As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.
He will say whatever he wants and because the returns have been pretty decent so far, people will just take his word for it. There's not enough class A shares to actually force his hand to do anything he doesn't want to do.
2 replies →
What is a good resource to read about the ad fraud? This is the first I'm hearing of that.
1 reply →
Ha ha.
You used “honest” and “businessman” in the same sentence.
Good one.
I'll differ from the sibling posters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; meanwhile, the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud, besides those which are coercion, and that's not a subject I'm interested in having an opinion about.
Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.
Gee, what makes it grow so big though? The power of human ambition?
And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.
To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept corresponding to a mental image of a human or group of humans. That's some sort of default framing, the kind that can only serve to boggle the mind, considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.
For ours are human minds, optimized to view things in person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, performing an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.
See also: "Beyond Power / Knowledge", Graeber 2006.
Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find these quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical word-salad diatribes.
It's unique to this site, and these kinds of comments all have an eerily similar vibe.
16 replies →
> record-setting bonuses they were doling out to hire the top minds in AI
That was soooo 2 weeks ago.
I think we will see the opposite. Even if we made no further progress with LLMs, we'd still have huge advancements and growth opportunities in enhancing workflows and tuning them to domain-specific tasks.
I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.
Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.
I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.
My feeling is that most of the "huge advancements" are not going to benefit the people selling AI.
I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.
Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. What is the actual business model? You can sell inference-as-a-service, of course, but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high, it seems like there is no margin in it. Nvidia captures so much value in the compute infrastructure, competition pushes inference prices down, and what is left?
The people who make money serving end users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.
You'll probably have a player that sells privacy as well.
I don't see how this works, as the cost of running inference is so much higher than the revenue earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where models at the cost-quality level of GPT-5 and Claude 4.1 remain SOTA.
With GPT-5 I'm not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases, I think they'd be profitable.
3 replies →
> It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted
Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.
> …lot of jobs will disappear.
So it’s true that AI will kill jobs, but not in the way they’ve imagined?!
> A couple of years ago, I asked a financial investment person about AI as a trick question.
Why do you assume these people know any better than the average Joe on the street?
Study after study demonstrates that they can't even keep up with market benchmarks, so how would they be any wiser about what's a fad and what's not?
I think the point of the question was to differentiate this person from the average Jane on the Street.
But half the Janes will hold similar views and positions.
1 reply →
> It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
Everything Zuck has done since the "dawn of AI" has been intended to subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing, emergently, that AI also threatens social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as real a way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?
I believe exactly zero percent of the decision to make Llama open-source and free was altruistic; it was simply an attempt to push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is strategically intended to devalue the AI incumbents. AI is very much an existential threat to Meta.
Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.
> AI is very much an existential threat to Meta.
How so?
“you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”
1 reply →
"little shortsighted"
Or this knowingly could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots, and then hit the pause button to let all that new talent figure out the next step.
The line was to buy Amazon as undervalued, a la IBM or Apple, based on its cloud computing capabilities relative to the future (projected) needs of AI.
Correction, if I may: a lot of AI jobs will disappear. A lot of the usual jobs that were put on hold will return. This is good news for most of humankind.
When will the investors run out of money and stop funding hypes?
As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train; I am personally convinced this is the technology of our lifetime.
You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.
In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
What you've described is reasonable, and a clear takeaway is that AI is a time-saving tool you should learn.
Where I share the parent's concern is with the claim that AI is useless, which isn't coming from your post at all, but which I have definitely seen in the programmer community to this day. So the parent's concern that some programmers are missing the train is unfortunately completely warranted.
1 reply →
I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.
4 replies →
Why is it a train? If it's so transformative, surely I can join in a year or so?
I'll say it again, since I've said it a million times: it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble".
Or, quite similarly, the internet bubble of the late '90s.
Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.
How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?
I'm an exec lol
If you really think this, `baby` is an apt name! The internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 years old, then sure, maybe LLMs are the biggest.
I also disagree about missing the train: these tools are so easy to use that a monkey (not even a smart one like an ape, more like a howler) can use them effectively. Add in that the tooling landscape is changing rapidly; e.g., everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome.)
The hard parts are running LLMs locally (What quant do I use? K/V cache quant? Tradeoffs? llama.cpp or Ollama or vLLM? What model? How much context can I cram into my VRAM? What if I do CPU inference? Fine-tuning? etc.) and creating/training them.
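To illustrate just the "running locally" part, here's a minimal sketch, assuming the llama-cpp-python package and a quantized GGUF file; the model path and the Q4_K_M quant are hypothetical examples, not recommendations:

    # Minimal local-inference sketch (assumes llama-cpp-python).
    # The model path and Q4_K_M quant are hypothetical examples.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_ctx=8192,       # context window: more context, more memory
        n_gpu_layers=-1,  # offload all layers to GPU; 0 = pure CPU inference
    )
    out = llm("Q: What is a quantized model? A:", max_tokens=128)
    print(out["choices"][0]["text"])

Most of the questions above are knobs on these few lines: the quant picks the file, n_ctx trades context length for memory, and n_gpu_layers decides how much runs on the GPU versus the CPU.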
Tu quoque
> It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
If AI is going to be integral to society going forward, how is it shortsighted?
> She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure).
So you would prefer a 2x gain over a 10x gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT over the past few years. Also, a "financial investment person"? The anecdote feels made up.
> She skillfully navigated the question in a way that won my respect.
She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?
> I personally believe that a lot of investment money is going to evaporate before the market resets.
But you believe investing in MSFT was a better AI play than going with the "hype", even when objective facts show otherwise. Why should anyone care what you think about AI, investments, and the market when you clearly know nothing about them?