Every other article these days on this site is about AI. And it's incredibly tedious and annoying.
Isn't it enough that clueless marketers who get their tech knowledge from Business Insider and Bloomberg are constantly harping on about AI?
It seems we as a community have resigned or given up in this battle for common sense. Maybe long ago. Still, there should be some form of moderation penalizing these shill posts that only glorify AI as being the future, the same way that not everything about crypto or the blockchain ended up on the FP. With AI, it seems we're looking the other way and are OK with it?
Or maybe it's me.
Not just you. Clearly useful tools are coming out of this AI craze, but a LOT of fluff.
Outside of pure tech companies, there's a lot of "Head of AI" hiring by CTOs to show "we are doing AI", regardless of whether they have found any application for it yet.
I've also seen a lot of product pivots to AI where they don't really have a need for it, or an explanation of the use case that AI helps with.
Further, I've seen a number of orgs that were laggards in their internal technology become incredibly distracted, thinking AI will solve for the fact that they don't have even a rudimentary 2010s-class IT org.
I think the comedown from this will be worse than crypto's: while there will be more real use cases, there is far more hype-based "we have to do something" adoption that hasn't found a use yet. A lot of orgs that remained wary of crypto got fully on the AI bandwagon. The investment must be an order of magnitude larger.
A lot of the fluff is all about boosting sales, IMO, which is where a lot of the money for tech comes from. When MBA types (a large chunk of tech's buyers) hear the promises of efficiency and replacing workers, they get excited very, very quickly, unless it requires lots of capital, in which case they might instead look at cheaper labor (think offshore). AI is the ultimate SaaS product to these types, or at least that is how it is pitched to them.

These people see tech workers and IP as just "resources": fungible bodies, not qualified professionals. Obviously this creates a lot of technology delivery issues and dysfunction. Places run by these types (many corporations) see technology as an expensive cost centre, secondary to the main business. I've seen this even in companies that had to pivot to being a tech business: market competition forced them to invest, but because it was reluctant investment, the old culture remains.

Engineers and other "builders/doers" are usually second class to the "decision makers" in these places. These are the places where engineers keep the place running, sometimes doing a lot of work and being absolutely critical, but get paid little and receive little recognition. This is a very common position for a software engineer outside the US.
With this kind of thinking often comes being a laggard in technology, as you put it: engineers are a "forced necessary cost" because competitors force us to keep up, not because we actually value it.
AI, in their minds, has vindicated that thinking, hence the excitement about it. As a product it is very easy to "sell/fluff" to these kinds of people; it really excites them. They think engineers are now the expendable people they always wanted them to be, rather than the people they had to put up with to get what they wanted. They now feel justified in having been "laggards": they have AI to do it cheaper than they would have had to pay an engineer before.
Yes, there's a lot wrong with the thinking above (overestimation of current capabilities, etc.), and genuinely innovative, growth-leading companies don't think this way. But the decision makers in these companies don't have that perspective. Much of technology-trend and corporate hype revolves around things you can sell to these decision makers, who often overpay for the wrong kind of technology if you sell to them right (think typical corporate RFQ/RFP processes). AI is an easy sell/dream to these people.
1 reply →
> Clearly useful tools are coming out of this AI craze, but a LOT of fluff.
Isn't this true of every boom? As A.C. Clarke said, you find the limits of the possible by venturing into the impossible.
1 reply →
It's you.
The AI discussions can indeed be repetitive and tiresome here, especially for regulars, but they already seem to be downweighted and clear off the front page quite fast.
But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
> It's you.
Not just him.
> But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
In your opinion (and admittedly others'), but that doesn't make the overhype any less tiresome. Yes, it is novel technology, but there's always novel technology, and it isn't all in one area, though you wouldn't know it from what hits the front page these days.
Anyway, it's useless to shake fists at the clouds. This hype will pass, just like all the others before it, and the discussion can again be proportional to the relevance of the topic.
10 replies →
> It's you.
I disagree.
How does any of that apply to this particular article? Isn't a broader historical perspective exactly what's needed if you want to be free from the immediate hype cycle?
One of my biggest irritations with HN comment sections is how frequently people seem to want to ignore the specific interesting thing an article is about and just express uninteresting and repetitive generic opinions about the general topic area instead.
It's a CACM article. Without having read this one, I'd say CACM articles on HN are absolutely appropriate.
That's not really a justification, in my view. The entire education industry is complicit in this circus. It's not just engineers hoping to get a payday; it's academics too, hoping to get funding and tenure.
CACM was totally complicit in spreading the blockchain hype: https://cacm.acm.org/?s=blockchain
That said, I'm not hating the player, people gotta eat. But I totally lack appreciation for the game.
1 reply →
It's been a common problem on HN. I remember when NodeJS came out it was exactly the same, and then again with the whole crypto craze.
Nah, it’s not just you.
AI is really neat. I don’t understand how a business model that makes money pops out on the other end.
At least crypto cashed out on NFTs for a while.
> I don’t understand how a business model that makes money pops out on the other end
Tractors and farming.
By turning what is traditionally a labour-intensive product into a capital-intensive one.
For now, the farmers who own tractors will beat the farmers who need to hire, house and retain workers (or half a dozen children).
This goes well for quite some time, once you can have 3 people handle acres and acres.
I'll be around explaining how coffee beans can't be picked by a tractor or how vanilla can't be pollinated with it.
5 replies →
> I don’t understand how a business model that makes money pops out on the other end.
What issues do you see?
I pay for ChatGPT and for Cursor, and to me that's money very well spent.
I imagine tools like Cursor will soon become common in other text-intensive industries, like law.
Agreed that the hype can be over the top, but these are valuable productivity tools, so I have some trouble understanding where you're coming from.
10 replies →
Good point about the business model. AI probably has more of a business model, even if the ones reaping the rewards are only 4 or 5 big corps.
It seems with crypto the business "benefits" were mostly adversarial (the winners were those doing crimes on the darknet, or enabling ransomware operators to get paid). The underlying blockchain tech itself, though, failed to replace transactions in a database.
The main value of AI today seems to be generative tech used to improve the quality of deepfakes, or to help everyone in business write their communication in an even more "neutral", non-human voice, free of any emotion, almost psychopathic. Like the dudes who write about their achievements on LinkedIn in the third person, only now it's psychopathy enabled by the machine.
Also, I've seen people who, without AI, are barely literate now sending emails that look like they've been penned by a postdoc in English literature. The result is that it's becoming a lot harder to separate the morons and knuckle-draggers from those who are worth reaching out to and talking to.
Yes, old man yelling at cloud.
3 replies →
Crypto is coming back for another heist. It will probably die down a bit once Trump finishes his term.
> every other article
On a quick count it seems to be more like 1/10. Maybe just ignore them and read something else?
I'm interested in the AI stuff personally.
My problem is the abuse of the term AI to the point where it has lost all meaning. I'd be all for a ban on the term in favour of the specific method driving the "intelligence", as I would rule out some of them qualifying simply because they are not capable of making intelligent decisions, even if they can make complex ones (looking at you, random forest).
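To make the random forest jab concrete, here's a minimal sketch (assuming scikit-learn is installed; the features and data are made up for illustration). The model happily produces a complex decision, but under the hood it's just majority voting over threshold-based trees, with nothing you'd call intelligence:

    # Minimal sketch, assuming scikit-learn; the data below is illustrative.
    # A random forest is majority voting over many decision trees:
    # complex decisions from stacked thresholds, no reasoning anywhere.
    from sklearn.ensemble import RandomForestClassifier

    X = [[25, 40_000], [47, 95_000], [33, 52_000], [58, 30_000]]  # [age, income]
    y = [0, 1, 1, 0]                                              # some binary label
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(forest.predict([[40, 60_000]]))  # a complex decision, not an intelligent one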
Do you mean that cryptocurrency submissions were penalized that way? I recall them being about as annoying and similarly filling the front page with uninformative submissions, but have not heard of such penalties. Same as with other subjects during their hype waves.
Well, AI probably is the future. It might not necessarily be LLMs (I personally don't rate LLMs), but enough people are interested in the field nowadays that it's almost certain AGI will happen in our lifetimes.
Honestly, I'm intrigued as to why you don't rate LLMs. Arguably the main reason AI got out of its winter is the emergence of LLMs.
1 reply →
Because roughly once a quarter there's a big release that raises expectations for the top AI companies. That brings up discussions, new articles, and eventually posts on the front page.
In 2020-2022 the HN front page was full of crypto news, mostly in a negative light, but still. And before that there were other hot bubble topics. It's the usual pattern.
The 1980s AI "boom" was tiny.
In the 1980s, AI was a few people at Stanford, a few people at CMU, a few people at MIT, and a scattering of people elsewhere. There were maybe a half dozen startups and none of them got very big.
Quite incorrect; even smaller colleges, like the one in Greeley, Colorado, had Symbolics machines, and there were threads of expert systems all throughout the industry.
The industry as a whole was smaller though.
The word sense disambiguation problem did kill a lot of it pretty quickly though.
Threads, yes. We had one Symbolics 3600, the infamous refrigerator-sized personal computer, at the aerospace company. But it wasn't worth the trouble. Real work was done with Franz LISP on a VAX and then on Sun workstations.
There were a lot of places that tried a bit of '80s "AI", but didn't accomplish much.
2 replies →
>> In the 1980s, AI was a few people at Stanford, a few people at CMU, a few people at MIT, and a scattering of people elsewhere.
Maybe that's the view from the US. In the '70s, '80s and '90s, symbolic and logic-based AI flourished elsewhere: in Europe, in the UK and France, with seminal work on program verification and model checking and rich collaborations on logic programming between mainly British and French institutions; in Japan, with the Fifth Generation Computer Systems project; and in Australia, with the foundational work of J. Ross Quinlan and others on machine learning, which at the time (late '80s and early '90s) meant primarily symbolic approaches, like decision tree learners.
But, as usual, the US thinks progress is only what happens in the US.
> Artificial life fizzled as a meta discipline
I've wondered for a while if Artificial Life is in its own winter, waiting for someone to apply the lessons of scale we learned from neural nets.
If you want to follow someone playing with this, there's Steve Grand, who wrote the original Creatures game and kickstarted something a bit more ambitious: https://www.kickstarter.com/projects/1508284443/grandroids-r...
There was this AI civilization approach shared here a couple of weeks ago, that I think is an interesting move in this direction - https://news.ycombinator.com/item?id=42035319
We're seeing artificial life come back as non-player characters in video games.
From “big program, small data” to “big data, small program” seems like a useful way to summarize the main shift from elaborate rules in the first generation, to huge piles of data today.
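To make that contrast concrete, a minimal sketch (assuming scikit-learn; the credit-approval example, its features, and its thresholds are invented purely for illustration):

    # "Big program, small data": the knowledge lives in hand-written rules.
    def expert_system_approve(income, debt):
        # every branch authored by a human domain expert
        return income > 50_000 and debt / income < 0.4

    # "Big data, small program": the program is tiny; the knowledge is in the data.
    from sklearn.tree import DecisionTreeClassifier

    X = [[60_000, 10_000], [20_000, 15_000], [80_000, 5_000], [30_000, 25_000]]  # [income, debt]
    y = [True, False, True, False]               # past approve/deny decisions
    model = DecisionTreeClassifier().fit(X, y)   # rules induced from examples
    print(model.predict([[55_000, 12_000]]))     # decision learned, not authored

Same interface on the outside, but the second version improves with more data rather than more expert time.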
The biggest problem was "expert systems and a flood of public money": that public money led to complacency and a lot of research in unproductive areas. It is private money that has really kickstarted the new systems, ever since Google acquired the AlexNet team.
I'm not an expert in the field, but I find this article incredible. For someone like me who didn't focus on AI in their CS degree, it's clear and entertaining.
Judea Pearl is a professor at UCLA, not Berkeley.
[flagged]
This comment sounds suspiciously AI-generated.
Because it is