Comment by halfmatthalfcat
5 days ago
The overconfidence/short-sightedness on HN about AI is exhausting. Half the comments are some weird form of explaining how developers will be obsolete in five years and how close we are to AGI.
> Half the comments are some weird form of explaining how developers will be obsolete in five years and how close we are to AGI.
I do not see that at all in this comment section.
There is a lot of denial and cynicism like the parent comment suggested. The comments trying to dismiss this as just “some high school math problem” are the funniest example.
[flagged]
congrats ur autistic, like me.
I went through the thread and saw nothing that looked like this.
I don’t think developers will be obsolete in five years. I don’t think AGI is around the corner. But I do think this is the biggest breakthrough in computer science history.
I worked on accelerating DNNs a little less than a decade ago and had you shown me what we’re seeing now with LLMs I’d say it was closer to 50 years out than 20 years out.
It's very clearly a major breakthrough for humanity.
Perhaps only for a very small part of humanity...
[flagged]
You could add an /s to your earlier comment to make it more obvious it was a joke, rather than the direct rebuttal it got taken as.
Greatest breakthru in compsci.
You mean the one that paves the way for ancient Egyptian slave worker economies?
Or totalitarian rule that 1984 couldn't imagine?
Or...... Worse?
The intermediate classes of society always relied on intelligence and competence to extract money from the powerful.
AI means those classes no longer have power.
Right, if people want to talk about being worried about a future with superintelligent AI, I think that's something almost everyone can agree is a worthy topic, maybe to different degrees, but that's not the issue in my mind.
What it feels like I see a lot are people who, because of their fear of a future with superintelligent AI, try to deny the significance of the event, if only because they don't _want_ to wrestle with the implications.
I think it's very important we don't do that. Let's take this future seriously, so we can align ourselves on a better path forward. I fear a future where we have years of bickering in the public forums over the veracity or significance of claims, driven by a subset of the public incapable of mentally wrestling with the wild fucking shit we are walking into.
If not this, what is your personal line in the sand? I'm not specifically talking to any person when I say this. I just can't help but to feel like I'm going crazy, seeing people deny what is right in front of their eyes.
I don’t typically find this to be true. There is a definite cynicism on HN especially when it comes to OpenAI. You already know what you will see. Low quality garbage of “I remember when OpenAI was open”, “remember when they used to publish research”, “sama cannot be trusted”, it’s an endless barrage of garbage.
It's honestly ruining this website; you can't even read the comment sections anymore.
But in the case of OpenAI, this is fully justified. Isn't that so?
Nobody likes the idea that this is only "economically superior AI": not as good as humans, but a LOT cheaper.
The "It will just get better" is bubble baiting the investors. The tech companies learned from the past and they are riding and managing the bubble to extract maximum ROI before it pops.
The reality is a lot of work done by humans can be replaced by an LLM with lower quality and nuance. The loss in sales/satisfaction/etc. is more than offset by the reduced cost.
The current crop of LLMs are enshittification accelerators, and that will have real effects.
Incredible how many HNers cannot see this comment for what it is.