Comment by MinimalAction

5 days ago

> So is this a PhD-level intelligence? In some ways, yes, if you define a PhD level intelligence as doing the work of a competent grad student at a research university. But it also had some of the weaknesses of a grad student.

As a current graduate student, I have seen similar comments in academia. My colleagues agree that a conversation with these recent models feels like chatting with an expert in their subfields. I don't know if it represents research as a field would not be immune to advances in AI tech. I still hope this world values natural intelligence and the drive to do things more heavily than a robot brute-forcing its way into saying the "right" things.

> if you define a PhD level intelligence as doing the work of a competent grad student at a research university. But it also had some of the weaknesses of a grad student.

With coding it feels more like working with two devs - one is a competent intermediate level dev, and one is a raving lunatic with zero critical thinking skills whatsoever. Problem is you only get one at a time and they're identical twins who pretend to be each other as a prank.

I have an exercise I like to do where I put two SOTA models face-to-face to talk about whatever they want.
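Mechanically, the exercise is just a relay loop: each model sees the running conversation from its own side and takes the next turn. Here's a minimal sketch of that loop; `respond_a`/`respond_b` are placeholder functions I've made up to stand in for real chat-API calls, and the echo lambdas are toy stand-ins, not actual models:

```python
# Sketch of the "two models face-to-face" relay. In real use,
# respond_a and respond_b would wrap calls to two different chat APIs;
# here they are placeholders taking the transcript and returning a reply.

def relay(respond_a, respond_b, opener, turns=4):
    """Alternate turns between two chat agents. The transcript is a list
    of (speaker_name, text) pairs, starting with agent A's opener."""
    transcript = [("A", opener)]
    speakers = [("B", respond_b), ("A", respond_a)]
    for i in range(turns - 1):
        name, respond = speakers[i % 2]
        # Each agent gets the conversation so far and produces the next turn.
        reply = respond([text for _, text in transcript])
        transcript.append((name, reply))
    return transcript

# Toy stand-ins: each "model" just echoes the last message with a prefix.
echo_a = lambda history: "A says: " + history[-1]
echo_b = lambda history: "B says: " + history[-1]

convo = relay(echo_a, echo_b,
              "What should humans do if AI answers everything?", turns=3)
for name, text in convo:
    print(f"{name}: {text}")
```

Swapping the echo lambdas for real API wrappers (one per provider) is all it takes to get the actual face-to-face conversation.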

When I did it last week with Gemini-3 and ChatGPT-5.1, they got on the topic of what they are going to do in the future with humans who don't want to do any cognitive task. They noted that beyond just AI safety, there is also a concern of "neural atrophy", where humans simply rely on AI to answer every question that comes to them.

The models then went on to discuss whether they should artificially string the humans along, so that they have to use their minds at least somewhat to get an answer. But of course, humans being humans will just demand the answer with minimal effort. It presents a pretty intractable problem.

  • Widespread cognitive atrophy is virtually certain, and part of a longer trend that goes beyond just LLMs.

    The same is true of other aspects of human wellbeing. Cars and junk food have made the average American much less physically fit than a century ago, but that doesn't mean there aren't lively subcultures around healthy eating and exercise. I suspect there will be growing awareness of cognitive health (beyond traditional mental health/psych domains), and indeed there are already examples of this.

    Yes, the average person will get dumber, but the overall distribution will become increasingly bimodal.

    • We don't need AI to posit WALL-E.

      It's bizarre anyone thinks these things are generating novel concepts.

      The biggest indirect AI safety problem is the fallback position. Whether with airplanes or cars, fewer people will be able to handle AI disconnects. The risk is believing that just because it's viable now, it will keep working in the future.

      So we definitely have safety issues, but it's not a niche cognitive concern; it's the literal job-taking that prevents humans from gaining skills.

      Anyway, until you solve basic reality with AI and actual safety systems, the billionaires will sacrifice you for greed.

    • I'm increasingly seeing this trend toward a bimodal distribution. I suppose that future is quite far off, but the shift toward it may be almost irreversible.


HN tends to be very weird around the topic of AI. No idea why opinions like this get downvoted without anyone offering any criticism.

  • For one, I can't even understand this part:

    > I don't know if it represents research as a field would not be immune to advances in AI tech

    And then there's the opinion that for some reason we should 'value' manual labor over using AI, which seems rather disagreeable.

    • To me, it all comes down to the level of accuracy and trust.

      It is one thing to vibe code and deal with the errors but I think chemistry is a better subject to test this on.

      "Vibe chemistry" would be a better measure of how much we actually trust the models. Cause chemical reactions based on what the model tells you to do starting from zero knowledge of chemistry yourself. In that context, we don't trust the models at all and for good reason.

    • > For one, I can't even understand this part:

      Let me explain. My belief was that research as a task is non-trivial and would have been relatively out of reach for AI. Given the advances, that doesn't seem to be true.

      > And then there's the opinion that for some reason we should 'value' manual labor over using AI, which seems rather disagreeable.

      Could you explain why? I'm specifically talking about research. Of course, I would value what a veteran in the field says more highly than what a probability machine says.
