Comment by orionsbelt

6 days ago

At what point would you be impressed by a human being if you asked it to help you with a task every 6 months from birth until it was 30 years old?

If you asked different people the above question, varying it by type of task or by which human, you would get different answers. But as time goes on, more and more people would become impressed with what the human can do.

I don't know when LLMs will stop progressing; all I know is that they continue to progress at a rate that is, to me, as astounding as a growing child's. Personally, I had never used LLMs for anything, but since o3 and Gemini 2.5 Pro, I use them all the time for all sorts of stuff.

You may be smarter than me and still not impressed, but I'd try the latest models and play around with them. If you aren't impressed yet, I'd bet money you will be within 3 years max (likely much earlier).

> At what point would you be impressed by a human being if you asked it to help you with a task every 6 months from birth until it was 30 years old?

In this context, never. Especially because the parent knows you will always ask 2+2 and can just teach the child to say “four” as their first and only word. You’ll be on to them, too.

  • > In this context, never. Especially because the parent knows you will always ask 2+2 and can just teach the child to say “four” as their first and only word. You’ll be on to them, too.

    That rests on the assumption that you'll always only ask it "what's 2+2?" The keywords are "always" and "you".

    In aggregate, the set of questions will continuously expand, since a non-zero percentage of people will keep asking new ones, and the LLMs will continue to be trained to fill in the last 20%.

    Even under the best interpretation, this is detractors continuously moving the goalposts, because the last 20% will never be filled: new tasks will continuously be found, and critics will point to them as "oh, see, they can't do that". By the time the LLMs can do those tasks, the goalposts will have moved again, and the critics will continue to be hypocrites.

    ------

    > > At what point would you be impressed by a human being if you asked it to help you with a task every 6 months from birth until it was 30 years old?

    Taking GP's question seriously:

    When a task consisting of more than 20 non-decomposable (atomic) sub-tasks is completed at more than one standard deviation above the human average on that task. (much more likely)

    OR

    When an advancement is made in a field by that person. (statistically much rarer)

  • To be clear, I’m just saying the analogy isn’t great, not that one can never be impressed by an LLM (or a person for that matter)!