
Comment by mitthrowaway2

9 days ago

What's an example of an intellectual task that you don't think AI will be capable of by 2027?

It won't be able to write a compelling novel, build a software system that solves a real-world problem, operate heavy machinery, create a sprite sheet or 3D models, design a building, or teach.

Long-term planning and execution, and operating in the physical world, are not within reach. Slight variations of known problems should be possible (as long as the size of the solution is small enough).

programming

  • Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?

    • So I think there's an assumption you've made here, that the models are currently "60-80% as good as human programmers".

      If you look at code generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% as good as what domain experts (programmers) produce when steering the models.

      I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.

      Will an LLM read a person's mind about what they want to build better than they can communicate it?

      That's already what recommender systems (like the TikTok algorithm) do.

      But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?

      I think that's where there's a gap in (basically) belief systems of the future.

      If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.

      This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.

    • It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value.

      It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP, and it's almost never capable of making a meaningful change to that MVP.

      If your experiences have been different that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.


    • Try this: launch Cursor.

      Type: print all prime numbers which are divisible by 3 up to 1M

      The result is that it will implement a sieve. There's no need for that: the only prime divisible by 3 is 3 itself.
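
      For contrast, a minimal sketch (Python; hypothetical, not Cursor's literal output) of the number-theoretic shortcut a human would apply instead of a sieve:

          # The only prime divisible by 3 is 3 itself: any larger
          # multiple of 3 has 3 as a proper divisor, so it is composite.
          def primes_divisible_by_3(limit: int) -> list[int]:
              return [3] if limit >= 3 else []

          print(primes_divisible_by_3(1_000_000))  # -> [3]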


    • Because we still haven't figured out fusion, though it's been promised for decades. Why would everything that's been promised by people with highly vested interests pan out any differently?

      One is inherently a more challenging physics problem.

  • Can you phrase this in a concrete way, so that in 2027 we can all agree whether it's true or false, rather than circling a "no true Scotsman" argument?

    • Good question. I tried to phrase a concrete-enough prediction 3.5 years ago, for 5 years out at the time: https://news.ycombinator.com/item?id=29020401

      It was surpassed around the beginning of this year, so you'll need to come up with a new one for 2027. Note that the other opinions in that older HN thread almost all expected less.