Comment by dataviz1000

21 hours ago

Several years ago I decided that a good litmus test for mastery of coding is hitting a problem whose solution you can't find through internet search, and whose well-written, esoteric StackOverflow question goes unanswered. For a while, I would post a question and then answer it myself after solving the problem, for posterity (or for the AI bots). I always loved getting the "I've been working on this for 3 days and you saved my life" comments.

I've been working on a challenging problem all week, and all the AI copilot models have been worthless at helping me. Mastery in coding is being alone, when neither other people nor AI copilots can help you, and you have to dig deep into generalization, synthesis, and creativity.

(I thought to myself: at least it will be a little while longer before I'm replaced by AI coding agents.)

Your post misses the fact that 99% of programming is repetitive plumbing, and that the overwhelming majority of developers, even Ivy League graduates, suck at coding and problem solving.

Thus, for the overwhelming majority of problems out there, AI is a great productivity tool if you know how to use it. And it's a boost even for those who aren't good at the craft.

This whole narrative of "okay, but it can't replace me in this or that situation" honestly lands somewhere between an obvious touché (why would you think AI would replace, rather than empower, those who know their craft?) and stale Luddism.

  • > 99% of programming is repetitive plumbing

    Even IF that were true (and I'd argue that it is NOT -- it's the people who believe that, and act accordingly, who produce the tangled messes of spiderweb code that are utterly opaque to public searches and AI analysis, the supposed "1%"), the point stands: if even as little as 1% of the code I interacted with required really deep thought and analysis, it could easily balloon to take up as much time as the other "99%".

    Oh, and Ned Ludd was right, by the way. Weavers WERE replaced by the powered loom. It is in the interest of capital to replace you if it is able to, not to complement you; and furthermore, the teeth of capital have gotten sharper over time, and its appetite more voracious.

    • > Even IF that were true (and I'd argue that it is NOT)

      Can you share what these "hard problems" are that > 1% of developers are working on?

    • Capital is also willing to accept vastly lower quality and burden the remaining labor with more toil in exchange for even lower costs. Velocity will rise, quality will fall, and toil will increase, leading to more burnout, but there will be more expendable bodies to cycle through the slop-cleanup farm.

  • I've started to come to the conclusion that only greenfield projects consist of repetitive plumbing. Legacy software is like plumbing if all the pipes were tied into a knot. The edge cases, ambiguous naming, hacky solutions, etc. all make for a miserable experience, both for humans and AIs.

Curious to know what those challenging programming problems are. Can you share some examples?

They're remarkably useless on stuff they've seen but that wasn't up-weighted in the training set. Even the best ones (Opus 4 running hot; Qwen and K2 will surprise you fairly often) are a net liability in some obscure areas.

Probably the starkest example of this is build system stuff: it's really obvious which ones have seen a bunch of `nixpkgs`, and even the best ones seem to really struggle with Bazel and sometimes CMake!
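
To make that concrete, here's a minimal sketch of the kind of declarative build config in question: a hypothetical Bazel BUILD file (target and file names are made up for illustration). Nothing in it is exotic, but the conventions (label syntax, visibility, implicit toolchain resolution) live mostly outside the file itself, which is plausibly why models that haven't seen much of it flounder:

```
# Hypothetical minimal BUILD file; target and file names are invented.
# cc_library and cc_test are standard Bazel rules.
cc_library(
    name = "parser",
    srcs = ["parser.cc"],
    hdrs = ["parser.h"],
    visibility = ["//visibility:public"],  # label-based visibility convention
)

cc_test(
    name = "parser_test",
    srcs = ["parser_test.cc"],
    deps = [":parser"],  # ":parser" is a same-package label reference
)
```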

The absolute prestige high-end ones, running flat out and burning 100+ dollars a day, are a lift over pre-SEO Google/SO, I think... but it's not like a blowout vs. a working search index. Back when all the source, all the docs, and all the troubleshooting for any topic on the whole Internet were above the fold on Google? It was kinda like this: type a question into the magic box and working-ish code pops out. Same at a glory-days FAANG with the internal mega-grep.

I think there's a whole cohort or two who believe that "type into the magic box and code comes out" is new. It's not new; we just didn't have it for 5-10 years.

I have similar issues with support from companies that heavily push AI and self-serve models and make human support hard to reach. I'm very accomplished and highly capable; if I feel the need to turn to support, the chances the solution is in a KB are very slim, and the same goes for AI. It'll be a very specific situation with a very specific need.

  • There are a lot of internal KBs that companies keep to themselves in their ticketing systems. It would be interesting to estimate how much good data is in there that could one day be used to train more advanced (or more niche and domain-specific) AI models.

This has been my thought for a long time: unless there is some breakthrough in AI algorithms, I feel like we are going to hit a "creativity wall" for coding (and some other tasks).

  • Any reason to think that the wall will be below human level?

    • Of the thousands of responses I have read from the top LLMs over the last couple of years, I have never seen one that was creative. I've thrown writing, coding, problem-solving, and mathematical questions at them, and whatnot.

      It's somewhat easier to perceive the lack of creativity with Stable Diffusion. I'm not talking about the missing-limb or extra-finger glitches. With a bit of experience looking through generated images, your brain eventually perceives the absolute absence of creativity; an artist can probably spot it without any prior exposure to generative-AI pieces. With LLMs it takes a bit longer.

      Anecdotal and baseless, I guess. But papers have been published: some researchers in the sciences couldn't get the best LLMs to solve any unsolved problem. I recently came across a paper stating bluntly that none of the LLMs tested were able to conceptualize, or to derive laws that generalize, e.g., formulas.

      We are being duped. It wouldn't help sell $200 monthly subscriptions (soon even more) if marketers admitted there is absolutely zero reasoning going on inside these stochastic machines on steroids.

      I deeply wish the circus would end soon, so that we can start focusing on what LLMs are genuinely well suited to do better and faster than humans.

      Creative it is not.
