Comment by bigstrat2003

14 hours ago

> Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, make decisions based on research, literature, and also the state of our projects better than me.

That is just not true, assuming you have a modicum of competence (which I assume you do). AIs suck at all these tasks; they are not even as good as an inexperienced human.

For all we know, one of you could be using a Nokia 3310 and the other a workstation PC, but you both just say "this computer is better than that computer".

There are a ton of models out there, run in a ton of different ways, usable with different harnesses, and people use different workflows. There are just so many variables involved that I don't think it's either fair or accurate for anyone to claim "this is obviously better" or "this is obviously impossible".

I've been in situations where I banged my head against some hard-to-find bug for days, then put "AI" (but which one? No one knows) on it and it solved the problem in 20 minutes. I've also asked "AI" to do trivial work that it still somehow fucked up, even though I could probably have asked a non-programmer friend to do it and they'd have managed.

The variance is great, and the fact that system/developer/user prompts matter a lot for the responses you get makes it even harder to fairly compare things like this without having the actual chat logs in front of you.

  • > The variance is great

    This strikes me as a very important thing to reflect on. When the automobile was invented, was the apparent benefit so incredibly variable?

    • > was the apparent benefit so incredibly variable?

      Yes, lots of people were very vocally against horseless carriages, as cars were called at the time. Safety and public-nuisance concerns were widespread: the cars were very noisy, fast, smoky, and unreliable. Old newspapers are filled with opinions about this, from people afraid of horseless carriages spooking others' horses, and so on. The UK restricted the adoption of cars at one point, and one canton in Switzerland even banned cars for a couple of decades.

      Horseless carriages were commonly ridiculed as being just for "reckless rich hobbyists" and the like.

      I think the major difference is that cars produced immediate, visible externalities, so it was easy for opposition to focus on public safety in public spaces. In contrast, AI's externalities are less physically visible, although they are as important as, or maybe even more important than, the ones cars introduced.

    • Is this a trick question? Yes, it was. A horse could go over almost any terrain, while a car could only go over very specific terrain designed for it. We had to terraform the world to make the automobile so beneficial, and it turned out that this terraforming had many unintended consequences. It's actually a pretty apt comparison to LLMs.

LLMs generate the most likely code given the problem they're presented with and everything they've been trained on; they don't actually understand how (or even if) it works. I only ever get away with that when I'm writing a parser.
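
As a minimal sketch of what "generate the most likely code" means mechanically (using Hugging Face transformers, with "gpt2" purely as a small stand-in for any causal LM):

```python
# Greedy next-token generation: the entire mechanism behind "most likely code".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("def parse_csv_line(line):", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(40):
        logits = model(ids).logits           # a score for every possible next token
        next_id = logits[0, -1].argmax()     # greedily take the single most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# Nothing here runs, type-checks, or verifies the generated code; the loop only
# extends the text with statistically plausible tokens.
```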

  • > they don't actually understand how

    but if it empirically works, does it matter if the "intelligence" doesn't "understand" it?

    Does a chess engine "understand" the moves it makes?
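
    For what it's worth, here is a toy sketch of a chess engine's core; `evaluate`, `legal_moves`, and `apply_move` are hypothetical placeholder hooks, not any real engine's API. The point is that it's a maximizing search over a numeric score, with nothing that resembles "understanding":

    ```python
    # Toy negamax: a chess engine's core is search plus a numeric evaluation.
    # Assumes evaluate() scores the position from the side to move's perspective.
    def negamax(pos, depth, evaluate, legal_moves, apply_move):
        moves = legal_moves(pos)
        if depth == 0 or not moves:
            return evaluate(pos)  # just a number, e.g. material count
        # best score for the side to move = opponent's best score, negated
        return max(-negamax(apply_move(pos, m), depth - 1,
                            evaluate, legal_moves, apply_move)
                   for m in moves)
    ```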

    • If it empirically works, then sure. If instead every single solution it provides beyond a few trivial lines falls somewhere between "just a little bit off" and "relies entirely on core library functionality that doesn't actually exist" then I'd say it does matter and it's only slightly better than an opaque box that spouts random nonsense (which will soon include ads).
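
      As a hypothetical illustration of that failure mode (the `read_json` method below is deliberately invented; it's exactly the kind of plausible-looking API these tools produce):

      ```python
      # Hypothetical hallucination: pathlib.Path has read_text() and read_bytes(),
      # but no read_json(). Code like this fails only when you actually run it.
      from pathlib import Path

      config = Path("settings.json").read_json()  # AttributeError at runtime
      ```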

      3 replies →

  • It matters if AGI is the goal. If it remains a tool to make workers more productive, then it doesn't need to truly understand, since the humans using the tools understand. I'm of the opinion that AI should have stood for Augmented (Human) Intelligence outside of science fiction. I believe that's what early pioneers like Douglas Engelbart thought. It's clearly what Steve Jobs and Alan Kay thought computing was for.

      2 replies →

Depends on how he defines "better". If he uses the word "better" to mean "good enough not to fail immediately, and done in 1/10th of the time", then he's correct.