Comment by bambax

6 months ago

The problem with LLMs is when they're used for creativity or for thinking.

Just because LLMs are indeed useful in some (even many!) contexts, including coding, especially to either get something started or, like in your example, to transcode an existing code base to another platform, doesn't mean they will change everything.

It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

More like AI is the new VBA. Same promise: everyone can code! Comparable excitement -- although the hype machine is orders of magnitude more efficient today than it was then.

I don't know about VBA, but spreadsheets actually delivered (to a large extent) on the promise that 'everyone can write simple programs'. So much so that people don't see creating a spreadsheet as coding.

Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

  • Right. Spreadsheets already delivered on their promise (and then some) decades ago, and the irony is, many people - especially software engineers - still don't see it.

    > Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

    That is still the refrain of corporate IT. I see plenty of comments, both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code or asking their software department to make a tool for them.

    I guess those who do get it end up working on SaaS products targeting the "shadow IT" market :).

    • >> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

      > That is still the refrain of corporate IT. I see plenty of comments, both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code or asking their software department to make a tool for them.

      In retrospect, this is also a great description of why two of my employers ran low on investors' interest.

    • Software engineers definitely do understand that spreadsheets are widely used and useful. It's just that we also see the awful downsides of them - like no version control, being proprietary, and having to type obscure incantations into tiny cells - and realise that actual coding is just better.

      To bring this back on topic, software engineers see AI being a better search tool or a code suggestion tool on the one hand, but also having downsides (hallucinating, used by people to generate large amounts of slop that humans then have to sift through).

      6 replies →

  • People know which ingredients to use, the ratios, how long to bake and cook them, but the design of the kitchen prevents them from cooking the meal? Professional cooks debate which gas tube to use with which adapter and how to organize all the adapters according to ISO standards while the various tubes lie on the floor all over the building. The stove switches off if you try to use the wrong brand of pots. The cupboard has a retina scanner. Eventually people go to the back of the garden and make a campfire. There is no fridge there and no way to wash dishes. They are even using the wrong utensils. The horror!

> It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

I personally agree with Andrew Ng here (and I'd literally arrived at the exact same formulation before becoming aware of Ng's words).

I take "new electricity" to mean that it'll touch everything people do and become part of every endeavor in some shape or form. Much like electricity. That doesn't mean taking over literally everything; there are plenty of things we don't use electricity for, because alternatives - usually much older alternatives - are still better.

There are still plenty of internal combustion engines on the ground, in the seas and in the skies, and many of them (mostly at the extremely light and extremely heavy ends of the spectrum) are not going to be replaced by electric motors any time soon. Plenty of manufacturing and construction is still done by means of hydraulic and pneumatic power. We also sometimes sidestep electricity for heating purposes by going straight from sunlight to heat. Etc.

But even there, electricity-based technology is present in some form. The engine may be this humongous diesel-burning colossus, built from heat, metal, and a lot of pneumatics, positioned and held in place by hydraulics - but all the sensors on it are electric, where in the past some would have been hydraulic and the rest wouldn't even exist; it's controlled and operated by an electricity-based computing network; it's been designed on computers, and so on.

In this sense, I think "AI is the new electricity" is believable. It's a qualitatively new approach to computing that's directly or indirectly applicable everywhere, and that people are already trying to apply to literally everything[0]. And, much like with electricity, time and economics will tell which of those applications make sense, which were dead ends, and which were plain dumb in retrospect.

--

[0] - And they really did try to stuff electricity everywhere back when it was the hot new thing. Same with nuclear energy a few decades later. We still laugh at how people 100 years ago imagined the future would look... in between crying that we got short-changed by reality.

  • AI is not a fundamental physical element. AI is mostly closed and controlled by people who will inevitably use it to further their power and centralize wealth and control. We acted with this in mind to make electricity a publicly controlled service. There is absolutely no intention, nor the political strength, to do this with AI in the West.

    • There are a few levels to this:

      • That it is software means any given model could easily be ordered nationalised, or whatever.

      • Everyone quickly copying OpenAI, and more recently DeepSeek, showed that once people know what kind of things actually work, it's not too hard to replicate them.

      • We've only got a handful of ideas about how to align* AI with any specific goal or value, and there are a lot of ways it can go wrong. So even if every model were put into public ownership, it's not going to help, not yet.

      That said, if the goal is to give everyone access to an AI that demands 375 W/capita 24/7, that means the new servers would double the global demand for electricity, with all that entails (rough arithmetic in the sketch below).

      * Last I heard (a while back now so may have changed): if you have two models, there isn't even a way to rank them as more-or-less aligned vs. anything. Despite all the active research in this area, we're all just vibing alignment, corporate interests included.
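
      For scale, here's a minimal back-of-envelope sketch of that doubling claim. The 375 W/capita figure comes from the comment above; the ~8 billion population is my assumed round number:

          # Back-of-envelope: the 375 W/capita figure is from the comment
          # above; the 8 billion population is an assumed round number.
          population = 8e9             # people
          per_capita_draw = 375        # watts, continuous (24/7)

          ai_demand_tw = population * per_capita_draw / 1e12
          print(ai_demand_tw)          # -> 3.0, i.e. about 3 TW

          # Global electricity supply is itself roughly 3 TW (the figure
          # cited further down the thread), so ~3 TW of always-on AI
          # servers would indeed roughly double it.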

      2 replies →

    • Electricity here is meant as a technology (or a set of technologies) exploiting a particular physical phenomenon - not the phenomenon itself.

      (If it were the latter, then you could argue everything uses electricity if it relies in any way on matter being solid, because AFAIK the furthest we got on the question of "why I don't fall through the chair I'm sitting on" is... "electromagnetism".)

      1 reply →

> everyone can code!

I work directly with marketers, and even if you give them something like n8n, they find it hard to be precise. Programming teaches you a "precise mindset" that people don't have when they aren't really thinking about tech professionally.

I wonder if seasoned UX designers can code now. They do think professionally about software. I wonder if their thinking is at a deep enough granularity that they can simply use natural language to get something to work.

  • Can an LLM detect a lack of precision and point it out to you?

    • Sometimes, yes. Reliably, no.

      LLMs don't have enough of a model of the world to understand anything. There was a paper floating around recently about how someone trained an ML system on orbital dynamics. The result was a system that could calculate orbits correctly, but it completely failed to extract the underlying - simple - math. Instead it basically frankensteined together its own system of epicycles which solved a very narrow range of problems but lacked any generality.

      Coding with LLMs has the same problems. Sometimes you get lucky, sometimes you don't. And if you strap on an emulator and test rig and allow the machine to flail around inside it, sometimes working code falls out.

      But there's no abstracted model of software development as a process in there, either in theory or in practice. And no understanding of vague goals with constraints and requirements that can be inferred creatively from outside the training data.

      1 reply →

    • An LLM can even ignore a lack of precision and just guess what you wanted, usually correctly, unless what you want is very unusual.

    • It can! Though you might need to ask for it; otherwise it may take what it thinks you mean and run off with it, and you'll discover the lack of precision only later, when the LLM gets confused or the result is nothing like what you actually expected.

  • Our UX designers have been using Windsurf to prototype things they started in Figma. They seem pretty happy with it. Of course there's a big step to getting it production-ready, but it really smooths the conversation with engineering.

    • Ah, so while they can't make fully fledged products, they deepen their skills by making high-fidelity prototypes more quickly.

      That's cool!

While I'd agree with your first line:

> The problem with LLMs is when they're used for creativity or for thinking.

And while I also agree that it's currently closer to "AI is the new VBA" because of the current domain in which consumer AI* is most useful.

Despite that, I'd also aver that being useful in simply "many" contexts will make AI "the new electricity". Electricity itself is (or recently was) only about 15% of global primary power, about 3 TW out of about 20 TW: https://en.wikipedia.org/wiki/World_energy_supply_and_consum...

Are LLMs 15% of all labour? Not just coding, but overall? No. The economic impact would be directly noticeable if it were that much.

Currently though, I agree. New VBA. Or new smartphone, in that we ~all have and use them, while society as a whole simultaneously cringes a bit at this.

* Narrower AI such as AlphaFold etc. would, in this analogy, be more like a Steam Age factory which had a massive custom steam engine in the middle distributing motive power to the equipment directly: it's fine at what it does, but you have to make it specifically for your goal and can't easily adapt it for something else later.

LLMs are helpful for creativity and thinking when you run out of ideas.

  • I sometimes feel that a lot of people bringing up the topic of creativity have never spent much time thinking, studying, and self-reflecting on what "creativity" actually is. It's a complex topic, and one that's mixed up with many other complex topics ("originality", "intellectual property", "aesthetic value", "art vs engineering", etc.).

    You see a lot of motte-and-bailey arguments in this discussion as people shift (often subconsciously) between different definitions of key terms and different historical perspectives.

    I'd recommend trying to gain at least a passing familiarity with art history and the social history of art and design. Reading a bit of Edward de Bono and Douglas Hofstadter isn't a bad shout either (although it's many years since I've read the former, so I can't guarantee it will stand up as well as my teenage self thought it did).