
Comment by goatlover

20 days ago

Computers were supposed to be bicycles for the mind, but increasingly we want them to think for us.

Well, I see an e-bike analogy around the corner. People don't want to invest the energy anymore, now that they can buy expensive batteries to help with the pedaling. That's pretty much human nature.

They were, but that vision was killed as soon as the phrase you quote was spoken.

LLMs are, in fact, one of the few products in the past decades that - at least for now - align with this vision. That's because they empower the end users directly. Anyone can just go to chatgpt.com or claude.ai to access a tool that will understand their problem, no matter how clumsily formulated, and solve it, or teach them how to solve it, or otherwise address it in a useful fashion. That's a pure and quite general force multiplier.

But don't you worry: plenty of corporations and countless startups are hard at work, as with all computing before, to strip down the bicycle and offer you Uber and theme-park rides for your mind.

  • > LLMs are, in fact, one of the few products in the past decades that - at least for now - align with this vision. That's because they empower the end users directly.

    Oh BULLSHIT. Computer users have been empowered since the very first programming languages were invented. They simply chose not to engage with them.

    • Who says anything about programming languages? Unless all a bicycle is to you is servicing it?

      Even if it were, the last time what you said was true was somewhere in the 80s, maybe the early 90s. Afterwards, "programming" was solidly the domain of professionals, not regular users. I don't know about MacOS, but Windows didn't even ship with anything resembling a programming environment until the 2010s.

      Also, time and again with technology, the users didn't choose shit. Technology is thrust at them; it's first and foremost a supplier-driven phenomenon. It's the vendors who chose to gradually remove any ability for customization and end-user automation from software and devices. Initially this was under the guise of UI/UX - simplify everything, avoid confusion (and avoid making users engage with their brains). Nowadays, software is as basic, dumb, and functionality-free as it can possibly be, so the excuse has shifted to security - everything an end user can do, an attacker can do, so let's take away every possible use that isn't authorized by the application and OS vendors.

      What makes LLMs refreshing is that, for now, they're fully general. The main chat apps don't limit what you can talk about (beyond the usual ass-covering corporate prudishness) - hell, you can use them to work around the bullshit limitations regular software has in place to stop you from harming vendors' profits. But again, it's only a matter of time - users will get disenfranchised again as near-raw access to LLMs gets replaced by "AI apps".

      And what do I mean by limitations? Think of Copilot in Microsoft Office. If you've used it in the past year, you definitely know its limitations. A monkey could hook up GPT-4 to VBA and get a more functional Copilot than what Microsoft gave us. But it's not that they can't make a powerful assistant - that's the easy part. The challenge they took on is making the weakest possible assistant that still does anything useful at all. That's the prevailing attitude in the software industry, and it has been for a good two decades now.
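
      To illustrate what I mean by "a monkey could hook it up", here's a minimal sketch, assuming the OpenAI Python client; the prompt and the draft_vba helper are made up for illustration, not anything Microsoft actually ships:

          # Minimal sketch: ask GPT-4 for a VBA macro to paste into the Office editor.
          # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
          from openai import OpenAI

          client = OpenAI()

          def draft_vba(task: str) -> str:
              """Return a complete Excel VBA macro for the described task."""
              response = client.chat.completions.create(
                  model="gpt-4",
                  messages=[
                      {"role": "system",
                       "content": "You write complete, runnable Excel VBA macros."},
                      {"role": "user", "content": task},
                  ],
              )
              return response.choices[0].message.content

          print(draft_vba("Highlight every row where the date in column C is past due."))

      Paste the returned macro into the VBA editor and you already have more end-user automation than Copilot exposes.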


    • Programming used to be hard, and not everybody is as smart as you are. Things change when difficult things become easy. Engaging with an LLM to have it generate code for you is so far removed from taking a physical book, looking in its index, then hoping it talks about your particular problem, that it's not bullshit.


Full Self-Driving Teslas for the mind

  • Lol

    I like the implication that they might drive you into the median or into the side of a semi truck. Very apt analogy - we built it because we could, without asking whether we should.

    • And just like with the Tesla thing, accidents mostly come from people relying too heavily on the raw output: not checking or supervising it enough, and overestimating the system's reliability and consistency.

We have robots do physical chores for us: washing machines, robo-vacs, etc., so why can't we have robots that do mental chores for us? For most of us, our jobs aren't a pleasure but a chore, necessary to earn money to pay rent. How many factory workers do you think enjoy bolting the same car parts to a car over and over again until retirement?

So if I can outsource the mundane, annoying, and repetitive parts of SW development (like typing out the code) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, architecture, etc.), what's wrong with that?

If the end product is good and fulfills the customers' needs, who cares whether a large part of it was written by a machine and not by a human?

I, too, wish we could go back to the days when we coded in assembly instead of, say, JavaScript, but that's not gonna happen professionally for 99% of jobs: you either use JS to ship quickly, or you get run over by the companies who use JS while you write assembly. ML-assisted coding will be the next step.

  • > We have robots do physical chores for us: washing machines, robo-vacs, etc., so why can't we have robots that do mental chores for us?

    Sure, we can! That's in some sense what computers are. It's nice that they can multiply two integers far faster than you can. Handing off that mental chore to the computer lets you do your job better in every way.

    The difference (and yes, I know that I'm perhaps falling into the trap of "but this time it's different!") is that AI models are very often used in a completely different capacity. You inspect the plates, load up the dishwasher, run it, and inspect the results. You don't just wave your hand over the kitchen and say "this dirty, do fix", and then blindly trust you'll have clean cutlery in a few hours.

    Moreover, the menial tasks and assembly-line work that you describe are all repetitive. Most interesting coding isn't (since code has zero duplication cost, duplicate work is pointless – outside of the obvious things like fun and learning, but you want to keep those out of this discussion anyway).

    > So if I can outsource the mundane, annoying, and repetitive parts of SW development (like typing out the code) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, architecture, etc.), what's wrong with that?

    Nothing is wrong with that. Except you'll still need to inspect the AI's output, and in order to do that, you'll need a good understanding of the problem and of how the AI solved it. Maybe you do. That's excellent! This discussion is lamenting that, seemingly, more and more people don't.

  • There's a middle ground between bolting the same parts all day and completely avoiding anything difficult. Both body and mind atrophy when they aren't used, and using them necessarily includes some repetition.

  • > So if I can outsource the mundane, annoying and repetitive parts of SW development (like coding) to a machine, so that I can focus on the parts I enjoy (debugging, requirements gathering, customer interaction, etc.), what's wrong with that?

    That's ok when you already understand programming and can guide the codegen, stepping in to correct it when it generates bullshit. But you don't get to that level without learning programming yourself. Education is built from the ground up towards higher and higher levels of abstraction. You don't get to skip learning arithmetic on your way to learning quantum physics just because numpy will do all your arithmetic once you get there. In other words, it's ok for people who don't like cooking to order takeout, but you don't become a professional cook that way.

    • > Education is built from the ground up towards higher and higher levels of abstraction.

      How many people who write SW professionally worldwide know everything about the OS underneath - the syscalls, disassembly, memory allocation, CPU architecture, network layers, internet routing, cloud and virtualization, etc.?

      Most SW jobs are just routine plumbing: connecting one FOSS pipe to another in whatever way works for you until you get the desired result. That result is often unoptimized slop, but if it serves the business use case and makes money, nobody except the Kool-Aid-drinking stickler developers cares that it's slop. It's not rocket science that requires you to know assembly, CPU architectures, or linear algebra and to optimize every single bit to perfection; low cost and short time to market matter more.

      You can try to educate people about everything, but not all jobs are gonna require you to know everything. In fact, jobs are becoming more and more specialized: you'll have one HW expert, one networking expert, one compiler expert, one TypeScript expert, one GoLang expert, etc.

  • LLMs are narrative machines, not analysis machines.

    The article isn't about code, yet on HN we default to code all the time.

> but increasingly we want them to think for us

Which is understandable. All societies are constrained by a lack of experts and intelligence. Think about how relatively inaccessible healthcare is, even in rich countries.

Unfortunately, they got batteries for greater mobility and are now more like e-bikes for the mind.