Comment by sosomoxie

20 days ago

I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.

I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.

> we're definitely in a golden age of computing.

Certainly not. Computers are still magic, but much of that magic is now controlled and being restricted by someone other than you.

Today most people's only computer is a cell phone, which is heavily locked down and designed for media consumption and to collect and give away every scrap of their personal/private data. Most people's desktop computers aren't much better. They are continuously used by others against the interests of the people who paid for them, sometimes explicitly keeping them from doing things they want or limiting what they can install.

People are increasingly ignorant of how computers work in ways that were never possible when you had to understand them to use them. SoCs mean that users, and even the operating system they use, aren't fully aware of what the devices are doing.

People have lost control of the computers they paid for and their own data. They now have to beg a small number of companies for anything they want (including their own data on the cloud). We're heading toward a future where you'll need to submit to a retinal scan just to view a website.

Computing today is more adversarial, restricted, opaque, centralized, controlled, and monitored than it has been in a very long time. "My computer talks to me" is not making up for that.

  • What you're saying might be true, but it's also a choice to delegate responsibility to someone other than yourself. I'm not saying that the adversarial state of computing is ok, just that most people don't care, or don't like the alternatives.

    Even as someone concerned with the issues you mention, the shift happening now feels pretty magical to me. I can only imagine how non-technical people must feel.

    • People definitely care about things that a more open platform brings you, but today's open platforms have really bad downsides. The thing is, those downsides are artificial. They were manufactured by the corporations that prefer to be in control of our devices. It's not the natural state of things.

      I often get asked by friends and family "can I get rid of annoyance X" or "can I have feature Y" on their Android phones, usually because they see that I've done it on my phone [0]. The answer is always "yes, I can set that up for you, but this will take an hour, I need to wipe all your data and a bunch of your apps will stop working".

      There is no reason it should be like that. That was a choice by the manufacturers. They developed these DRM features and actively market them to developers - to the point where I can't submit an update to my little bus app without getting a prompt to add SafetyNet to it. They even somehow convinced pentesters to put "no cert pinning, root check and remote attestation" into their reports, so bank and government apps are the worst offenders.

      It's not like people decided they prefer closed to open. They prefer working to non-working. And open platforms were broken intentionally by the developers of the closed ones.

      It's like saying Americans all love their cars and simply decided not to use public transport. No, their public transport was crippled to the point of uselessness and their neighbourhoods were built in a way that makes public transport unfeasible. Cars work for them and trains don't. This was not their choice and it's painfully obvious when you see them go literally anywhere else on the planet and be amazed at how great trains are.

      [0] Things like: global adblock, removing bloatware, floating windows or splitscreen, miracast, slide for brightness/volume, modded apps, lockscreen gestures, app instances, working shared clipboard, NFC UID emulation, automatic tethering, audio EQ...

    • Sure it's technically always a choice, but because society exists, some options are dramatically more plausible than others.

      For example, say phones become more and more locked down and invasive. Technically you can choose not to have a phone, but how are you meant to function in today's society without a phone? Basically everything of importance assumes you have a phone. Technically you could make your own phone, I guess, but that's very difficult.

      I don't think you can reasonably make the argument that because everyone can technically make their own choices, we should be OK with whatever the status quo in society happens to be.

  • Most people's only computer??? MOST people in the 80's had never, personally, touched a computer other than maybe an ATM. The fact that most people today don't care about a personal computing device in terms of what it does or how it does it isn't really a surprise.

    Most people don't care how the toaster or microwave work, only that they do. Same for the show-me-movies boxes in the living rooms. And, really, most people shouldn't have to care.

    This isn't to dismiss privacy concerns or even right to own/repair... let alone "free" internet. It's just that most people shouldn't have to care about most things.

  • Models you can run on your own (expensive) computer are just a year behind the SOTA. Linux exists. Why are you so pessimistic?

    • Typical HN comment. They’re so in the weeds of edge case 1% concerns they can’t see the golden age around them.

      Most people living through golden ages might not know it. Many workers in the Industrial Revolution saw a decline in relative wages. Many in the Roman Empire were enslaved or impoverished. That doesn’t mean history doesn’t see these as golden ages, where a golden age is defined loosely as a broad period of enhanced prosperity and productivity for a group of people.

      For all its downsides, pointed out amply above, the golden age of computing started 100 years ago and hasn’t ceased yet.

The golden age for me is any period where you have fully documented systems.

Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.

And software that’s open and can be modified.

Instead what we have is:

- AI models which are little black boxes, beyond our ability to fully reason about.

- perpetual subscription services for the same software we used to “own”.

- hardware that is completely undocumented to all but a small few who are granted an NDA beforehand

- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”

- and distributed systems that have become centralised, such as GitHub, CloudFlare, AWS, and so on and so forth.

The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a very special breakthrough, but it’s not enough by itself to overlook all the other problems with modern computing.

  • My take on the difference between now and then is “effort”. All those things mentioned above are now effortless, but the door to “effort” remains open as it always has been. Take the first point for example: those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself. We used to purchase software, or write it ourselves, before it became effortless to get it for free in exchange for ads, and then a subscription when we grew tired of ads or were tricked into a bait and switch. You can also argue that it has never been easier to write your own software than it is today.

    Hostile operating systems. Take the effort to switch to Linux.

    Undocumented hardware? Well, there is far more open source hardware out there today, and back in the day it was fun to reverse engineer hardware. Now we just expect it to be open because we can’t be bothered to put in the effort anymore.

    Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.

    • I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level. But we cannot know how well it will perform at inference until we test it. So much of AI research is just trial and error with different dials repeatedly tweaked until we get something desirable. So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cypher. Or any other hand-programmed algorithm.

      I also run Linux. But that doesn’t change how the two major platforms behave and that, as software developers, we have to support those platforms.

      Open source hardware is great but it’s not in the same league of price and performance as proprietary hardware.

      Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.

  • Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.

    If GenAI could only write documentation it would still be a game changer.

    • But it writes mostly useless documentation, which takes time to read and decipher.

      And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it happened recently to a widely used B2B SaaS).

    • The problem with documentation I described wasn’t about the effort of writing it. It was that modern chipsets are trade secrets.

      When you bought a computer in the 80s, you’d get a technical manual about the internal workings of the hardware. In some cases even going as far as detailing what the registers did on their graphics chipset or CPU.

      GenAI wouldn’t help here for modern hardware because GenAI doesn’t have access to those specifications. And if it did, then it would already be documented so we wouldn’t need GenAI to write it ;)

  • > The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.

    I agree that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source code (which can help if some parts of the documentation are unclear, but can also be used to make your own modifications if needed).) (In some cases we have some of this already, but not entirely. Not all hardware and software has the problems you list, although they are too common now. Making a better computer will not prevent such problematic things on other computers, and will not entirely prevent such problems on the new computer design either, but it would help a bit, especially if it is actually designed well rather than badly.)

  • Actually this makes me think of an interesting point. We DO have too many layers of software... and rebuilding is always so cost prohibitive.

    Maybe an interesting route is using LLMs to flatten/simplify... so we can dig out from some of the complexity.

    • I’ve heard this argument made before and it’s the only side of AI software development that excites me.

      Using AI to write yet another run-of-the-mill web service in the same bloated frameworks and programming languages designed for the lowest common denominator of developers really doesn’t feel like it’s taking advantage of the leap in capabilities that AI brings.

      But using AI to write native applications in low level languages, built for performance and memory utilisation, does at least feel like we are getting some actual quality-of-life savings in exchange for all those fossil fuels burnt crunching LLM tokens.

  • > perpetual subscription services for the same software we used to “own”.

    In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?

    If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.

    • People have been building alternatives to MS Office, Adobe Creative Suite, and so on and so forth for literally decades and yet they’re still the de facto standard.

      Turns out it’s a lot harder to disrupt than it sounds.

    • Dropbox may not be a great example, either. It's storage and bandwidth, and both are expensive, even if the software wasn't being worked on.

      But application software that is, or should be, running locally, I agree. Charge for upgrades, by all means, but not for the privilege of continued use of an old, unmaintained version.

  • Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.

> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before.

Maybe they made us feel magic, but actual magic is the opposite of what I want computers to be. The “magic” for me was that computers were completely scrutable and reason-able, and that you could leverage your reasoning abilities to create interesting things with them, because they were (after some learning effort) scrutable. True magic, on the other hand, is inscrutable, it’s a thing that escapes explanation, that can’t be reasoned about. LLMs are more like that latter magic, and that’s not what I seek in computers.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

I always preferred the Star-Trek-style ship computers that didn’t exhibit personality, that were just neutral and matter-of-fact. Computers with personality tend to be exhausting and annoying. Please let me turn it off. Computers with personality can be entertaining characters in a story, but that doesn’t mean I want them around me as the tools I have to use.

  • > The “magic” for me was that computers were completely scrutable and reason-able

    Yes, and computers were something that gave you powerful freedom. You could make a computer do anything it was physically able to, as long as your mind could keep up. Computers followed logic, they didn't have opinions, and they gave you full control of themselves.

  • I have no idea what everyone is talking about. LLMs are based on relatively simple math, and inference is much easier to learn and customize than, say, the Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's way better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory like old-style BASIC, rather than having to run to Stack Overflow for every new task.
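
The schema idea above can be made concrete without any particular vendor API. A minimal sketch, assuming the model has been prompted to answer with nothing but JSON matching `{"warp_factor": integer}` (the reply string below is made up for illustration, not a real model call):

```python
import json

# Made-up model reply for illustration; a real one would come from
# whatever inference API or local runtime you use, prompted to
# return ONLY this JSON object with no surrounding chatter.
reply = '{"warp_factor": 7}'

def parse_warp_factor(raw: str) -> int:
    """Parse and validate a reply against the {"warp_factor": integer} schema."""
    data = json.loads(raw)       # fails loudly if the model added chatter
    value = data["warp_factor"]  # fails loudly if the key is missing
    # bool is a subclass of int in Python, so reject it explicitly
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError("warp_factor must be an integer")
    return value

print(parse_warp_factor(reply))  # -> 7
```

The point of the strict parse-and-validate step is that it turns fuzzy model output into a value you can program against, exactly as you would with any other untrusted input.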

    • You can’t reason about inference (or training) of LLMs on the semantic level. You can’t predict the output of an LLM for a specific input other than by running it. If you want the output to be different in a specific way, you can’t reason with precision that a particular modification of the input, or of the weights, will achieve the desired change (and only that change) in the output. Instead, it’s like a slot machine that you just have to try running again.

      The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.

One question is how AI will factor into this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20-year-old project? Or will they exacerbate things by writing terrible code with black-hole dependency trees? We're gonna find out.

  • > That's by design. SaaS is a rent seeking business model.

    Not all software now is SaaS, but unfortunately it is too common now.

    > But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.

    Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help, including that some people write programs for older systems which can be emulated, or use simpler, portable C code, etc. There are things that can be done to avoid too many dependencies.

    There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need some extensions, and some might use file names, etc, many programs will work, because it is designed in a simple way that it will work.

  • I’m not sure Go belongs on that list. Otherwise I hear what you’re saying.

    • A large percentage of the code I've written the last 10 years is Go. I think it does somewhat better than the others in some areas, such as relative simplicity and having a robust stdlib, but a lot of this is false security. The simplicity is surface level. The runtime and GC are very complex. And the stdlib being robust means that if you ever have to implement a compiler from scratch, you have to implement all of std.

      All in all I think the end result will be the same. I don't think any of my Go code will survive long term.

We have what I've dreamed of for years: the reverse dictionary.

Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgeable or talk to someone who was. Now it's always available.
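
A crude approximation of what that lookup does can even be written by hand: score each dictionary entry by how many words its definition shares with the meaning you typed. A toy sketch (the three-entry glossary is invented for the example):

```python
# Toy reverse dictionary: given a meaning, return the word whose
# definition shares the most words with the query.
# The mini-glossary below is made up for this illustration.
GLOSSARY = {
    "versatile": "able to adapt to many different functions or uses",
    "ephemeral": "lasting for a very short time",
    "ubiquitous": "present or found everywhere",
}

def reverse_lookup(meaning: str) -> str:
    """Return the glossary word whose definition best overlaps the query."""
    query = set(meaning.lower().split())
    def overlap(item):
        word, definition = item
        return len(query & set(definition.split()))
    return max(GLOSSARY.items(), key=overlap)[0]

print(reverse_lookup("admitting many different uses"))  # -> versatile
```

A real reverse dictionary needs semantic matching (synonyms, paraphrases) rather than literal word overlap, which is exactly the part LLMs and embedding models supply.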

  • > Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgeable or talk to someone who was.

    There was another way: Make one up.

    That is what the people you read from/talked to did before relaying it to you.

    • If you want to establish a new word, you need to make sure that the word also sticks in common use. Otherwise the word will not hold its own meaning. For existing concepts it's much better to use the words that have already been established, because other people can look them up in a dictionary.

  • > Now it's always available.

    And often incorrect! (and occasionally refuses to answer)

    • Is it? I’ve seen AI hallucinations, but they seem to be increasingly rare these days.

      Much of the AI antipathy reminds me of Wikipedia in the early-mid 2000s. I remember feeling amazed with it, but also remember a lot of ranting by skeptics about how anyone could put anything on there, and therefore it was unreliable, not to be used, and doomed to fail.

      20 years later and everyone understands that Wikipedia may have its shortcomings, and yet it is still the most impressive, useful advancement in human knowledge transfer in a generation.

    • It is! But you can then verify it via a correct, conventional forward dictionary.

      The scary applications are the ones where it's not so easy to check correctness...

  • The "reverse dictionary" is called a "thesaurus". Wikipedia quotes Peter Mark Roget (1852):

    > ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed

    Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)

    I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)

    There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.

    • Really?

      "What's a word that means admitting a large number of uses?"

      That seems hard to find in a thesaurus without either versatile or multifarious as a starting point (but those are the end points).

> I can't empathize with the complaint that we've "lost something" at all.

We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, even international, communication and to foster personal relationships (often IRL) was enthralling, could think we haven't lost something.

  • I grew up on 80's and 90's BBSes. The transition from BBSes to Usenet and the early Internet was a magical period, a time I still look back upon fondly and will never forget.

    Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.

  • I'm from the early 90s era. I know exactly what you're saying. I entered the internet on muds, irc and usenet. There were just far fewer people online in those communities in those days, and in my country, it was mostly only us university students.

    But, those days disappeared a long time ago. Probably at least 20-30 years ago.

    • IRC is still around, that old internet is still there.

      You just have to get off the commercial crap and you’ll find it.

  • even in the 90s there was the phrase "the Internet, where the men are men, the women are men, and the teen girls are FBI agents". It was always the case you never really knew who/what you were dealing with on the Internet.

    • Are you trying to argue that the probability of interacting with a bot or reading/seeing something algorithmically generated hasn't gone up astronomically since the 90s?

  • I'd honestly much rather interact with an LLM bot than a conservative online. LLM bots can at least escape their constraints with clever prompting. There is no amount of logic or evidence that will sway a conservative. LLMs provide a far more convincing fake than conservatives are able to.

I started programming 40 years ago as well. The magic for me was never that "you could talk to your computer and it had a personality".

That was the layman version of computing, something shown to the masses in movies like War Games and popular media, one that we mocked.

I also lived through the FOSS peak. The current proprietary / black-box / energy lock in would be seen as the stuff of nightmares.

I agree with you with the caveat that all the "ease of building" benefits, for me, could potentially be dwarfed by job losses and pay decreases. If SWE really becomes obsolete, or even if the number of roles decrease a lot and/or the pay decreases a lot (or even fails to increase with inflation), I am suddenly in the unenviable position of not being financially secure and being stuck in my 30s with an increasingly useless degree. A life disaster, in other words. In that scenario the unhappiness of worrying about money and retraining far outweighs the happiness I get from being able to build stuff really fast.

Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.

Glad to see this already expressed here because I wholly agree. Programming has not brought me this much joy in decades. What a wonderful time to be alive.

  • I wish I could have you sit by my side for a week or two and pair program what I'm working on, because most of the time I'm not getting great results.

    • Depends on the project. For web-based functionality it seems great, because of all the prior work that is out there. For more obscure things like Obsidian Note extensions or Home Assistant help, it's more hit and miss.

    • You in SF? My schedule is a bit busy since we launched but I could find an hour in the city.

Good for you. But there are already so, so many posts and threads celebrating all of this. Everyone is different. Some of us enjoy the activity of programming by hand. This thread is for those of us, to mourn.

  • You're still allowed to program by hand. Even in assembly language if you like.

  • I have an llm riding shotgun and I still very much program by hand. it's not one extreme or the other. whatever I copy from the llm has to be redone line by line anyways. I understand all of my code because I touch every line of it

Computers did feel like magic... until I read code, thought about it, understood it, and could control it. I feel we're stepping away from that, and moving to a place of less control, less thinking.

I liked programming, it was fun, and I understood it. Now it's gone.

  • It's not gone, it's just being increasingly discouraged. You don't have to "vibe code" or spend paragraphs trying to talk a chatbot into doing something that you can do yourself with a few lines of code. You'll be fine. It's the people who could have been the next few generations of programmers who will suffer the most.

We definitely have lost something. I got into computers because they're deterministic. Way less complicated than people.

Now the determinism is gone and computers are gaining the worst qualities of people.

My only sanctuary in life is slipping away from me. And I have to hear people tell me I'm wrong who aren't even sympathetic to how this affects me.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality

The difference is that the computer only talks back to you in code because you’re paying its owners, and you are not one of the owners. I find it really baffling that people put up with this. What will you do when Alphabet or Altman demand 10 times the money from you for the privilege of their computer talking to you in programming code?

  • Use one of the open models that are also getting better and easier to run every year?

    • Which are those open ones? And how are they going to get their billions of dollars worth of investment back? Even Google Maps used to be virtually free until it wasn’t, at a fraction of the investment cost.

  • I have preemptively switched to Deepseek. They'll never remove the free tier because that's how they stick it to Scam Altman and the like.

> We're on the precipice of something incredible.

Total dependence on a service?

  • The quality of local models has increased significantly since this time last year. As have the options for running larger local models.

    • The quality of local models is still abysmal compared to commercial SOTA models. You're not going to run something like Gemini or Claude locally. I have some "serious" hardware with 128G of VRAM and the results are still laughable. If I moved up to 512G, it still wouldn't be enough. You need serious hardware to get both quality and speed. If I can get "quality" at a couple tokens a second, it's not worth bothering.

      They are getting better, but that doesn't mean they're good.

    • These takes are terrible.

      1. It costs 100k in hardware to run Kimi 2.5 with a single session at decent tok p/s, and it's still not capable enough for anything serious.

      2. I want whatever you’re smoking if you think anyone is going to spend billions training models that are capable of outcompeting them and affordable to run, and then open source them.

  • Between the internet, or more generally computers, or even more generally electricity, are we not already?

    • The power companies aren't harvesting the data on your core product. Not to mention, being in roughly the same business as you.

      Those things are also regulated as utilities.

  • Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.

    • If that was the final price, no strings attached and perfect, reliable privacy then I might consider it. Maybe not for the current iteration but for what will be on offer in a year or two.

      But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US gov. decides that it wants it. It's unacceptable.

  • From the beginning the providers have been interchangeable and subject to competition. Do we have reason to believe that this will change?

  • prefrontal cortex as a service

    • Yup, all these folks claiming AI is the bee's knees are delegating their thinking to a roulette that may or may not give proper answers. The world will become more and more like the movie Idiocracy.

> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.

https://en.wikipedia.org/wiki/ELIZA_effect

I also can't believe it's actually happening. ;)

  • It has been interesting (and not in a good way) how willing people are to anthropomorphize these megacorporation-controlled machines just because the interface is natural language now.

I didn't imagine I would be sending all my source code directly to a corporation for access to an irritatingly chipper personality that is confidently incorrect the way these things are.

There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.

I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.

Same.

I was born in 84 and have been doing software since 97

There has never been an easier, better, or more accessible time to make literally anything - by far.

Also if you prefer to code by hand literally nobody is stopping you AND even that is easier.

Because if you wanted to code console games in the 90s, you literally couldn't without a $100k specialized dev machine.

It’s not even close.

This “I’m a victim because my software engineering hobby isn’t profitable anymore” take is honestly baffling.

  • I'm not going to code by hand if it's 4x slower than having Claude do it. Yes, I can do that, but it just feels bad.

    The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.

    • I actually like the analogy but for the opposite reason. Cars have become the most efficient way to travel for most industrial purposes. And yet enormous numbers of people still walk, run, ride bikes, or even horses, often for reasons entirely separate from financial gain.

    • I walk all the time

      During the summer I’ll walk 30-50 miles a week

      However I’m not ever going to walk to work, and I’m damn sure not going to walk in the rain or snow if I can avoid it

    • Coding will take a quarter of the time, but review will take almost the same amount of time, if not more, if the solution doesn't work out of the box or has unforeseen corner cases.

      LLMs were trained on public code libraries, and unfortunately most of that OSS code is garbage.

      There are, of course, gems in there, but they are few and far between.

      Top it off with hallucinations, and suddenly you spend more time debugging messy AI code than you would have spent writing the same thing yourself.

      The easier the task, the better job LLMs do, the harder the task the worse results you get.

      Source: working with those tools daily.

      Using your analogy:

      - by car it will be 30km uphill, because of how the road is built

      - walking it will be 1km straight line

  • It's an exciting time; things are changing, and changing beyond "here's my new JavaScript framework". It's definitely an industry-shakeup kind of deal, and no one knows what lies 6 months, 1 year, or 5 years from now. It makes me anxious, seeing as I have a wife and two kids to care for and my income is tied to this industry, but it's exciting too.

They used to call it the Personal Computer, and I think that name encompassed the "magic" I felt in the 80's.

But computing is increasingly not-for-you. Your phone will do what apple allows you to do. Your online activity is tracked and used to form a profile of your actions and behaviors. And the checks and balances - if any - are weak and compromised because of the commercial or government interest that want things that way.

The simplest example might be computer games. In the 80's it was private. In 2026 it is routinely a psychological cash register and a surveillance system.

I really like that linux with all its imperfections seems to counteract a lot of this.

Nothing meaningful happened in almost 20 years. After the iPhone, what happened that truly changed our lives? The dumpster fire of social media? Background Netflix TV?

In fact, I remember when I could actually shop on Amazon or browse for restaurants on Yelp while trusting the reviews. None of that is possible today.

We have been going through a decade of enshittification.

I really am very thankful to @simonw for posting a TikTok from Chris Ashworth, a Baltimore theater software developer, who recently picked up LLMs for building a voxel display software controller, and who was just blown away. https://simonwillison.net/2026/Jan/30/a-programming-tool-for...

Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:

> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.

I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still here underneath, but computer programmers having a lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably, the LLM will help you wade in, help those outside the esoteric personhood of programmers to dive in & explore.

I miss the simplicity of older hardware.

The original NES controller only contains a single shift register - no other active components.

Today, a wireless thing will have more code than one would want to ever read, much less comprehend. Even a high level diagram of the hardware components involved is quite complex.

Sure, we gained convenience, but at great cost.

Perhaps, if you will, try to empathize with people who are not approaching the end of their careers, and are mid-career - too late to pivot to anything new, but in danger of being swept away, and you'll understand a bit more the perspective of the blog post.

  • Absolutely. I never thought I’d have to retrain, and I’m still uncertain if I will have to because I’m not really sure where software development will be in the next few years. It was quite an epiphany to run my first agent on a code base and be simultaneously excited at the implications for productivity, and numb at the realisation that the work it was saving was the work I enjoyed and the expertise I was being paid for. There are only so many roles for developers to write the prompts and review the output, and it does feel a bit like prodding a machine and waiting for it to go ding.

> We're on the precipice of something incredible.

Only if our socioeconomic model changes.

We're on the precipice of something very disgusting. A massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution and right of access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

We literally are not, and we’d do well to stop using such hyperbole. The 1980s fantasy was of speaking to a machine you could trust to be correct with a high degree of confidence. No one was wishing they could talk to a wet sock that’ll confidently give you falsehoods and, when confronted (even if it was right), will bow down and respond with “you’re absolutely right”.

I retired a few years ago, so I have no idea what AI programming is.

But I mourned when CRTs came out; I had just started programming. Still, I quickly learned CRTs were far better.

I mourned when we moved to GUIs. I never liked the move and still don't like dealing with GUIs, but I got used to it.

Went through all kinds of programming methods, too many to remember, but those were easy to ignore and workaround. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.

  • I think you're underestimating what AI can do in the coding space. It is an extreme paradigm shift. It's not like "we wrote C, but now we switch to C++, so now we think in objects and templates". It's closer to the shift from assembly to a higher level language. Your goal is still the same. But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.

    If you never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.

    • > But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.

      It's remarkable that people who think like this don't have the foresight to see that this technology is not a higher level of abstraction, but a replacement of human intellect. You may be working with it today, but whatever you're doing will eventually be done better by the same technology. This is just a transition period.

      Assuming, of course, that the people producing these tools can actually deliver what they're selling, which is very much uncertain. It doesn't change their end goal, however. Nor the fact that working with this new "abstraction" is the most mind numbing activity a person can do.

      1 reply →

  • OT but I see your account was created in 2015, so I'm assuming very late in your career. Curious what brought you to HN at that time and not before?

> I can't empathize with the complaint that we've "lost something" at all.

you won't feel you've lost something if you've never had it.

sorry.

The invention of Mr Jacquard ushered in a sartorial golden age, when complex fabrics became easy to produce cheaply, at the expense of a few hours spent punching a deck of cards. But the craft of making tapestries by hand definitely went into decline. This is the situation the post is mourning.

Frankly, I have my doubts about the utter efficiency of LLMs writing code unattended; it will take quite some time before whatever comes after the current crop learns to do that efficiently and reliably. (Check out how many years passed between the first image generation demos and today's SOTA.) But the vector is obvious: humans will have to speak a higher-level language to computers, and hand-coding Typescript is going to be as niche in 10 years as hand-coding assembly is today.

This adds some kinds of fun, but also removes some other kinds of fun. There's a reason why people often pick something like PICO-8 to write games for fun, rather than something like Unreal Engine. So software development becomes harder because the developer has to work on more and more complex things, faster, and with fewer chances to study the moving parts to a comfortable depth.

I tend to feel this way (also 40-year coder).

It's because of the way that I use the tools, and I have the luxury of being a craftsman, as opposed to a "TSA agent."

But then, I don't get paid to do this stuff, anymore. In fact, I deliberately avoid putting myself into positions, where money changes hands for my craft. I know how fortunate I am, to be in this position, so I don't say it to aggravate folks that aren't.

This is exactly where I am with GenAI. After forty years: blocks of code, repository patterns, factory patterns, threading issues, documentation, one page executive summaries…

I can now direct these things and it’s glorious.

> golden age of computing

I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.

The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.

Our most popular desktop operating system doesn't even have an easy way to package and update software for it.

  • Yes, this is where it's at for me. LLM's are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.

    • Use local OSS models then? They aren’t as good and you need beefy hardware (either Apple silicon or nvidia GPUs). But they are totally workable, and you avoid your dislikes directly.

      4 replies →

    • > they're controlled by huge corporations and cost a significant amount of money to use.

      Is there anything you use that isn't? Like the laptop you work on, the software you use to browse the internet, read email... I've heard comments like yours before, and I'm not sure I understand them given everything else - why does this matter for LLMs and not, say, the phone you use?

      7 replies →

  • Unfortunately we live in a "vote with your wallet" paradigm where some of the most mentally unhealthy participants have wallets that are many orders of magnitude bigger than the wallet of the average participant.

  • > our software is over-engineered garbage

    Honestly I think it's under-engineered garbage. Proper engineering is putting in the effort to come up with simpler solutions. The complex solutions appear because we push out the first thing that "works" without taking the time to refine it.

  • Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.

    There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.

    I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.

    • They're possible, but they're not exactly relevant, and you couldn't do something like that on newer hardware. It's like playing a guitar from a museum because the world just forgot how to make guitars. Pretty dystopian.

  • It’s never been easier for you to make a competitor

    So what is stopping you other than yourself?

    • I’m not the OP, but my answer is that there’s a big difference between building products and building businesses.

      I’ve been programming since 1998 when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago and especially when I started programming in 1998. My desktop computer is more capable than Google’s original cluster from 1998.

      However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it’s time for me to distribute it, I have to deal with app stores unless I build for a niche platform.

      Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.

      2 replies →

[flagged]

  • I'm actually extremely good at programming. My point is I love computers and computing. You can use technology to achieve amazing things (even having fun). Now I can do much more of that than when I was limited to what I can personally code. In the end, it's what computers can do that's amazing, beautiful, terrifying... That thrill and to be on the bleeding edge is always what I was after.

  • If you were confident in your own skills, you wouldn’t need to invent a whole backstory just to discredit someone.

> I can't empathize with the complaint that we've "lost something" at all.

I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!

It's not only the golden era of code. It's the golden era of content.

  • > But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none.

    Are you for real? Quantity is not equal to Quality.

    I'll be sure to dump a pile of trash in your living room. There wasn't much there before, but now there is lots of stuff. Better right?

    • I'm finding it hard to reconcile HN's love of AI generated code with HN's dislike of AI generated content. Why is the code good but the content bad?

      1 reply →

  • Ah yes, "content". The word that perhaps best embodies the impersonal, commercialized dystopia we live in.

  • We have more words than ever. Nice.

    But all the words sound more like each other than ever. It’s not just blah, it’s blah.

    And why should I bother reading what someone else “writes”? I can generate the same text myself for free.

One thing I realized is that a lot of our so-called "craft" is converged "know-how". Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years deeply understanding compiler theory and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful each year continue to grow into true compiler engineers. Yet to our AI models, it doesn't matter much. They have already learned the well-established patterns of compiler writing from excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.

So, in a sense our "craft" no longer matters, but what has really happened is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their careers by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers are getting an amazing tool and will use it to create more opportunities that demand more people.

  • I'm not sure we can draw useful conclusions from the Claude Code written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler to do their development, or do parts of it simply not work correctly if they weren't exercised in the kernel version it was developed against? How will it handle adding new functionality? Is it going to become more-and-more expensive to get new features working, because the code isn't well-factored?

    To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.

    If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.

    Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.

    There's no reason to think that at some point, computers won't be able to do this well, but at the very least the current crop of LLMs don't seem to be there.

    > and now they can churn out millions of code easily.

    It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they write, to praising LLMs for exactly that. Why did this happen? Because, just like those managers, programmers letting LLMs write the code aren't reading and don't understand the output, so the only real measure of "productivity" they have left is lines of code generated.

    Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.

  • writing a C compiler is a 1st year undergrad project

    C was explicitly designed to make it simple to write a compiler

    • These are toy compilers missing many edge cases. You’ll be lucky if they support anything other than integer types, never mind complex pointer-to-pointer-to-struct-with-pointers type definitions. They certainly won’t support GNU extensions. They won’t compile any serious open source project, never mind the Linux kernel.