Comment by nromiun

2 months ago

Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.

It is always the eternal tomorrow with AI.

Remember, gen AI produces so much value that companies like Microsoft are scaling back their expectations and struggling to find a valid use case for their AI products. In fact, gen AI is so useful that people are complaining about all of the ways it's pushed upon them. After all, if something is truly useful, nobody will use it unless the software they use imposes it upon them everywhere. Also look how it's affecting the economy: the same few companies keep trading the same few hundred billion around, and you know that's an excellent marker for value.

  • Unfortunately, it’s also apparently so useful that numerous companies here in Europe are replacing entire departments of copywriters and similar roles with one person and an AI system.

> On the other hand I am trying hard to remember anything famous created by any LLM.

That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.

  • Do you have any evidence that an LLM created something massive, but the person using it received all the praise?

    • Maybe not autonomously (that would be very close to economic AGI).

      But I don't think the big companies are lying about how much of their code is being written by AI. I think back of the napkin math will show the economic value of the output is already some definition of massive. And those companies are 100% taking the credit (and the money).

      Also, almost by definition, every incentive is aligned for people in charge to deny this.

      I hate to make this analogy but I think it's absurd to think "successful" slaveowners would defer the credit to their slaves. You can see where this would fall apart.

  • So who has used LLMs to create anything as impressive as Rob Pike?

    • I would never talk down on Rob Pike.

      But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".

      Am I wrong on this? Or if this "doesn't count", why?

      I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.

      I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".

      Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.

  • You wish. AI has no shortage of people like you trying so hard to give it credit for anything. I mean, just ask yourself: you had to try so hard that, in your other comment, you ended up hallucinating achievements of a degree Rob Pike can only dream of, yet so vague that you can't describe them in any detail whatsoever.

    > But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did

    Other people see that kind of statement for what it is and don't buy any of it.

He's also in his late 60s. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. I hope to have even 1% of that energy when/if I get to that age.

> It is always the eternal tomorrow with AI.

ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.

I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.

Your phrasing seems overly pessimistic and premature.

Is this https://en.wikipedia.org/wiki/Argument_from_authority

  • Argument from authority is an informal fallacy. But humans rarely use pure deductive reasoning in our lives. When I go to a doctor and ask for their advice with a medical issue, nobody says "ugh, look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."

    • > But humans rarely use pure deductive reasoning in our lives

      The sensible ones do.

      > nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."

      I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments and I know many people who have been burned from accepting assertions at face value without a second opinion (especially for serious medical concerns).

If you think about economic value, you’re comparing a few large-impact projects (and the impact of plan9 is debatable) versus a multitude of useful but low impact projects (edit: low impact because their scope is often local to some company).

I coded a few internal tools with the aid of LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.

  • One difference is that Rob Pike did it without all the negative externalities of gen ai.

    But more broadly, this is like a version of the negligibility problem. If you give every company one second of additional productivity, the sum of that might appear significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.

    • If ChatGPT deserves credit for things it is used to write, then every good thing ever done in Go accrues partly to Rob.

  • All those amazing tools are internal and nobody can check them out. How convenient.

    And guys don't forget that nobody created one off internal tools before GPT.

    • > All those amazing tools are internal and nobody can check them out. How convenient.

      i might open source one of those i wrote, sooner or later. it's a simple bridge/connector thingy to make it easier for two different systems to work together and many internal users are loving it. this one in particular might be useful to people outside my current employer.

      > And guys don't forget that nobody created one off internal tools before GPT.

      moot point. i did this kind of one-off developments before chatgpt as well, but it was much slower work. the example from above took me a couple of afternoons, from idea to deployment.

> I am trying hard to remember anything famous created by any LLM.

not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...