Comment by nomel

18 days ago

The negativity around the lack of perfection in something that was literal fiction just a few years ago is amazing.

If more people were able to step back and think about the potential growth over the next 5-10 years, I think the discussion would be very different.

I am grateful to be able to witness all this amazing progress play out, but am also concerned about the wide-ranging implications.

  • > think about the potential growth for the next 5-10 years,

    I thought about it and it doesn't seem that bright. The problem is not that LLMs generate inferior code faster; it's that at some point some people will become convinced that this code is good enough to use in production. At that point, the programming skills of the population will devolve and fewer people will understand what's going on. Human programmers will only work in financial institutions and the like; the rest will be a mess. Why? Because generated code is starting to become a commodity, and the buyer doesn't understand how bad it is.

    So we're at the stage where global companies decided it was a fantastic idea to outsource the production of everything to China, and individuals are buying Chinese plastic gadgets en masse. Why? Because it's very cheap compared to the real thing.

  • This is what the kids call “cope”, but it comes from a very real place of fear and insecurity.

    Not the kind of insecurity you get from your parents mind you, but the kind where you’re not sure you’re going to be able to preserve your way of life.

    • > Not the kind of insecurity you get from your parents mind you

      I don't get this part. At least my experience is the opposite: it's basically the basic function of parents to give their child the sense of security.

    • Sorry but I think you have it the other way around.

      The ones against it understand fully what the tech means for them and their loved ones. Even if the tech doesn't deliver on all of its original promises (which is looking more and more unlikely), it still has enough capabilities to severely affect the lives of a large portion of the population.

      I would argue that the ones who are inhaling "copium" are the ones who are hyping the tech. They are coping/hoping that if the tech partially delivers what it promises, they get to continue to live their lives the same way, or even an improved version. Unless they already have underground private bunkers with a self-sustained ecosystem, they are in for a rude awakening. Because at some point they are going to need to go out and go grocery shopping.

    • My hot take is that portions of both the pro- and anti- factions are indulging in the copium. That LLMs can regurgitate a functioning compiler means that it has exceeded the abilities of many developers and whether they wholeheartedly embrace LLMs or reject LLMs isn't going to save those that have been exceeded from being devalued.

      The only safety lies in staying ahead of LLMs or migrating to a field that's out of reach of them.

There is a massive difference between a result like this when it's a research project and when it's being pushed by billion-dollar companies as the solution to all of humanity's problems.

In business, as a product, results are all that matter.

As a research and development effort, it's exciting and interesting as a milestone on the path to something revolutionary.

But I don't think it's ready to deliver value. Building a compiler that almost works is of no business value.

No one can correctly quantify what these models can and can't do. That leads to the people in charge completely overselling them (automating all white-collar jobs, doing all software engineering, etc.) and the people threatened by those statements firing back when these models inevitably fail at doing what was promised.

They are very capable, but it's very hard to explain to what degree. It is even harder to quantify what they will be able to do in the future and what inherent limits exist. Again, this leads the people who benefit from them to claim that there are no limits.

Truth is that we just don't know. And there are too few people out there who are actually reasonable about it, because the ones who know are working on the tech and benefit from more hype. Karpathy is one of the few who left the rocket and gives a still-optimistic but reasonable perspective.

It’s a fear response.

  • It could also be that, so often, the claims of what LLMs can achieve are so overstated that people feel the need to take it down a notch.

    I think lofty claims ultimately hurt the perception of AI. If I wanted to believe AI was going nowhere, I would listen to people like Sam Altman, who seem to believe in something more akin to a religion than a pragmatic approach. That, to me, does not breed confidence. Surely, if the product is good, it would not require evangelism or outright deceit? For example, claiming this implementation was 'clean room'. Words have meaning.

    This feat was very impressive, no doubt. But with each exaggeration, people lose faith. They begin to wonder - what is true, and what is marketing? What is real, and what is a cheap attempt for companies to rake in whatever cold hard AI cash they can? Is this opportunistic, like viral pneumonia, or something we should really be looking at?

  • No.

    While there are many comments which are in reaction to other comments:

    Some people hype up LLMs without admitting any downsides. So, naturally, others get irritated with that.

    Some people anti-hype LLMs without admitting any upsides. So, naturally, others get irritated with that.

    I want people to write comments which are measured and reasonable.

  • This reply is argumentum ad personam. We could reverse it and say GenAI companies push this hype down our throats because of fear that they are burning cash with no moat but these kinds of discussions lead nowhere. It's better to focus on core arguments.

How does a statistical model become "perfect" instead of merely approaching it? What do you even mean by "perfect"?

We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.

It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.

  • > All disagreements with that are either made in bad faith or deeply ignorant.

    Declaring an opinion and then making discussion about it impossible isn't a useful way to communicate or reason about things.

    • Sure it is. I haven't made discussion impossible. If you choose to reply to my line of discussion, I've eliminated an entire category of what I think are trivial arguments that miss the point. Yes, calling that stuff trivial is my opinion, but I think you were trying to say something else.

      You found room by claiming I have some other opinions. In fact, I originally asked some questions you chose not to answer.

      That all raises some more questions: what about my statements isn't factual? What about your statements isn't factual?

      I have a few guesses. You may think AI can write a better compiler. You may think AI has already written a better compiler. You may think humans shouldn't write code anymore.

      All of those are examples of opinions you might declare, but maybe you meant to say something factual. If those really are the only things you meant to debate, I have to agree I didn't think they were going anywhere and have been done to death. I thought maybe you had something else in mind.

I think it’s a good antidote to the hype train. These things are impressive but still limited; hearing only the hype is also a problem.