Comment by Aurornis

6 days ago

> I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere

Some of the arguments in the article are so bizarre that I can’t believe they’re anything other than engagement bait.

Claiming that IP rights shouldn’t matter because some developers pirate TV shows? Blaming LLM hallucinations on the programming language?

I agree with the general sentiment of the article, but it feels like the author decided to go full ragebait/engagement bait mode with the article instead of trying to have a real discussion. It’s weird to see this language on a company blog.

I think he knows that he’s ignoring the more complex and nuanced debates about LLMs because that’s not what the article is about. It’s written in an inflammatory style that sets up straw-man talking points and then sort of knocks them down while giving weird excuses for why certain arguments should be ignored.

They are not engagement bait. That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.

A lot of people are misunderstanding the goal of the post, which is not necessarily to persuade them, but rather to disrupt a static, unproductive equilibrium of uninformed arguments about how this stuff works. The commentary I've read today has to my mind vindicated that premise.

  • > That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.

    Which argument? The one dismissing all arguments about IP on the grounds that some software engineers are pirates?

    That argument is not only unpersuasive, it does a disservice to the rest of the post and weakens its contribution by making you as the author come off as willfully inflammatory and intentionally blind to nuance, which does the opposite of breaking the unproductive equilibrium. It feeds the sense that those in the skeptics camp have that AI adopters are intellectually unserious.

    I know that you know that the law and ethics of IP are complicated, that the "profession" is diverse and can't be lumped into a cohesive unit for summary dismissal, and that there are entirely coherent ethical stances that would call for both piracy in some circumstances and condemnation of IP theft in others. I've seen enough of your work to know that dismissing all that nuance with a flippant call to "shove this concern up your ass" is beneath you.

    • > The one dismissing all arguments about IP on the grounds that some software engineers are pirates?

      Yeah... this was a really, incredibly horseshit argument. I'm all for a good rant, but goddamn, man, this one wasn't good. I would say "I hope the reputational damage was worth whatever he got out of it", but I figure he's been able to retire at any time for a while now, so that sort of stuff just doesn't matter anymore to him.


  • What really resonated with me was your repeated calls for us at least to be arguing about the same thing, to get on the same page.

    Everything about LLMs and generative AI is getting so mushed up by people pulling it in several directions at once, marketing clouding the water, and the massive hyperbole on both sides, it's nearly impossible to understand if we're even talking about the same thing!

  • It's a good post and I strongly agree with the part about level setting. You see the same tired arguments basically every day here and subreddits like /r/ExperiencedDevs. I read a few today and my favorites are:

    - It cannot write tests because it doesn't understand intent

    - Actually it can write them, but they are "worthless"

    - It's just predicting the next token, so it has no way of writing code well

    - It tries to guess what code means and will be wrong

    - It can't write anything novel because it can only write things it's seen

    - It's faster to do all of the above by hand

    I'm not sure if it's the issue where they tried Copilot with GPT-3.5 or something, but anyone who uses Cursor daily knows all of the above is false; I make it do these things every day and it works great. There was another comment I saw here or on reddit about how everyone needs to spend a day with Cursor and get good at understanding how prompting + context works. That is a big ask, but I think the savings are worth it once you get the hang of it.

    • Yes. It's this "next token" stuff that is a total tell we're not all having the same conversation, because what serious LLM-driven developers are doing differently today than they were a year ago has not much at all to do with the evolution of the SOTA models themselves. If you get what's going on, the "next token" thing has nothing at all to do with this. It's not about the model, it's about the agent.

>> Blaming LLM hallucinations on the programming language?

My favorite was suggesting that people select the programming language based on which ones LLMs are best at. People who need an LLM to write code might do that, but no experienced developer would. There are too many other legitimate considerations.

  • If an LLM improves coding productivity, and it is better at one language than another, then at the margin it will affect which language you may choose.

    At the margin means that both languages, or frameworks or whatever, are reasonably appropriate for the task at hand. If you are writing firmware for a robot, the LLM will be less helpful, and the languages LLMs are good at, such as Python or JS, are useless for the job anyway.

    But Thomas's point is that arguing that LLMs are not useful for all languages is not the same as saying that they are not useful for any language.

    If you believe that LLM competencies are not actually becoming drivers in what web frameworks people are using, for example, you need to open your eyes and recognize what is happening instead of what you think should be happening.

    (I write this as someone who prefers SvelteJS over React - but LLMs' React output is much better. This has become kind of an issue over the last few years.)

    • I'm a little (not a lot) concerned that this will accelerate the adoption of languages and frameworks based on their popularity and bury away interesting new abstractions and approaches from unknown languages and frameworks.

      Taking your React example: if we were a couple of years further ahead on LLMs, jQuery might now be the preferred tool due to AI adoption through consumption.

      You can apply this to other fields too. It's quite possible that AIs will make movies, but the only reliably well produced ones will be superhero movies... (I'm exaggerating for effect)

      Could AI be the next Cavendish banana? I'm probably being a bit silly though...


  • People make productivity arguments for using various languages all the time. Let's use an example near and dear to my heart: "Rust is not as productive as X, therefore, you should use X unless you must use Rust." If using LLMs makes Rust more productive than X, that changes this equation.

    Feel free to substitute Y instead of Rust if you want, just I know that many people argue Rust is hard to use, so I feel the concreteness is a good place to start.

  • Maybe they don’t today, or up until recently, but I’d believe it will be a consideration for new projects.

    It's certainly true that at least some projects choose languages based on, or at least influenced by, how easy it is to hire developers fluent in that language.

I see no straw men in his arguments: what I see are pretty much daily direct quotes pasted in from HN comments.

I am squarely in the AI-skeptic bucket—an old-school, code-craftsman type of personality, exactly the type of persona this article is framed against—and yet my read is nothing like yours. I believe he's hitting these talking points to be comprehensive, but with nothing approaching the importance and weightiness you are implying. For example:

> Claiming that IP rights shouldn’t matter because some developers pirate TV shows?

I didn't see him claiming that IP rights shouldn't matter, but rather that IP rights don't matter in the face of this type of progress—they never have, not since the industrial revolution. It's hypocritical (and ultimately ineffectual) for software people to get up on a high horse about that now just to protect their own jobs.

And lest you think he is an amoral capitalist, note the opening statement of the section: "Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists in ways that might be hard to appreciate if you don’t work in the arts." This indicates that he does understand and empathize with the most material of harms that the AI revolution is bringing. Software engineers aren't on that same spectrum, because the vast majority of programming is not artisanal creative work; it's about precise automation of something as cheaply as possible.

Or this one:

> Blaming LLM hallucinations on the programming language?

Was he "blaming"? Or was he just pointing out that LLMs are better at some languages than others? He even says:

> People say “LLMs can’t code” when what they really mean is “LLMs can’t write Rust”. Fair enough!

Which seems very truthy and in no way blames LLMs. Your interpretation is taking some kind of logical/ethical leap that is not present in the text (as far as I can tell).

  • > Software engineers aren't on that same spectrum because the vast majority of programming is not artisinal creative work...

    That's irrelevant. Copyright and software licensing terms are still enforced in the US. Unless the software license permits it, or it's for one of a few protected activities, verbatim reproduction of nontrivial parts of source code is not legal.

    Whether the inhalation of much (most? nearly all?) of the source code available on the Internet for the purpose of making a series of programming machines that bring in lots and lots of revenue for the companies that own those machines is either fair use or it's infringing commercial use has yet to be determined. Scale is important when determining whether or not something should be prohibited or permitted... which is something that many folks seem to forget.