Comment by mjburgess

7 months ago

It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.

Before this, of course, a dramatic "shallowness of thinking" shock will have to occur before its ill-effects are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:

Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decisions-making, and worsened my readiness for necessary adaptions later-on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.

This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, on whatever domain you use it on.

> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.

I don't see how LLMs can do anything but significantly worsen this situation overall.

  • > I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow.

    Yes, but the arguments they need to present are not necessarily the ones they used to convince themselves, or their own reasoning history that made them arrive at their proposal. Usually that is an overly boring graph search like "we could do X but that would require Y which has disadvantage Z that theoretically could be salvaged by W, but we've seen W fail in project Q and especially Y would make such a failure more likely due to reason T, so Y isn't viable and therefore X is not a good choice even if some people argue that Y isn't a strict requirement, but actually it is if we think in a timeline of several years and blabla", especially if the decision makers have no time and no understanding of what the words X, Y, Z, W, Q, T etc. truly mean. And especially if the true reason also involves some kind of unspeakable office politics, like wanting to push the tools developed by a particular team as opposed to another, or wanting to use some tech for CV reasons.

    The narrative to be crafted has to be tailored to the point of view of the decision maker. How can you make your proposal look attractive relative to their incentives and career goals? How will it make them look good and avoid risks of trouble or bad optics? Is it faster? Is it allowing them to use sexy buzzwords? Does it line up nicely with the corporate slogan this quarter? For these you have to understand their context as well. People rarely announce these things, and a clueless engineer can step on people's toes; the offended party will not squarely explain the real reason for their pushback, they will make up some nonsense, and the clueless guy will think the other person is just too dumb to follow the reasoning.

    It's not simply about language use skills, as in wordsmithing; it's also about strategizing and putting yourself in other people's shoes, trying to understand social dynamics and how they interact with the detailed technical aspects.

    • To give a brief example of this -- a colleague recently asked why an exec had listened to my argument but not theirs, despite "saying the same thing". I explained that my argument contained actual impacts: actual delays, actual costs, an actual timeline for when the impact would occur -- rather than a nebulous "there will be problems".

      Everyone comes to execs with hypothetical problems that all sound like people dressing up minor issues -- unless you can give specific details, justifications, etc., they're not going to parse properly.

      This would be one case where a person asking an LLM for help is not even aware of the information they lack about the person they're trying to talk to.

      We could define expertise this way: the knowledge/skill you need to have to formulate problems (and questions) from a vague or unknown starting point.

      Under that definition, it becomes clear why LLMs "in the large" pose problems.

  • This entire thread of comments is circling around, but does not know how to articulate, the omnipresent communication issues within tech, because effective communication is not taught in tech -- not taught anywhere in the science, engineering, math and technology series. The only communications training people receive is how to sell, how to do lite presentations.

    There absolutely is a great way to use LLMs when writing, but not to write! Have them critique what you wrote, but not write for you. Create a writing-professor persona, a writing critic, and make them offer Socratic advice where they draw you to make the connection; they don't think for you, they teach you.
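
    For example, a minimal sketch in Python of what such a persona prompt might look like -- the wording here is my own illustration, not a fixed recipe:

        # A hypothetical persona prompt for Socratic writing critique.
        # The wording is illustrative; tune it to your own needs.
        WRITING_PROFESSOR_PERSONA = (
            "You are a writing professor reviewing my draft. Do not rewrite it. "
            "Critique it instead: point out where the argument is unclear or "
            "unsupported, and ask Socratic questions that lead me to find and "
            "fix the problems myself."
        )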

    There has been a massive disservice to the entire tech series of professions by ignoring the communications, interpersonal and group-communication dynamics of technology development. It is not understood, and not respected. (Many developers will deny that communication skills have utility! They argue against being understood; "that is someone else's job".) Fact of the matter: a quality communicator leads, simply because no one else conveys understanding; without those skills they leave a wake of confusion and disgruntled staff. Competent communicators know how to write to inform, how to debate to shared understanding, how to defuse excited emotion, and how to give bad news and be thanked for the insight.

    Seriously, effective communications is a glaring hole in your tech stack.

  • I find LLMs extremely good for training such language skills, using the following process:

    a) write a draft yourself.

    b) ask the LLM to correct your draft and make it better.

    c) newer LLMs will explicitly mention the things they corrected (otherwise, ask them to be explicit about the changes)

    d) walk through each of the changes and apply the ones you feel make the text better

    This has helped me improve my writing skills drastically (in multiple languages) compared to the time when I didn't have access to LLMs.
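
    If you'd rather script steps (b) and (c) than do them in a chat window, here's a minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not part of the process above:

        # Minimal sketch of steps (b) and (c): correct a draft and list the changes.
        # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        draft = """Your draft paragraph goes here."""  # step (a): write it yourself

        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are a careful copy editor. Improve the user's draft, "
                            "then list every change you made and why, one per line."},
                {"role": "user", "content": draft},
            ],
        )

        # Step (d) stays manual: read the listed changes and keep only those
        # that improve the text while still sounding like you.
        print(response.choices[0].message.content)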

    • Done this as well. But after the initial "wow!" moment, the "make it better" part became an "actually, I don't like how you wrote it, it doesn't sound like me".

      There is a thin line between enhancing and taking over, and IMO the current LLMs cross it most of the time.

    • I use Grammarly sometimes to check my more serious texts, but there's a gotcha. If you allow all of its stylistic choices, your writing becomes very sterile and devoid of any soul.

      Your word and structural choices add a flair of their own, make something truly yours and unique. Don't let the tool kill that.

It's all already there. When you converse with a junior engineer about their latest and greatest idea (over a chat platform), and they start giving you real-time responses which are a page long and structured into bullet points... it's not even that they are using ChatGPT to avoid thinking; it's the fact that they think either that no-one will notice, or that this is how grown-ups actually converse with each other, that is terrifying.

  • I haven't encountered that (yet), but I can't think of a faster way to get me to stop paying attention to them. I'm interested in their analysis, not the analysis of a machine I can just use myself directly.

    • On the one hand you're right. On the other hand, some, let's be honest, moron bends the ear of some clueless manager with a too-good-to-be-true plan... and suddenly you have to explain it to two clueless people. And I used "junior" in the sense of my own assessment -- I've definitely talked to some people who had a "senior" title, but that was perhaps reflective more of their accumulated years of experience than actual knowledge... It's all so easy in their world -- for ChatGPT tells 'em so ;)

  • This is the worst thing ever. I have coworkers who literally cannot string three words together without making a grammar mistake, but recently they've been texting me with more-than-perfect grammar and a vast vocabulary.

    These people are trying to fool everyone else into thinking they are smarter/more educated than they actually are. They aren't fooling me; I've seen their real writing, I know it's not actually their text and thoughts, and it really disgusts me.

    • I know exactly what you mean. I once tried -- albeit very implicitly -- hinting that I knew they were using a chatbot to generate answers, but unfortunately it did not help. Perhaps I should just have openly told them that it not only insults the counterpart's intelligence, but is also disrespectful.

> another technology that society will inoculate itself against

I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.

  • The obesity rate is still rising in most areas worldwide. I'd argue we still haven't developed herd immunity to gas-powered automobiles, invented in the late 1800s.

> I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

Now someone like me might go and ask: how much communication is actually worthwhile? Sometimes I suspect there is a lot of communication that isn't. It is still done, but if no one actually reads it, why not automate its generation?

That's not to say there isn't a significant amount of stuff you actually want to get right.

  • It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

    There's a tremendous hollowing-out of our mental capacities caused by the computer-science framing of activities in terms of input->output, as if the point is to obtain the output "by any means".

    It would not matter if the LLM gave exactly the same output as you would have written, and always did. Because you still have to act in the world with the thoughts you would have had when authoring it.

    • > It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

      So much this.

      At my current workplace, I was asked to write up a design doc for a software system. The contents of the document itself weren't very relevant, as the design deviated significantly based on constraints and feedback that could be discovered only after beginning the implementation, but it was the act of putting together that document, thinking through the various cases, etc. that led to the formation of a mental model that helped me work towards delivering that system.

  • > It is still done, but if no one actually reads it, why not automate its generation?

    There's a reason the real-estate industry has been able to go all-in on using AI to write property listings with almost no consumer pushback (except when those listings include hallucinated schools).

    We're already used to treating them with skepticism, and nobody takes them at face value.

> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say.

But what fraction of communication is "worthwhile"?

I'm an academic, which, in theory, should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do is things like reports, grant applications, ethics/data-management applications, recommendation letters, bureaucratic forms, etc., which I wouldn't class as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me as long as I get the silly requirement done. For these purposes, LLMs are a godsend, and they probably actually help me think more because I can devote more time to actual research and teaching, which I do in person.

  • Well, if you want a rant about academia, I have many well prepared.

    I think in the cases you describe the "thinking" was already purely performative, and what LLMs are doing is a kind of accelerationist project of undermining the performance by automating it.

    I'm somewhat optimistic about this kind of self-destructive LLM use:

    There are a few institutions where these purely performative pseudo-thinking processes exist, ones insensitive to the "existential feedback loops" which would otherwise burn them down. I'm hopeful LLMs become a wildfire of destruction in these institutions and that, absent external pressures, they return to actual thinking over the performative.

One of the effects on software development is that submitting a PR with any LoC count no longer means you did any work. You need to explain your solution and answer questions to prove that.

  • The next stage of this issue is: how do you explain something you didn't write?

    The LLM-optimist view at the moment, which takes on board the need to review LLM output, assumes that this review capability will exist. But I cannot review LLM output in areas outside of my expertise, and I cannot develop the expertise I need if I use an LLM in-the-large.

    I first encountered this issue about a year ago when using an LLM to prototype a programming-language compiler (a field I knew quite well anyway) -- and realised that very large decisions about the language were being forced by the LLM's implementation choices.

    Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics and realised much of my note-taking with LLMs has completely undermined this process -- what has actually worked, in follow-up, has been traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time-saver, "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.

    • IMO we are likely in a golden era of coding LLM productivity, one in which the people using them are also experts. Once there are no coding experts left, will we still see better productivity?

  • An explanation that the smartypants and some managers are already totally willing to outsource to an LLM as well...

I see it as more of a calibration, revolving around understanding what an AI is inherently not able to do -- decide what YOU want -- and stopping being weird about that. If you choose to stop being involved in a process and molding it, then your relationship to that process and its outcome will necessarily change. Why would we be surprised by that?

As soon as we stop treating AI like mind readers things will level out.

> This is a system for substituting thinking itself with non-thinking

One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

On its own it would be a funny internet-culture phenomenon, but paired with the fact that you can't confidently assume that anybody even wrote what you're reading, it is hilarious.

  • > One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

    Sorry, I can't immediately think of what you're talking about. Could you link to an example so I can get a feel for it?

It’s been my experience that most people opinions on AI is inversely proportional to the timescale they have been using it.

Using AI is kind of like having a Monica closet. You just push all the stuff you don't know to the side until it's out of view. You then think everything is clean, and can fool yourself into thinking so for a while.

But then you need to find something in that closet and just weep for days.

What you say might be true for the current crop of LLMs. But it's rather unlikely their progress will stop here.

  • Well, why would the "progress" continue? Most stats I've seen point to diminishing returns from scaling up models.

> This is a system for substituting thinking itself with non-thinking

I haven’t personally felt this to be the case. It feels more like going from thinking about nitty gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking— about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren’t a substitute for actual thought.

> against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/)

I read that article when it was posted on HN, and it's full of bad faith interpretations of the various objections to using LLM-assisted coding.

Given that the article comes from a person whose expertise and viewpoints I respected, I had to run it by a friend, who suggested a more cynical interpretation: the article might have been written to serve the author's selfish interests. Given the number of bugs that LLMs often introduce, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and thereby increase his business through security audits.

  • More charitably, it's a person yet to feel the disabling phase of using an LLM.

    If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.

    I think the revulsion to LLMs among experts arises during that phase when it's clearly mentally disabling you.

  • Now, I'm a fairly cynical person by trade, but that feels like it's straying into conspiracy-theory territory.

    And of course the key point is that the author of that article isn't (IMO) working in the security research field any more; they work at fly.io on the security of that platform.

    • Less conspiracy, more negligence: “it is difficult to get a man to understand something when his salary depends upon his not understanding it”

The sad reality is that most people are not smart. They're not creative, original, or profound. Think back to all the empty and pointless convos you had prior to AI or the web.

  • I don't see it as sad, it's perfectly fine to be mediocre. You can have a full, rich life without being or doing anything extraordinary. I am mediocre and most of the people I know are mediocre - at least mediocre in the sense that there will be no Wikipedia page under my name.

  • I strongly disagree with this idea.

    If you evaluate a fish by asking it to climb a tree, it'll look dumb.

    If you evaluate a cat by asking it to navigate an ocean to find its birthplace, it'll look dumb, too.

Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.

You wouldn't go around crusading against food because you're obese.

Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow, but children who rely on their parents for everything, rather than trying to explore their limits, end up being weak humans.

  • > You wouldn't go around crusading against food because you're obese.

    Most eateries I step into fill me with revulsion at the temples to sugary carbohydrates they've become.

    > about 40.3% of US adults aged 20 and older were obese between 2021 and 2023

    Pray your analogy to food does not hold, or else we're on track for 40% of Americans acquiring mental disabilities.

    • Oh, we for sure are, because much like America's social structure pushes people to obesity with overwork and constant stress, that same social structure will push people to use AI blindly to keep up with brutal quotas set by their employers.

  • Shallow take.

    Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.

    People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.

    The parent analogy also only holds up on your happy path.

    • You've just framed your own argument. To be intellectually consistent, you can't crusade against AI in general, only against bad uses of AI, which (even as an AI supporter) is all I've asked anti-AI folks to do all along.
