Comment by james_marks

1 day ago

I find OP's communication style abrasive and off-putting, which tracks with them saying they've been coached on this, and found that advice lacking.

Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.

From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers; that doesn't make it more attractive).

That's going to cause friction because a team is a _social_ construct.

That's because it was generated by an LLM.

  • I simply cannot believe people in this post are discussing this as anything other than a complete bot job. Pure clanker vomit.

    • I realize it's been "written" by an LLM, but the content could have been written by someone I know. It's eerie how this person thinks exactly the same way. It's never their fault, always the others', and they are always obviously right and no amount of arguing can change their mind.

Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.

In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.

  • IRL I’ve seen similar discussions devolve into an hour-long bikeshedding meeting about how to define thresholds for warnings, track new ones, etc.

    Before the end, I had them all fixed. Zero is far easier to deal with…

  • > In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.

    I do not find anything missing here. This is how things often play out in reality, both in your retelling and in what was actually written in the article.

    Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real jira" tasks, fixes the warnings. The team moves on.

    Actual article: the warnings get solved when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.

    • I'm referencing the footnote where the author says that the discussion caused one team member to go and fix the issue. The warnings causing a production issue is, I think, a complete hypothetical.

      What this story is missing is an explanation for why people were disagreeing. Like, why is someone not looking at warnings? Is it that the warnings are less important than the author understands? Is it that the warnings come from something that the team have little control over? And the solution the author suggests - would it really have changed anything if they already weren't looking at warnings? The author writes as if their proposal would have fixed things, but that's not really clear to me, because it's basically just a view into whether the problem is getting worse, which can be ignored just as easily as the problem itself.

The first two sentences

> Organizations don't optimize for correctness. They optimize for comfort

...do I need to say it?

  • > One number, never measured before. It doesn't change rules or add warnings, just makes the existing count visible.

    Stopped here. That pattern.

    I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.

    It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.

    • LLMs originally learned these patterns from LinkedIn and the “$1000 for my newsletter” SEO pillions. Both accomplish a goal. Now that's become a loop.

      There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.

      // from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md

  • You are absolutely right!

    - It is not X. It is Y.

    - X [negate action] Y. X [action] Z.

    The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.

    If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?

> I find OP's communication style abrasive and off-putting

Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.

Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.

Imagine running a military campaign by seeking consensus among the soldiers.

  • Consensus works in a democracy because the best thing the government can do to help people is usually nothing.

  • > Look at how much progress Python made under GVR.

    Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.