Comment by KirinDave

5 years ago

This idea that replacing unsafe code with safe code is "spitting" is unhealthy in the extreme.

I don't understand why the author felt so defensive about accepting packages. As far as I can tell, they've always had this attitude.

Why even make a project open source if you don't want to consider patches? The whole idea is that even if you think a thing is boring, someone else may not and they'll do that work for the community.

Weirdly, this is now treated as some kind of grave affront: the maintainer appears to take the mere idea that a compiler check could be added to their framework as an insult.

> Why even make a project open source if you don't want to consider patches? The whole idea is that even if you think a thing is boring, someone else may not and they'll do that work for the community.

This is a false dilemma - making a project open source can have plenty of motivations besides "I want to be a project manager for free!". Some other motivations:

* Someone may find this useful even if I don't ever touch it again.

* This could show others an example of a different way to solve a problem.

* This is cool, look what I did!

* Github is a free host, and there's no reason to keep this private.

* I just need some code I wrote in a public place for hiring managers to peruse.

  • > This is a false dilemma - making a project open source can have plenty of motivations besides "I want to be a project manager for free!". Some other motivations:

    But isn't it also a false dilemma to suggest that the only lens by which you accept PRs is in the role of "project manager?"

    > Other reasons offered

    Sure but those are hypothetical. That's not what was going on here. The author of this diatribe was actively soliciting patches and maintaining the project publicly.

    We certainly aren't coercing him to take a role he didn't want (at least initially).

I know why I might.

Here's an analogy for you. When you pour yourself a glass of pop, and you over-pour - do you then pour the drink back into the bottle?

I know some people that would return that drink to the bottle, and some that would rather pour that little bit into the sink. Those that would "backwash" believe that the bottle will be fine - sure, you've maybe transferred a little bacteria in, but it probably won't cause any problems. Those that throw it away believe that the hygiene of the bottle - even from a clean glass - would otherwise be compromised.

So if you come to my battle-tested codebase, and tell me "Hey, I've made your code better. It now has a theoretical metric of cleanness instead of your proven metric of cleanness" - you may well have just introduced a bug. I now need to test your code and ensure it meets my real-world standards of cleanness. And maybe it does! Maybe your code has fixed a glacially-slow memory leak, but I won't see that. Maybe your code has introduced a complex interaction that none of my examples exploit, but other people's examples blow up because of it.

Maybe the bottle of pop will be fine. Maybe in a couple of days' time, when I have guests over, the unhealthy layer of scum floating in my mother-in-law's glass will make me look bad.

Either way, it's a lot of work, and little certainty, and only theoretical benefit - vs no work, and full certainty in the real-world correctness of my code.

And it's a question of which do I value more - my big bottle of pop, or the dribble you'd rather give me back.

  • So uh, I gotta be honest with you: I can barely follow your post. It makes very little sense to me. But the parts I can figure out, I disagree strongly with.

    > So if you come to my battle-tested codebase, and tell me "Hey, I've made you code better. It now has a theoretical metric of cleanness, instead of your proven metric of cleanness"

    The flaw with this is that just running code in production for a modest amount of time proves nothing. That is in fact the opposite of proof; it's an anecdote. Safe code from a sound compiler can be literally proven to hold certain properties, with a margin of certainty that depends on the compiler.

    Now of course, the proof guarantees of the specific code in question are weak compared to what you can do with something like Liquid Haskell or Idris or Coq, but they're definitely more than this pride-based programming you're holding up.
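
    To make that concrete, here's a toy sketch of the kind of change being argued over - made-up function names, not code from the project in question:

        // Hypothetical example: an unsafe lookup of the sort such PRs replace,
        // next to its safe equivalent.
        fn first_byte_unchecked(buf: &[u8]) -> u8 {
            // The caller must guarantee buf is non-empty; if it isn't, this is
            // undefined behaviour that no amount of production uptime rules out.
            unsafe { *buf.get_unchecked(0) }
        }

        fn first_byte_checked(buf: &[u8]) -> Option<u8> {
            // The compiler-checked version: the empty case is part of the type,
            // so "no out-of-bounds read" holds for every caller.
            buf.first().copied()
        }

        fn main() {
            let data = b"hello";
            assert_eq!(first_byte_unchecked(data), b'h');
            assert_eq!(first_byte_checked(data), Some(b'h'));
            assert_eq!(first_byte_checked(&[]), None);
        }

    The unsafe version can look "proven" after years in production and still blow up on the first caller who violates its unstated precondition; the safe version makes that precondition impossible to violate.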

    > Maybe your code has introduced a complex interaction that none of my examples exploit, but other people's examples blow up because of it.

    And maybe your code already had that. That's why I trust compiler checks a hell of a lot more than people. And that's why I find this entire movement of pride-based programming that you and the subject of this thread espouse so problematic. You're just holding up your haphazard experience as evidence. And honestly, not many people's experience would move me that way. Maybe if you were running it as Google or Facebook's frontend I'd find that reassuring, but short of that...? No.

    Have you load tested your software? How did you simulate it? Have you validated your algorithm with a tool like TLA+? Have you watched your memory use carefully? Have you measured the latency variance? Under what kinds of loads? Is your data real or a projection using something like USL4J?

    > Either way, it's a lot of work, and little certainty, and only theoretical benefit - vs no work, and full certainty in the real-world correctness of my code.

    Pull requests on GitHub are generally the click of a button. If they're not good on grounds of completeness, then reject them on those grounds. Not, "Oh here we go again this code is so much more mechanically verified than mine, don't start this again."

    • > pride-based programming

      I'm absolutely not talking about pride. That's a straw man - nowhere in my answer did I discuss a person's pride. I'm a big proponent of egoless programming (as far as any human being can) and while I have no idea whether ego came into this, pride is not the only reason a person might reject these kinds of patches.

      That said, ego is a great reason for you to dismiss valid arguments, so I'm guessing you've made up your mind.

      > The flaw with this is that just running code in production for a modest amount of time proves nothing. That is in fact the opposite of proof, it's an anecdote. [...] proven to hold certain properties [...]

      You've just made exactly my point, but from the opposite side. You are saying that safety and those "proven properties" are of higher importance than being bug-free in the real world.

      The issue is that it is impossible to prove that code is correct via tools. You can prove some properties, but you cannot prove fitness for purpose.
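
      A trivial illustration (toy code, nothing to do with the actual project): the following compiles cleanly, uses no unsafe, and every property the compiler can check holds - and it is still wrong, because the Gregorian century rule is missing.

          // Memory safe, type safe, borrow checker is happy - and incorrect.
          fn is_leap_year(year: u32) -> bool {
              year % 4 == 0 // missing the "divisible by 100 but not 400" exception
          }

          fn main() {
              assert!(is_leap_year(2024)); // right answer
              assert!(is_leap_year(1900)); // also passes, but 1900 was not a leap year
          }

      No tool proved fitness for purpose there; only testing against the real requirements would catch it.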

      > Have you load tested your software? How did you simulate it? Have you validated your algorithm with a tool like TLA+? Have you watched your memory use carefully? Have you measured the latency variance? Under what kinds of loads? Is your data real or a projection using something like USL4J?

      Generally speaking, I have no idea. In this scenario, I know the project owner has done some performance testing, so maybe. But honestly, I've no idea what you're trying to argue here.

      > Pull requests on GitHub are generally the click of a button

      This seems like a big flip from the previous statement. No, pull requests on GitHub for a trusted and depended-on project are NOT a single click. If that is how you run your open source PR pipeline, your users will very quickly stop trusting you.