Comment by d-us-vb

19 days ago

> No, the best thing you can do for simplicity is to not conflate concepts.

This presumes the framework in which one is working. The type of a map is, and always will be, the same as the type of a function. This is a simple fact of type theory, so it is worthwhile to ponder the value of providing a language mechanism to coerce one into the other.
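
As a concrete illustration of that claim, here is a minimal sketch in Scala, whose standard library already treats maps this way (the example itself is mine, not from the post under discussion):

```scala
object MapAsFunction {
  def main(args: Array[String]): Unit = {
    // In Scala's standard library, Map[K, V] extends PartialFunction[K, V],
    // which in turn extends K => V, so a map literally has a function type.
    val romanNumeral: Map[Int, String] = Map(1 -> "I", 2 -> "II", 3 -> "III")

    // Anywhere a function Int => String is expected, the map can be passed as-is.
    def describe(f: Int => String, n: Int): String = s"$n is written ${f(n)}"

    println(describe(romanNumeral, 2))       // 2 is written II
    println(List(1, 2, 3).map(romanNumeral)) // List(I, II, III)
  }
}
```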

> This is cleverness over craftsmanship. Keeping data and execution as separate as possible is what leads to simplicity and modularity.

No, this is research and experimentation. Why are you so negative about someone’s thoughtful blog post about the implications of formal type theory?

> This presumes the framework in which one is working.

One doesn't have to presume anything; there are general principles that people eventually find to be true after plenty of experience.

> The type of a map is, and always will be, the same as the type of a function. This is a simple fact of type theory, so it is worthwhile to ponder the value of providing a language mechanism to coerce one into the other.

It isn't worthwhile to ponder because this doesn't contradict or even confront what I'm saying.

> No, this is research and experimentation.

It might be personal research, but people have been programming for decades and this stuff has been tried over and over. There is a constant cycle where someone thinks of mixing and conflating concepts together, eventually gets burned by it and goes back to something simple and straightforward. What are you saying 'no' to here? You didn't address what I said.

You're mentioning things that you expect to be self-evident, but I don't see an explanation of why this simplifies programs at all.

  • > One doesn't have to presume anything, there are general principles that people eventually find are true after plenty of experience.

    I guess I just disagree with you here. Plenty of programmers with decades of experience have found no such general principle. There is a time and place for everything and dogmatic notions about "never conflate X and Y" because they're "fundamentally different" will always fall flat due to the lack of proof that they are in fact fundamentally different. It depends on the framework in which you're analyzing it.

    > It isn't worthwhile to ponder because this doesn't contradict or even confront what I'm saying.

    This is a non sequitur. What is worthwhile to ponder has no bearing on what you say. How arrogant can one person be?

    > It might be personal research, but people have been programming for decades and this stuff has been tried over and over.

    Decades? You think that decades is long enough to get down to the fundamentals of a domain? People have been doing physics for 3 centuries and they're still discovering more. People have been doing mathematics for 3 millennia and they're still discovering more. Let the cycle happen. Don't discourage it. What's it to you?

    > You're mentioning things that you expect to be self evident, but I don't see an explanation of why this simplifies programs at all.

    It may not simplify programs, but it allows for other avenues of formal verification and proof of correctness.
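
    That claim can be illustrated with a toy Lean sketch (the `Digit` type and `roman` function are hypothetical names of mine, not anything from the post): when the map is literally a total function, statements about it become checkable theorems.

    ```lean
    -- A toy "map" that is literally a total function on a small key type.
    inductive Digit
      | one
      | two
      | three

    def roman : Digit → String
      | .one   => "I"
      | .two   => "II"
      | .three => "III"

    -- Because the map *is* a function, statements about it are ordinary
    -- theorems, discharged here by definitional reduction.
    example : roman .two = "II" := rfl

    -- Totality is checked by the compiler: omitting the .three case would
    -- be rejected, so "missing key" errors cannot occur at runtime.
    ```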

    ----

    Do you have other examples of where concepts were conflated that ended up "burning" the programmer?

    • > What is worthwhile to ponder has no bearing on what you say.

      Ponder all you want, but what you said wasn't a reply to what I said.

      > Decades? You think that decades is long enough to get down to the fundamentals of a domain?

      It is enough for this because people have been going around in circles the entire time. It isn't the same people; it's new people coming in, thinking up something 'clever' like conflating execution and data, then eventually getting burned by it when it all turns into a quagmire. Some people never realize why their projects turned into a mess that can't move forward quickly without breaking, or can't be learned without huge effort because of all the edge cases.

      > It depends on the framework in which you're analyzing it.

      No, it doesn't. There are a number of fundamentals that apply universally.

      First is edge cases. If you make something like an array start acting like a function, you are creating an edge case where the same thing behaves differently depending on context. That context is extra complexity and a dependency you have to remember. It increases the mental load you need to get something correct (see the sketch after these three points).

      Second is dependencies. Instead of two separate things, you now have two things that can't work correctly on their own because they depend on each other. This increases complexity and mental load while decreasing modularity.

      Third is that execution is always more complicated than data. Instead of something simple like data (which is simple because it is static and self-evident), you now have it mixed with something complicated that can't be observed unless it runs and all of the states at each line or fragment are inspected. Execution is largely a black box; data is clear. Mixing them makes the data opaque again.
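
      To make the first and third points concrete, here is a minimal Scala sketch (the names romanNumeral and spell are my own hypothetical example, continuing the map-as-function idea from earlier in the thread, not code from the post):

      ```scala
      object MapAsFunctionCosts {
        def main(args: Array[String]): Unit = {
          val romanNumeral: Map[Int, String] = Map(1 -> "I", 2 -> "II", 3 -> "III")
          val spell: Int => String = n => "x" * n // an ordinary total function

          // Edge case: the two call sites look identical, but only one is safe.
          println(spell(4))           // fine: "xxxx"
          // println(romanNumeral(4)) // throws NoSuchElementException: key not found: 4

          // The caller must remember the extra context and handle the missing key.
          println(romanNumeral.getOrElse(4, "?")) // "?"

          // Data vs. execution: the map's contents are fully observable as data,
          // while the function stays opaque until it is run on specific inputs.
          println(romanNumeral) // Map(1 -> I, 2 -> II, 3 -> III)
          println(spell)        // an opaque reference, not the rule it computes
        }
      }
      ```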
