
Comment by hamstergene

8 days ago

Reminds me of another recurring idea: replacing code with flowcharts. I first saw that idea coming from some obscure Soviet professor in the 80s, and then again and again from different people, in different countries, in different contexts. Every time it is sold as a total breakthrough in simplicity, and every time it turns out to be a bloat of complexity and a productivity killer instead.

Or weak typing. How many languages thought that simplifying strings and integers and other types into "scalar", and making any operation between any operands meaningful, would simplify the language? Yet every single one ended up becoming a total mess instead.

Or constraint-based UI layout. It looks so simple, so intuitive on small examples, yet it totally fails to scale to even a dozen basic controls. And yet the idea keeps reappearing from time to time.

Or attempts at dependency management by making some form of symlink to another repository, e.g. git submodules, or CMake's FetchContent/ExternalProject? Yeah, good luck scaling that.

Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save themselves the time of implementing one more incarnation of an idea already known to be bad.

> Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save themselves the time of implementing one more incarnation of an idea already known to be bad.

I'm deeply curious to know how you could easily and definitively work out what is and is not an idea that "Definitely Don't Work".

Mathematics and Computer Science seem to be littered with unworkable ideas that have made a comeback when someone figured out how to make them work.

  • Well, "Hall of Ideas That Are So Difficult To Make Work Well That They May Not In Fact Be Much Use" doesn't roll off the tongue as smoothly.

    What this Hall could contain, for each idea, is a list of reasons why the idea has failed in the past. That would at least give future Quixotes something to measure their efforts by.

    • Ok, so better documentation of what was tried, why, and how it failed, so as to make it obvious whether it's viable to try again or not.

      I can get behind that :)...

Constraint-based layout works, but you need a serious constraint engine, such as the one in the sketch editors of Autodesk Inventor or Fusion 360, along with a GUI to talk to it. Those systems can solve hard geometry problems involving curves, because you need that when designing parts.

Flowchart-based programming scales badly. Blender's game engine (abandoned) and Unreal Engine's "blueprints" (used only for simple cases) are examples.

Not sure if you’re talking about DRAKON here, but I love it for documentation of process flows.

It doesn’t really get complicated, but you can very quickly end up with drawings with very high square footage.

As a tool for planning, it’s not ideal, because “big-picture” is hard to see. As a user following a DRAKON chart though, it’s very, very simple and usable.

Link for the uninitiated: https://en.m.wikipedia.org/wiki/DRAKON

For young engineers, it is a good thing to spend time implementing what you call "bad ideas". In the worst case, they learn from their mistakes and gain valuable insight into the pitfalls of such ideas. In the best case, you get a technological breakthrough when someone finds a way to make such an idea work.

Of course, it's best that such learning happens before one has mandate to derail the whole project.

> Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save themselves the time of implementing one more incarnation of an idea already known to be bad.

FWIW, neural networks would have been in that pool until relatively recently.

  • If we change "definitely don't work" to "have the following so-far-insurmountable challenges", it addresses cases like this. The hardware scaling limitations of neural networks have been known for a long time - Minsky and Papert touched on this in Perceptrons in 1969.

    The Hall would then end up containing a spectrum ranging from useless ideas to hard problems. Distinguishing between the two based on documented challenges would likely be possible in many cases.

Most popular dependency management systems literally link to a git commit SHA (or tag); see the lock file that npm/rebar/other tools give you. They just do it in a recursive way.

  • They do way more than that. For example, they won't allow you to have Foo-1 that depends on Qux-1 and Bar-1 that depends on Qux-2, where Qux-1 and Qux-2 are incompatible and can't be mixed within the same static library or assembly. But they may allow it when a static, private Qux is mixed inside dynamic Foo and Bar and the dependency manager is aware of that. (A minimal sketch of this kind of conflict check follows after this thread.)

    A native submodule approach would fail at link time or runtime due to an attempt to mix incompatible files in the same build, or, in some build systems, simply due to duplicate symbols.

    That "just in a recursive way" addition hides a lot of important design decisions that separate having a dependency manager from not having one.

    • They do way less than that. They just form a final list of locks and download those at build time. Of course, you also have to "recursively" go through your whole dep tree and add a submodule for each subdependency (I recommend adding them in the main repo). Then you will have to waste an infinite amount of time setting include dirs or something. If you have two libs that each require a different specific version of a shared lib, no dep manager will help you. Using submodules is a questionable practice, though; it's useful for simple stuff, like 10 deps in total in the final project.
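
To make the diamond-dependency point above concrete, here is a minimal Python sketch of the check a dependency manager performs and a plain submodule checkout does not. The package names (Foo, Bar, Qux) come from the comment above; the data format and the resolve() function are simplified, hypothetical stand-ins, not any real tool's API.

```python
# Minimal sketch of the "diamond dependency" check discussed above.
# Package names (Foo, Bar, Qux) and the data format are hypothetical;
# real managers (npm, cargo, ...) solve much richer version constraints.

# package -> list of (dependency, required major version)
DEPENDENCIES = {
    "App": [("Foo", 1), ("Bar", 1)],
    "Foo": [("Qux", 1)],  # Foo-1 needs Qux-1
    "Bar": [("Qux", 2)],  # Bar-1 needs Qux-2 -> incompatible with Qux-1
}

def resolve(root):
    """Walk the dependency tree, failing fast on incompatible requirements."""
    chosen = {}  # package -> pinned version so far (effectively the lock file)
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep, version in DEPENDENCIES.get(pkg, []):
            if dep in chosen and chosen[dep] != version:
                raise RuntimeError(
                    f"{dep} is required as both v{chosen[dep]} and v{version}; "
                    "a plain submodule checkout would only surface this at "
                    "link time or as duplicate symbols"
                )
            if dep not in chosen:
                chosen[dep] = version
                stack.append(dep)
    return chosen

if __name__ == "__main__":
    print(resolve("App"))  # raises RuntimeError because of the Qux conflict
```

A lock file is essentially the chosen mapping written to disk; the extra value a manager adds over raw submodules is doing this transitive walk and conflict check for you.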

> Or weak typing. How many languages thought that simplifying strings and integers and other types into "scalar", and making any operation between any operands meaningful, would simplify the language? Yet every single one ended up becoming a total mess instead.

Yet JavaScript and Python are the most widely used programming languages [1], which suggests your analysis is mistaken here.

[1] https://www.statista.com/statistics/793628/worldwide-develop...

  • Python went through a massive effort to add support for type annotations due to user demand.

    Similarly, there's great demand for a typed layer on top of Javascript:

    - Macromedia: (2000) ActionScript

    - Google: (2006) GWT [Compiling Java to JS], and (2011) Dart

    - Microsoft: (2012) Typescript

    • You’re talking about static typing, the opposite of which is dynamic typing. User hamstergene is talking about weak vs. strong typing, which is another thing entirely. Python has always been strongly typed, while JavaScript is weakly typed. Many early languages with dynamic types also experimented with weak typing, but this is now, as hamstergene points out, considered a bad idea, and virtually all modern languages, including Python, are strongly typed.
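
To illustrate the weak-vs-strong distinction drawn above (as opposed to static vs. dynamic), here is a small Python snippet; the JavaScript behaviour is described only in comments, and the exact TypeError message may vary slightly between Python versions.

```python
# Strong vs. weak typing, which is orthogonal to static vs. dynamic typing.
# Python is dynamically but strongly typed: types are checked at runtime,
# and the runtime refuses to silently coerce unrelated types.

try:
    result = "1" + 2                 # mixing str and int
except TypeError as exc:
    print("Python (strongly typed):", exc)
    # -> can only concatenate str (not "int") to str

# Weakly typed JavaScript silently coerces the same expression instead:
#   "1" + 2   // "12"  (string concatenation)
#   "1" - 2   // -1    (numeric subtraction)
# Those context-dependent coercions are the "total mess" described upthread.

# Static typing is a separate axis: TypeScript rejects the mismatch at
# compile time, while Python's optional annotations are checked by external
# tools such as mypy, not by the interpreter itself.
```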

This is a recurring topic indeed. I remember it being a hot topic at least twice: first when ALM tools were introduced (e.g. the Borland ALM suite - https://www.qast.com/eng/product/develop/borland/index.htm), and next when the BPML language became popular - processes were described by "marketing" and the software was, you know, generated automatically.

All this went out of fashion, leaving behind some good stuff that was built at that time (the remaining 95% was crap).

Today's "vibe coding" ends when ChatGPT and the like want to call a method on some object that does not exist (but which existed in thousands of other objects the LLM was trained on, so it should work here). Again, we will be left with the good parts, the rest will be forgotten, and we will move on to the next big thing.