
Comment by samsquire

1 year ago

I greatly enjoyed this comment of yours daltonpinto. Thank you.

I don't rejoice in code bases where every file has almost no logic or code in it, yet there are hundreds of methods and files with everything spread out. I have no idea how those projects fit together, because the actual logic is scattered everywhere.

For my personal side project hobby work it's all in one file.

> I have no idea how those projects fit together because the actual logic is spread out.

There are two kinds of mazes:

- those that are so tangled up that you can't make out any kind of structure.

- those that are so regular that you can't make out any kind of structure.

  • Exactly!

    Just because code is split up doesn't mean it was split up well.

    If you can't reason about where something should be, it may be badly organized.

    Even single file projects can be organized.

    If it is small enough and makes sense, why not?

    But even in a small file, how is it organized? Are utility functions kept at the bottom, so the business logic stays together and visible? Is there some other structure that makes it cleaner? Do function names follow a convention? What about variables?

    I once read the opinion that software developers have more in common with artists and writers than with engineers. Writing code is the art of expressing yourself well to your peers, in a way that a computer will also execute what you meant, not the other way around.

    The computer can understand whatever entangled mess you write; other people (which may include your future self) may not.

    • A word cloud could be a useful navigation technique if the words were clickable and the sizes of words could be customized.

      They could also link to other word clouds, generated and updated automatically.

> For my personal side project hobby work it's all in one file.

Ok, so this is work, but:

    find . -name '*.cs' -type f -print0 | wc --files0-from=-
    1035617 3438912 47446211 total

Not many editors are comfortable with a million-line document that's 47 MB. And that doesn't include generated code (which I very rarely need to look at, but it's right there under F12 if I do).

  • A 100 GB file was OK in those editors, even with syntax highlighting. That is an extreme case, because saving did take some time, but there are ways to optimize for that should it ever become popular. IMHO 640K per file is enough for everybody.

      dd if=/dev/urandom bs=1M count=100k | tr -dc 'A-Za-z0-9\n' | fold -w 130 > largefile.c

  • Thanks for the hint about `--files0-from`, BTW. I thought I needed `xargs` for this.
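
    For comparison, here is a minimal sketch of the `xargs` route the commenter had in mind. One caveat: for very long file lists, `xargs` splits the arguments across several `wc` invocations, so you can get multiple `total` lines instead of one grand total, which is exactly what `--files0-from` avoids.

```shell
# Same counting via xargs; -0 matches find's -print0 NUL separators.
# Caveat: with a huge file list, xargs runs wc more than once and you
# get several "total" lines rather than a single sum.
find . -name '*.cs' -type f -print0 | xargs -0 wc
```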

  • We need tools that process command-line output and large files, build tables of contents dynamically based on architectural rules or comments, and allow efficient navigation. I need overlapping buckets, or tags.

    I've been thinking about it lately, but what I've seen in IDEs is not what I want.

    I liked OpenGrok
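
    As a crude sketch of the dynamic table-of-contents idea, assuming a hypothetical convention of section-marker comments (e.g. `// == Parsing ==`) and a made-up file name, plain `grep` already gets surprisingly far:

```shell
# Hypothetical convention: "// == Section ==" comments mark buckets.
# grep -n emits each marker with its line number -- a crude, always
# up-to-date table of contents you can jump to in any editor.
grep -n '^// ==' big_file.cs
```

    Overlapping buckets would need something richer (multiple tags per marker line), but the principle is the same: derive navigation from the text on demand instead of maintaining it by hand.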

    • Are you willing to do that by hand? Have you heard of Donald Knuth, the author of _The Art of Computer Programming_, and the technique he created: Literate Programming?

The bane of "functional": no such thing as objects or classes, just a bunch of shotgun-spread state fragments.

The cargo cult runneth strong on the web.