Comment by jart

4 years ago

TCC is non-optimizing, and its 64-bit implementation was contributed by someone else and never finished. Borland Pascal has an IDE. Their executable has 7.141258 bits per byte of entropy. That's even higher than SectorLISP, a 436-byte development environment, which only has 6.300498 bits per byte of entropy. I wish I could travel back in time and the people who wrote Turbo Pascal could probably teach me a thing or two about size optimization tricks.

> I wish I could travel back in time and the people who wrote Turbo Pascal could probably teach me a thing or two about size optimization tricks.

Uh you’re doing fine. You are a complete beast and deeply inspiring.

P.S. At Microsoft I worked with Anders Hejlsberg, who created Turbo Pascal (and C#, TypeScript, etc.). I took full advantage of my position and peppered him with questions about compiler writing, which he endured with grace and good humor.

  • Amazing opportunity. That software is just crazily good.

    The question is: even with him, could we do that for other software?

How do you even define “entropy” of a program? It’s a byte stream where each byte is important.

  • Shannon entropy. https://github.com/jart/cosmopolitan/blob/master/libc/rand/m... You can even measure a program in terms of physics constants.
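    The linked repo has its own measurement helper; without assuming anything about that implementation, here is a minimal sketch of the same idea: treat the executable as a stream of bytes, estimate each byte value's probability from its frequency, and compute the Shannon entropy in bits per byte (0.0 for a constant stream, 8.0 at the uniform ceiling).

    ```python
    import math
    from collections import Counter

    def bits_per_byte(data: bytes) -> float:
        """Empirical Shannon entropy of a byte string, in bits per byte."""
        if not data:
            return 0.0
        n = len(data)
        counts = Counter(data)  # frequency of each distinct byte value
        # H = -sum(p * log2(p)) over observed byte values; "+ 0.0" avoids -0.0
        return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

    # One of each byte value is perfectly uniform, so it hits the 8-bit ceiling:
    print(bits_per_byte(bytes(range(256))))  # → 8.0
    ```

    By this measure, numbers like 7.14 bits/byte say the binary is already dense (hard to compress further), while 6.30 bits/byte leaves more statistical slack.
    
    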

    • Not trying to be rude here; Just trying to understand…

      My issue is: sure, you can measure it, but does the result even make sense? Like, OK, we can compute how “random” an instruction stream is, but what purpose does it serve?

      ISAs aren’t designed in a random manner, so what’s the point in comparing the entropy values of two different programs? The only thing I can think of is determining how compressible a given program is. But, that doesn’t tell us anything about performance/instruction (or byte).[a]

      [a] For example, AES-512 can pack massive amounts of functionality into 5 or so bytes