This looks like something similar to Crenshaw's excellent tutorial of the same name:
https://compilers.iecc.com/crenshaw/
and its x86 port: https://github.com/lotabout/Let-s-build-a-compiler
As interesting as Lisp-family languages are, I still think it's better to use something with more traditional syntax and start with parsing, because that both reaches a much wider audience and gives a very early introduction to thinking recursively --- the latter being particularly important for understanding of the process in general. A simple expression evaluator, that you can later turn into a JIT and then a compiler, is always a good first exercise.
> a very early introduction to thinking recursively
Not everything needs to be on a so-you-have-never-programmed-before level. This series explicitly assumes "some knowledge of native-code build processes, Lisp, C, and x86 assembly language". People who know Lisp should have had an introduction to thinking recursively already.
Parsing would be a useless distraction for someone interested in writing a Scheme compiler in Scheme.
I think I partially agree, but that it mainly depends on the audience. I've found that parsing and recursion appear in more areas than just compilers, to the point where it'd be relatively difficult to avoid them. Machine code generation and the underlying implementation of high-level primitives (closures, tail-call recursion, and the like) haven't, at least in my experience, been "naturally-occurring" to the same extent.
In terms of creating a "learn compilers from scratch" resource, Crenshaw's approach is definitely better. The trade-off would be that it'd take longer to get past the "writing a recursive descent/LR parser" phase, and it might never get to higher-level language features at all depending on the input language you go with.
I've worked my way through quite a few tutorials on implementing Scheme, and I think you're absolutely correct that detailed material on implementing high-level features, especially in machine code, is lacking. I'm very grateful for this tutorial. Can't wait for the next part!
Haven't started reading yet, but I'm really digging this trend of syntax-highlighted bare text blogs[0]. Is this a template or is it hand-crafted?
[0]: https://christine.website/blog/h-language-2019-06-30
Currently I'm using a modified version of the after-dark[0] theme for Zola.
[0] https://www.getzola.org/themes/after-dark/
Not related to this site, but my site is "hand-crafted":
http://www.oilshell.org/site.html
http://www.oilshell.org/blog/
And yes one of the main things I had to integrate myself was syntax highlighting (via pygments).
FWIW, I think proportional fonts for text and fixed width for code is more readable.
A few people asked me about the tools, and I dumped them here (but they are not supported, may not be runnable):
https://github.com/oilshell/blog-code/tree/master/tools-snap...
I do like the simplicity, but the lack of proportional fonts for the text makes reading much more effort than it should be. There's a reason printers have used proportional type and serifs for centuries.
The website seems to be using a template [0], and the CSS is a framework [1].
[0]: https://github.com/getzola/after-dark
[1]: https://github.com/egoist/hack
I have attempted following Ghuloum's paper, too. One difference is I wanted to make it as self-contained as possible and didn't want to depend on a C compiler or binutils. So I wrote a simple assembler. Here it is, all in Clojure:
https://github.com/nathell/lithium
It's dormant – I was stuck on implementing environments around step 7 of 24 – but someday I will return to it and make progress.
When I went through the Tiger book back at university, our teacher had a cool approach to overcome that.
Generate bytecode, but in a form that could be easily mapped to macros on a Macro Assembler, thus we only needed to write such macros for each target platform.
From a performance point of view it was quite bad, but we got complete AOT static binaries out of it anyway.
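For what it's worth, here's a rough sketch of that idea using C preprocessor macros in place of a real macro assembler (every name below is invented for illustration): the compiler emits its "bytecode" as a flat list of macro invocations, and each target only has to supply definitions for the macros.

    #include <stdio.h>

    #ifdef TARGET_X86
    /* This target's macros just print the corresponding x86 assembly. */
    #  define OP_PUSH_CONST(n)  printf("    push dword %d\n", (n))
    #  define OP_ADD()          printf("    pop eax\n    add [esp], eax\n")
    #else
    /* Fallback target: a tiny stack interpreter, so the same "bytecode"
       also runs directly with no assembler involved. */
    static int stack[256], sp;
    #  define OP_PUSH_CONST(n)  (stack[sp++] = (n))
    #  define OP_ADD()          (sp--, stack[sp - 1] += stack[sp])
    #endif

    int main(void) {
        /* The "bytecode" the compiler would emit for (+ 1 2): */
        OP_PUSH_CONST(1);
        OP_PUSH_CONST(2);
        OP_ADD();
    #ifndef TARGET_X86
        printf("%d\n", stack[0]);   /* prints 3 */
    #endif
        return 0;
    }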
Similar recent thread
"My first 15 compilers" https://news.ycombinator.com/item?id=20408011
In the same vein, I'm also going to recommend http://www.craftinginterpreters.com/
It's a work in progress, but very well made. I think Bob (who is writing the book) is a great educator.
It puzzles me sometimes why we programmers are so fascinated by compilers, interpreters, VMs, runtimes, etc. Many of these will never make it to the level of, say, a production C++ compiler or a Java VM, and yet we keep building small compilers.
I did the Nand2Tetris course which includes building a basic compiler.
It just helps you fully understand how you go from words in a file to actually doing computations, and how purely abstract ideas like a 'class' are implemented.
To be fair, I studied Physics and not CS so I didn't have the opportunity to study Compilers at University.
> so I didn't have the opportunity to study Compilers at University.
Lots of CS people haven't either. My university moved compiler theory to the Masters program.
All craftsmen take an interest in their tools, and in software the tools are made with the same processes we use on a day-to-day basis. It’s the same with blacksmiths and woodworkers to varying degrees.
Not all those who consider themselves programmers necessarily come from a CS background and so don't learn about concepts like these. To them, walkthroughs like these are fascinating.
Programmers with CS backgrounds also delight in these concepts and things.
Is it not a rite of passage to design & implement your own language, distilling your two years of knowledge and arrogance into one pathetic failure of a design? And from then on appreciating some sense of the difficulty of constructing and maintaining such things?
Perhaps not always languages, but such experiences are vital to our industry!
Relevant Steve Yegge blog post: https://steve-yegge.blogspot.com/2007/06/rich-programmer-foo...
As a side note, is there something similar for building a distributed application (could be a very simple NoSQL DB or maybe some stream processor)?
Props for not being yet another C-based tutorial. Quite an interesting read.
We live in a world where every PC has at least 2 GB of RAM and most CPUs are 64-bit, yet the section 'Data Representation' begins by explaining how 'everything' can be stored in a single 32-bit integer, if we limit integers to 30 bits. What the heck?
Generating 64-bit code would be simpler in many ways, but I decided not to go into it until the basic language features are put in place. The extent of the changes involved in switching to amd64 will help directly show why having an intermediate representation for the generated code would be valuable.
Besides, if you're looking for a high-performance Scheme that fully utilizes all available system resources, there are definitely better options available. :)
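For readers wondering what that 30-bit trick actually looks like: the usual approach is to keep a small tag in the low bits of the one machine word, so integers, booleans, characters and so on never need a heap allocation. A minimal sketch in C, with tag values that are only illustrative (the tutorial's exact encoding may differ):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t value;           /* every value fits in one 32-bit word */

    #define TAG_MASK   0x3u
    #define FIXNUM_TAG 0x0u           /* low bits 00: the other 30 bits hold the integer */

    static value make_fixnum(int32_t n) {
        return ((uint32_t)n << 2) | FIXNUM_TAG;   /* only a 30-bit range survives */
    }

    static int32_t fixnum_value(value v) {
        assert((v & TAG_MASK) == FIXNUM_TAG);
        return (int32_t)v >> 2;       /* arithmetic shift restores the sign */
    }

    int main(void) {
        printf("%d\n", fixnum_value(make_fixnum(-42)));   /* prints -42 */
        return 0;
    }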
For your initial design you could also have chosen to use an additional byte to represent the type of the value. Since representing the type only requires 2 or 3 bits, some bits would go unused (probably more, due to alignment requirements), but maybe later on in the development of the compiler those bits could be used to store some additional information. That would have made your code a lot simpler.
As you probably want to combine these values into structures representing the various language constructs, an additional byte to represent the type of the structure, and thus the type of its elements, would also be needed. Then you could do away with the extra bits representing the type.
I just think this is premature optimization and makes things unnecessarily complex, especially for your readers who might want to learn something from it.
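For comparison, here's a sketch in C of what that byte-per-type layout might look like (field and tag names are made up here). It is certainly simpler to read and debug; the trade-off is that each value grows from one 32-bit word to 8 bytes once padding is counted, and it no longer fits in one 32-bit register:

    #include <stdint.h>
    #include <stdio.h>

    enum tag { T_FIXNUM, T_BOOLEAN, T_CHAR, T_EMPTY_LIST };

    struct value {
        uint8_t tag;        /* the extra "type byte": 2-3 bits used, rest spare */
                            /* (3 bytes of padding follow on most 32-bit ABIs)  */
        int32_t payload;    /* the integer keeps its full 32 bits               */
    };

    static struct value make_fixnum(int32_t n) {
        struct value v = { T_FIXNUM, n };
        return v;
    }

    int main(void) {
        struct value v = make_fixnum(12345);
        printf("tag=%d payload=%d size=%zu\n", v.tag, (int)v.payload, sizeof v);
        return 0;
    }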
Keeping values small is still useful for the sake of the CPU cache.
Yes, RAM is plentiful, let's have every program use it to its fullest :-P
Also known as "unused RAM is wasted RAM".
https://cr.yp.to/bib/1995/wirth.pdf
Most CPUs are 8-bit, with 32-bit beginning to take over. They usually don't have more than 256K.
Most CPUs aren't in self-hosting systems.
It's premature optimisation... Java has conditioned everyone to believe that boxing variables is slow.
As Bill Gates was alleged to have said, "640k ought to be enough for anybody."
He did not say that, sorry.