Comment by mojuba
18 years ago
With that approach you won't be able to write a compiler for anything other than a (suboptimal) Lisp derivative, which itself is just a small part of the computing world. Take a look at the GNU Compiler Collection and see how far the Lisp macro paradigm is from what it takes to write a working optimizing compiler.
I don't claim that this is the only way to learn how to write compilers, it's just one surprisingly effective way.
And I actually think skill in writing general, simple, macro-style compilers is more broadly useful than knowing the ins and outs of optimizing compilers, but that's just me.
For the sake of discussion, what are some of the important things for optimizing compilers that you can't do with this approach? I'm having trouble finding any -- both in a literal sense of possibility, and from a practical standpoint.
I may be underestimating macros, but I can't imagine any optimization technique that can be done with them when generating low-level output, be it native or virtual machine code. Can you demonstrate (just theoretically, of course) copy propagation, removing loop invariants, or automatic inlining of functions, to name a few?
It's not hard to see how macros could do this. Macros just transform code into other code, which is what optimizations do too. For example, if you have an s-expr representing a loop, imagine a function which accepts that s-expr and returns a new one with invariants moved outside the loop body.
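To make that concrete, here's a minimal sketch in Python standing in for a real Lisp macro, with s-expressions modeled as nested lists. The `loop` form and the helper names are hypothetical, purely for illustration:

```python
def refs_var(expr, var):
    """True if the expression mentions the loop variable anywhere."""
    if isinstance(expr, list):
        return any(refs_var(e, var) for e in expr)
    return expr == var

def hoist_invariants(expr):
    """Rewrite (loop i body...) so that any body statement that never
    mentions i is moved into a surrounding let block. Any other form
    is returned unchanged, just as a macro would pass it through."""
    if not (isinstance(expr, list) and expr and expr[0] == "loop"):
        return expr
    _, var, *body = expr
    invariant = [e for e in body if not refs_var(e, var)]
    variant = [e for e in body if refs_var(e, var)]
    if not invariant:
        return expr
    return ["let", invariant, ["loop", var, *variant]]

loop = ["loop", "i",
        ["set!", "k", ["*", "a", "b"]],   # invariant: no use of i
        ["print", ["+", "i", "k"]]]       # depends on i
print(hoist_invariants(loop))
# → ['let', [['set!', 'k', ['*', 'a', 'b']]],
#     ['loop', 'i', ['print', ['+', 'i', 'k']]]]
```

A real implementation would also need to check that hoisted expressions are side-effect-free and that the loop runs at least once, but the shape of the transformation, code in, rewritten code out, is exactly what a macro does.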
You're right that if you stick to standard Lisp compile-time macros and use only those, you have a hard time reducing code that's already been macroexpanded, or applying global optimizations.
But when you mix macros and the language itself, what you find is that you already have the pieces of what you need for a serious compiler: a built-in symbol table, a way to manipulate the parse tree, an easy way to do local expansions, and most importantly, a fully featured language.
You can do this with existing Lisps by a few methods: making first-class runtime macros, for example, or by saving the source code and working over it in passes.
Does this explain it?
And even if writing a more complete compiler weren't difficult enough, there's much more to it. There are complex details like exception handling (stack unwinding, signals...), graphical debuggers, interfacing with GUI libraries, threads, etc.
The fact that only commercial (and expensive) Lisp implementations have all these features is a hint that they're not trivial.
I'm not sure that's true. Right now I am working on GNU CLISP with bindings to Oracle and a GUI (via Ltk). Sure it's not trivial, but it's not impossible either.
That might fulfill the requirements for "enterprise" software, the kind of work that people happily convert to web apps. Making desktop software can be much, much more demanding.
And in fact what we all need is a good, commercial grade open-source Lisp compiler + tools.
I'm sorry, but I strongly disagree. Lisp's problems are not technical, they're social. We have a commercial-grade open-source lisp compiler. It's SBCL. And we have really excellent models for tools. If someone would extract SLIME from emacs and into an editor with less history behind it and more popular appeal, you'd have most of the tools you need.
Lisp is suffering because its community is fragmented and it has no leaders. Can you name a popular language that doesn't have an iconic corporation or person behind it, rallying and focusing the community? That condition is actually quite rare.