The Definitive ANTLR Reference
Joe Kauzlarich writes "Finally, someone has done us all the great service of publishing a
book about the second most well-known compiler compiler, Terence
Parr's Antlr, and it was written, moreover, by Parr himself and
published as part of the somewhat-usually-reliable Pragmatic Bookshelf
series. Take note, while it requires a JVM to run, Antlr is not just
for Java developers; it generates compilers in Python, Ruby, C, C++,
C# and Objective-C. Also note that this book is more than just an
elaborated man-page; it is also an excellent introduction to the
concepts of compiler and parser design." Keep reading for the rest of Joe's review.
First off, I have no preference between Yacc-style parsers, JavaCC and
Antlr; I've never used Yacc, have used JavaCC in college and have
since played with Antlr and am just as ignorant in the use of them
all. The fundamental difference is that Antlr is a top-down LL(*) (simply put, variable-lookahead) parser generator while Yacc is a bottom-up LR parser generator. JavaCC is also top-down, but employs a
different parsing strategy. The book describes the meanings of these
terms in simple detail.
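To make the top-down/LL idea concrete, here is a minimal hand-written recursive-descent parser for integer addition expressions in Python. This is an illustrative sketch of the parsing strategy, not code generated by Antlr; all names are my own.

```python
# A recursive-descent (top-down, LL) parser for the toy grammar:
#   expr : INT ('+' INT)* ;
# Top-down parsers turn each grammar rule into a function and decide
# which alternative to take by looking ahead at the next token(s).

def tokenize(text):
    """Split the input into ('INT', n) and ('PLUS', '+') tokens."""
    tokens = []
    for part in text.replace('+', ' + ').split():
        tokens.append(('PLUS', part) if part == '+' else ('INT', int(part)))
    return tokens

def parse_expr(tokens):
    """Implements the rule  expr : INT ('+' INT)* ;  returning the sum."""
    pos = 0
    kind, value = tokens[pos]
    if kind != 'INT':
        raise SyntaxError('expected INT')
    total, pos = value, pos + 1
    # One token of lookahead is enough here, making this an LL(1) rule.
    while pos + 1 < len(tokens) and tokens[pos][0] == 'PLUS':
        kind, value = tokens[pos + 1]
        if kind != 'INT':
            raise SyntaxError("expected INT after '+'")
        total, pos = total + value, pos + 2
    if pos != len(tokens):
        raise SyntaxError('unexpected trailing input')
    return total
```

A bottom-up (LR) parser of the Yacc family would instead shift tokens onto a stack and reduce them to rules once a complete right-hand side appears; its generated tables replace the hand-written control flow above.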
The Definitive ANTLR Reference
author: Terence Parr
pages: 361
publisher: Pragmatic Bookshelf
rating: 9
reviewer: Joe Kauzlarich
ISBN: 978-0-9787392-5-6
summary: introduction to parser/compiler design using ANTLR
I have learned from experience that good documentation for any of these products is hard to come by and difficult to follow, simply because the subject matter is abstruse and few, until now, have ventured to write expository literature explaining the myriad concepts to the non-academician. Of the three mentioned above, Antlr appears to be the most 'modern' and can also generate lexers from within the same grammar definition file, so the two notions are integrated. Antlr also has a useful IDE called AntlrWorks with visualization features, which makes grammar construction far simpler for a beginner.
That said, I don't wish to use this review to push Antlr over its alternatives, but only to press the point that this book serves to introduce not only Antlr to the average programmer, but the concepts of parser design as well. Those concepts become necessary to understand while writing and debugging grammars, as not everything written in Backus-Naur Form will produce a working parser, and this holds true for any parser generator. Learning what works and what doesn't, as well as what workarounds are available, is key to becoming proficient in Antlr, Yacc or JavaCC. Once proficiency is achieved, you'll have the valuable skill of producing domain-specific languages on demand.
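A classic example of BNF that won't produce a working parser for a top-down tool is left recursion. The sketch below (in Python, with invented names, purely to illustrate; it is not what any generator emits) shows why a rule like `expr : expr '+' INT | INT` loops forever under recursive descent, and the standard rewrite that fixes it.

```python
# A naive top-down transcription of the left-recursive rule
#   expr : expr '+' INT | INT ;
# calls itself before consuming any input, so it can never terminate.
def expr_left_recursive(tokens, pos=0):
    value, pos = expr_left_recursive(tokens, pos)  # infinite recursion
    ...

# The standard workaround rewrites the rule without left recursion:
#   expr : INT ('+' INT)* ;
# which a top-down parser implements as a simple loop.
def expr_iterative(tokens):
    pos = 0
    value = int(tokens[pos])
    pos += 1
    while pos + 1 < len(tokens) and tokens[pos] == '+':
        value += int(tokens[pos + 1])
        pos += 2
    return value
```

Bottom-up generators like Yacc handle left recursion natively, which is one reason grammars are rarely portable between the two families unchanged.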
Terence Parr, as mentioned before, is not only the author and maintainer of Antlr but also the author of this book. Antlr is on its long-awaited third version and has been maintained by Parr throughout the project's lifetime. He is a university professor and himself developed the path-breaking LL(*) parsing strategy employed by Antlr.
Parr begins with a one-chapter background in computer language design before diving into a simple example of a parser for basic integer expressions. Part II is the meat of the book, describing the various aspects of writing grammars for Antlr. Generally speaking, he covers the basic semantics of grammar writing; the many optimization, supplementary and 'workaround' options provided by Antlr; grammar actions and attributes; syntax trees; error reporting; and related practical topics.
The third part, Understanding Predicated LL(*) Grammars, is the valuable 'textbook' portion of the book. It gives readers a short and comprehensible introduction to exactly what predicated-LL(*) means as well as a look at how competing parser generators work in contrast.
Both the second and third parts are scattered with theoretical tidbits to help language designers better understand why grammars must work as they do. Those who can't pick their nose without a rudimentary theoretical overview of the subject can enjoy a few casual browsings through the book before even sitting in front of a computer. It works *almost* that well as a textbook, though it still doesn't approach such classics as Aho et al.'s Compilers: Principles, Techniques, and Tools (if you want to get seriously involved in compiler design). Take it for what it is, though: a chance to learn a tool of possible value without, on the one hand, having to dig through old mailing lists and last-minute READMEs, as was much the case a year ago, or, on the other, devoting painstaking class and study time to a lot of theory you won't find of practical value.
So I'll recommend this book on the basis that there's nothing else like it available; and don't wait until a project comes along that requires knowledge of compiler design, because there's a heck of a learning curve (I'm still on the very low end, and I wrote a compiler in college). If you think compiler or parser design is interesting, or may conceivably find yourself writing a domain-specific language for your workplace, The Definitive ANTLR Reference is not only a good place to start, but one of the only places to start short of signing up for a university course.
You can purchase The Definitive ANTLR Reference from amazon.com.
What are the most "standard" parser generators? (Score:5, Interesting)
I recently ran across this problem at my job: we maintain compilers for several(!) in-house languages, and I recently re-wrote the one for the most simple of them, changing it from a collection of three separate utilities (the most complicated of which was written in FORTRAN, which is generally horrible for manipulating text) into a Lex/Yacc/C (or rather, Flex/Bison/C) compiler.
I chose Lex and Yacc not because they were good, but because they're (in my opinion) very likely to be around 50 years from now. Are there any other compiler generators (such as possibly ANTLR) that might also meet this criterion, and would have been a better choice?
Good stuff... (Score:5, Interesting)
They've drastically reduced the freely available documentation on their web page, so you are essentially forced to buy it.
A new framework comparable to ANTLR: Gazelle (Score:5, Interesting)
The primary thing I am trying to deliver is reusability of parsers. The open-source community should be able to cooperate to write canonical parsers for all popular languages, but this goal is hampered by the fact that almost all parsing tools (ANTLR included) encourage you to write non-reusable grammars, by virtue of the fact that you embed actions into the grammar.
Gazelle also takes an interpreter+JIT approach instead of emitting code in an imperative language. So, for example, if you want a really fast HTTP parser from Ruby (which is precisely the raison d'être for Mongrel), you can use the HTTP Gazelle parser from Ruby; but since the parsing is actually performed by the interpreter+JIT (written in C), you get extremely fast parsing without writing a line of C.
Gazelle is still very immature and not ready for people to try out, but I would encourage anyone who's interested to follow the Gazelle category on my blog [reverberate.org].
You can also check out:
- the current draft of the manual [reverberate.org], which will give you a better idea of the specifics of where I'm going with this.
- a graphical dump of the grammar for JSON [reverberate.org], which the current development code is capable of generating.
I really like ANTLR. (Score:4, Interesting)
I also like Sorcerer. In PCCTS 1.3, Sorcerer is a kind of tree traversal grammar tool. You create ASTs, with ANTLR, and Sorcerer creates a program which will traverse them, and call action routines where you specify. It's really pretty neat.
I'm thinking about something else, though. I'm thinking we should really think about programming with grammars more than we do. Say, for example, you have a user interface of some kind. It gets certain events, its state changes, and it reacts to the environment. A good fraction of the set of state changes can be captured with some kind of finite state machine. But a context-free grammar is equivalent to a finite state machine with a pushdown list to hold context. So, it seems very likely to me that a good way to build user interfaces is to somehow compose grammars. The tokens are events, and the action routines are the non-FSM state changes.
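The comment's idea — events as tokens, nested UI contexts held on a pushdown list — can be sketched as an explicit stack machine. Everything below is hypothetical, my own illustration in Python rather than anything from ANTLR or PCCTS:

```python
# A tiny pushdown machine for UI events: the stack is the "pushdown list
# to hold context", so dialogs can nest to any depth, something a plain
# finite state machine cannot track. All state/event names are invented.

def run_ui(events):
    stack = ['main']                  # bottom of stack: the main screen
    for event in events:
        if event == 'open_dialog':
            stack.append('dialog')    # push a nested context
        elif event == 'close':
            if len(stack) > 1:
                stack.pop()           # return to the enclosing context
        # any other event would be dispatched to the top-of-stack context
    return stack[-1]                  # the context left active at the end
```

Unlike a flat FSM, the stack remembers how many contexts are open, which is exactly the extra power a context-free grammar adds.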
So, why is this interesting in this discussion? Well, ANTLR from PCCTS 1.3 generates recursive descent parsers, and YACC/Bison generate bottom-up parsers. This means that the pushdown list for ANTLR is the execution stack, and the pushdown list for YACC/Bison is in data space. It's hard to see how one would maintain multiple ANTLR-style automata concurrently, but that's what you want to do for this style of programming.
Generally, YACC/Bison pushdown lists and other parsing data are kept in global variables, but there is a way to make Bison generate C++-usable grammars where the parsing data are saved in a heap-allocated object. This means they have a fixed size, which may be a problem. But it would not take a lot of work to change the parsing algorithm for Bison to make the pushdown list a linked list, and that might make things easier.
So, in short, I think it's pretty interesting to look at parsing, even if you're not writing compilers.
Re:ANTLR vs Gold Parser (Score:3, Interesting)
I am posting this simply so that others can see a different view and judge for themselves.
I have used ANTLR for years (not version 3) and have had no trouble getting it to do what I want. I have not tried to get it to interpret COBOL code, however. I have even used ANTLR in .NET and found it to be easy, breezy.
Keep in mind that this is no drag-and-drop technology for lightweights. You are really going to have to know your compiler and formal language theory and be willing to study some sample grammars [antlr2.org]. You should also be comfortable with BNF, and prior experience with YACC is a plus.