The Definitive ANTLR Reference

Joe Kauzlarich writes "Finally, someone has done us all the great service of publishing a book about the second most well-known compiler compiler, Terence Parr's Antlr, and it was written, moreover, by Parr himself and published as part of the somewhat-usually-reliable Pragmatic Bookshelf series. Take note: while it requires a JVM to run, Antlr is not just for Java developers; it generates parsers in Python, Ruby, C, C++, C#, and Objective-C. Also note that this book is more than an elaborated man page; it is also an excellent introduction to the concepts of compiler and parser design." Keep reading for the rest of Joe's review.
The Definitive ANTLR Reference
author: Terence Parr
pages: 361
publisher: Pragmatic Bookshelf
rating: 9
reviewer: Joe Kauzlarich
ISBN: 978-0-9787392-5-6
summary: An introduction to parser/compiler design using ANTLR
First off, I have no preference among Yacc-style parsers, JavaCC, and Antlr; I've never used Yacc, used JavaCC in college, and have since played with Antlr, so I'm roughly equally ignorant in the use of them all. The fundamental difference is that Antlr is a top-down LL(*) (simply put, variable-lookahead) parser generator, while Yacc is a bottom-up LR parser generator. JavaCC is also top-down but employs a different parsing strategy. The book explains what these terms mean in simple, clear detail.
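As a concrete illustration (my own, not taken from the book): a bottom-up tool like Yacc accepts a left-recursive rule directly, whereas a top-down tool like Antlr wants the same language written with EBNF repetition, roughly like this:

    /* Left recursion, natural for Yacc/LR but rejected by an LL generator:
         expr : expr '+' INT | INT ;
       The same language, written the LL-friendly way with EBNF repetition: */
    expr : INT ('+' INT)* ;
    INT  : '0'..'9'+ ;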

I have learned from experience that good documentation for any of these products is hard to come by and difficult to follow, simply because the subject matter is obtuse and few, until now, have ventured to write expository literature explaining the myriad concepts to the non-academician. Of the three mentioned above, Antlr appears to be the most 'modern' and can also generate lexers from within the same grammar-definition file, so the two notions stay integrated. Antlr also has a useful IDE called AntlrWorks with visualization features, which makes grammar construction far simpler for a beginner.

That said, I don't wish to use this review to push Antlr over its alternatives, but only to press the point that this book serves not only to introduce Antlr to the average programmer but to introduce the concepts of parser design as well. Understanding those concepts becomes necessary while writing and debugging grammars, since not everything written in Backus-Naur Form will produce a working parser, and this holds true for any parser generator. Learning what works and what doesn't, as well as what workarounds are available, is key to becoming proficient in Antlr, Yacc, or JavaCC. Once proficiency is achieved, you'll have the valuable skill of producing domain-specific languages on demand.

Terence Parr, as mentioned before, is not only the author and maintainer of Antlr but also the author of this book. Antlr is on its long-awaited third version and has been maintained by Parr throughout the project's lifetime. He is a university professor and himself developed the path-breaking LL(*) parsing strategy employed by Antlr.

Parr begins with a one-chapter background on computer-language design before diving into a simple example of a parser for basic integer expressions. Part II is the meat of the book, describing the various aspects of writing grammars for Antlr. Generally speaking, he covers the basic semantics of grammar writing; the many optimization, supplementary, and 'workaround' options provided by Antlr; grammar actions and attributes; syntax trees; error reporting; and related practical topics.
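For flavor, here is my own rough sketch (not copied from the book) of the kind of integer-expression grammar that opening example builds toward; rule and token names are illustrative:

    grammar Expr;

    prog  : stat+ ;
    stat  : expr NEWLINE ;
    expr  : mult (('+' | '-') mult)* ;
    mult  : atom (('*' | '/') atom)* ;
    atom  : INT
          | '(' expr ')'
          ;

    INT     : '0'..'9'+ ;
    NEWLINE : '\r'? '\n' ;
    WS      : (' ' | '\t')+ { skip(); } ;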

The third part, Understanding Predicated LL(*) Grammars, is the valuable 'textbook' portion of the book. It gives readers a short and comprehensible introduction to exactly what predicated-LL(*) means as well as a look at how competing parser generators work in contrast.
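The 'predicated' part refers to syntactic (and semantic) predicates. Here is a hedged sketch of the classic situation, with illustrative rule names, where no fixed amount of lookahead can choose between alternatives:

    /* Both alternatives may begin with the same arbitrarily long token
       sequence, so the (...)=> syntactic predicate tells the generated
       parser to try 'declaration' speculatively and back off if it fails. */
    stat : (declaration) => declaration
         | expression
         ;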

Both the second and third parts are sprinkled with theoretical tidbits to help language designers better understand why grammars must work as they do. Those who can't pick their nose without a rudimentary theoretical overview of the subject can enjoy a few casual browsings through the book before even sitting down at a computer. It works *almost* that well as a textbook, though it still doesn't approach such classics as Aho et al.'s Compilers: Principles, Techniques, and Tools (if you want to get seriously involved in compiler design). Take it for what it is, though: a chance to learn a tool of possible value without, on the one hand, having to dig through old mailing lists and last-minute READMEs, as was largely the case a year ago, or, on the other hand, devoting pain-staking class and study time to a lot of theory you won't find of practical value.

So I'll recommend this book on the basis that there's nothing else like it available; and don't wait until a project comes along that requires knowledge of compiler design, because there's a heck of a learning curve (I'm still on the very low end, and I wrote a compiler in college). If you think compiler or parser design is interesting, or you may conceivably write a domain-specific language for your workplace, The Definitive Antlr Reference is not only a good place to start but one of the only places to start short of signing up for a university course.

You can purchase The Definitive ANTLR Reference from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Comments
  • Why is it called ANTLR when it's an LL parser? Personally I prefer LR parsers, even though you can't parse Java with one (as far as I know).
    • by JonTurner ( 178845 ) on Wednesday May 28, 2008 @02:48PM (#23574295) Journal
      Gotta look at the prefix: "ant" meaning "not".
      IOW, AntLR is a "not LR" parser.

      Hey, it made sense when I wrote it.
    • Re: (Score:3, Informative)

      by compro01 ( 777531 )
      I believe the name means ANother Tool for Language Recognition. Nothing to do with LL vs. LR.
    • I had a compiler class a few years back; we wrote a Java compiler that produced MIPS code.

      Sure, we skimped on the Java implementation, but it was able to handle simple functions like factorial, a sorting algorithm, objects... I didn't manage to get inheritance to work, but that's my goof.

      Storm

    • Re: (Score:3, Funny)

      by DrEasy ( 559739 )
      He should have called it Parrser, named after the author, plus it has a nice pirate-y ring to it.
    • 1. Yes, ANTLR is an LL parser. There is a Java parser implementation in ANTLR as well.

      2. The set of languages parseable by an LR parser is a superset of the set parseable by an LL parser.

      Ergo, yes, you can indeed parse Java with an LR parser.
  • by mrchaotica ( 681592 ) * on Wednesday May 28, 2008 @02:50PM (#23574321)

    I recently ran across this problem at my job: we maintain compilers for several(!) in-house languages, and I recently re-wrote the one for the most simple of them, changing it from a collection of three separate utilities (the most complicated of which was written in FORTRAN, which is generally horrible for manipulating text) into a Lex/Yacc/C (or rather, Flex/Bison/C) compiler.

    I chose Lex and Yacc not because they were good, but because they're (in my opinion) very likely to still be around 50 years from now. Are there any other compiler generators (such as, possibly, ANTLR) that might also meet this criterion and would have been a better choice?

  • If you think compiler or parser design is interesting, or you may conceivably write a domain-specific language for your workplace, The Definitive Antlr Reference is not only a good place to start but one of the only places to start short of signing up for a university course.

    I am currently reading Programming Language Pragmatics. It's pretty good, I think, but then I have nothing to compare it with. I'll probably pick up the Antlr book too.
  • Antlr? Never heard of it. Now Lex/YACC, I have.
    • Re: (Score:3, Insightful)

      by OrangeTide ( 124937 )
      I prefer the LEMON parser generator [hwaci.com] over yacc/bison. You can use LEMON with lex/flex or just roll your own scanner by hand (which is usually pretty easy anyways).
        • Seconded. I am currently in the middle of writing a Smalltalk JIT using LLVM and am using Lemon for the front end because it is very simple to use and, being public domain, has no license issues to think about. I wrote my own scanner, but it's only about a dozen lines of code.

        I've not benchmarked the code it produces though, so I can't comment on its output.

    • by Haeleth ( 414428 )
      Hence "second most well-known". As in, less well-known than YACC and its clones, but better-known than any of the other million parser generators.

      I've no idea whether that's true, of course.
      • I've never heard of it either. Bison, YACC, CUP, LEMON, a couple other C-based ones, but not ANTLR.
        • Re: (Score:1, Funny)

          You mean to say the submitter didn't consult with you first before making that statement? The nerve of Slashdotters these days...
    • by quarrel ( 194077 )
      A common problem, that a book like this should help correct (NB: I haven't read it).

      Back when we were kids, the compiler courses taught us about limitations of LL(k) grammars that, it turns out, aren't true (OK, OK - the theory was in fact correct, but the practical implications they passed on were incorrect).

      Enter ANTLR - it changed the game - and you should get to know it and why it did. Generic LL(k) grammars at your fingertips.

      I thought I understood this stuff, because of so called "definitiv
      • Indeed... arbitrary lookahead in a parser is a godsend, and makes much more interesting grammars possible. Plus, the top-down parsing strategy makes it a *lot* easier to generate sensible error messages, since the parsing context is readily available.
  • One thing I really don't like is that an ANTLR grammar is limited to a single target language. You can use the same grammar to produce both a Java and a C# parser, but you need to make a few tedious changes to the grammar file.
    We currently have a grammar that needs to be preprocessed a bit before we feed it to ANTLR to produce the parser. It's only about 5 lines in the grammar that need to be changed.
    • by parrt ( 157907 )
      ANTLR can generate 5 targets at the moment. Perhaps you are referring to the fact that actions embedded within the grammar are in a particular language?
      • The problem is as simple as this:


        grammar Cps;

        options {
            k = 1;
            output = AST;
            superClass = CpsParserBase;
            language = Java; /* but I also need CSharp
                                and there is no way to set the target
                                outside of the grammar */
        }

        [...] /* For Java, not ignored when CSharp is the target */
        @head
  • Good stuff... (Score:5, Interesting)

    by Rocky ( 56404 ) on Wednesday May 28, 2008 @03:54PM (#23575269)
    ..problem is, you can't really do anything non-trivial in ANTLR 3.0 without buying the book.

    They've drastically reduced the freely available documentation on their web page, so you are essentially forced to buy it.
    • ..problem is, you can't really do anything non-trivial in ANTLR 3.0 without buying the book. They've drastically reduced the freely available documentation on their web page, so you are essentially forced to buy it.

      I don't know where you get that idea from, other than a few idiot postings on the ANTLR mailing list perhaps - or maybe you were one of those and are just trolling. Anyway, it is bollocks: do you think anyone working for free on a free open-source project can be bothered to go DELETE documentation? Nothing has ever been removed from the web site unless it was wrong, example grammars are all over the place, and there is a wiki that tells you how to do anything the book does and is added to pretty regularly.

      • Re: (Score:2, Informative)

        by JeroenFM ( 1259708 )
        As a person who owns the book and has tried working without it, I have to agree with grandparent here. The book is a must-have if you want to do serious work with version 3 of ANTLR - the v3 documentation or Wiki might contain some of the information you need for a serious grammar, but it's not presented in a consistent or useful manner. Sure, you can write a grammar without the book but unless you're intimately familiar with ANTLR, much of the online documentation just isn't all that helpful. The book on
      • Re: (Score:2, Informative)

        by ghettoimp ( 876408 )
        After using ANTLR for a class long ago and being so impressed with it, I just returned to ANTLR today. I was shocked at the lack of documentation on the web site. I eventually typed "antlr reference" into google and found the following PDF: http://www.antlr.org/share/1084743321127/ANTLR_Reference_Manual.pdf [antlr.org] It's outdated and had many no-longer-supported constructs, but paired with the changes from 2.x to 3.0 it was adequate for what I needed to do. I can see nothing comparable linked from the ANTLR hom
    • Re:Good stuff... (Score:5, Informative)

      by parrt ( 157907 ) on Wednesday May 28, 2008 @06:08PM (#23577513) Homepage
      howdy. I never deleted anything from the documentation. v3 was completely new, I simply didn't provide as much documentation as some would like. I had a simple choice to make: (1) write some free documentation for which I would not be very motivated (after doing the 5 years of 7 day/week coding effort for v3) or (2) use cash to motivate myself to write decent documentation (side benefit is that I could use the book towards getting tenure at the University of San Francisco whereas documentation does not count as a publication). Obviously I chose (2), but I understand your frustration completely. It is only like 20 bucks at Amazon though ;)
      • The fact that you directly benefit from the book is a plus if it means there's more of an incentive for you to work on ANTLR, so I've just been to Amazon to order a copy.

      • Anyone that would complain about what Dr. Parr has contributed to the "compiler compiler" crowd simply hasn't tried to do work like this themselves. It is time consuming and draining to work on a project of this magnitude in addition to a job, and on top of that write decent documentation.

        I couldn't quite get Antlr 2.0 to work for my Domain Specific Language application (a Decision Table based Rules Engine), and that mostly because digging through all the online documentation answered my questions at simpl
      • by Rocky ( 56404 )
        I have no problem with why you did it - money and pubs are great motivators for doing something as jejune as documentation.

        I just thought that you should know that it made development "interesting" (read:very trial-and-error) until someone from the lab bought the book. It might also make it more difficult for beginners to get into using the tool, although I suppose you could make it a required text if you were teaching a class.

        P.S.: Love the interpreter - that by itself saves a bunch of time!
  • by CoughDropAddict ( 40792 ) * on Wednesday May 28, 2008 @04:09PM (#23575515) Homepage
    I would encourage anyone who is interested in parsing or ANTLR to follow my project Gazelle [reverberate.org]. It is intended to be a next-gen parsing framework that builds on the ideas set forth in ANTLR but packages them in a significantly different way, which offers a lot of benefits (which I list in detail on the website).

    The primary thing I am trying to deliver is reusability of parsers. The open-source community should be able to cooperate to write canonical parsers for all popular languages, but this goal is hampered by the fact that almost all parsing tools (ANTLR included) encourage you to write non-reusable grammars, by virtue of the fact that you embed actions into the grammar.
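    To be concrete about what I mean by embedded actions, here is an illustrative contrast (my own sketch, not from any particular grammar; 'symbols' stands in for a map you would declare in an @members block):

        /* Tied to one target: the action is Java, so this grammar cannot be
           reused as-is to generate, say, a C or Python parser. */
        assign : ID '=' expr ';' { symbols.put($ID.text, $expr.text); } ;

        /* Reusable form: structure only; each consumer walks the resulting
           tree in its own language. */
        assign : ID '=' expr ';' ;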

    Gazelle also takes an interpreter+JIT approach instead of emitting code in an imperative language. So, for example, if you want a really fast HTTP parser from Ruby (which is precisely the raison d'être for Mongrel), you can use the HTTP Gazelle parser from Ruby, but since the parsing is actually performed by the interpreter+JIT (written in C), you get extremely fast parsing without writing a line of C.

    Gazelle is still very immature and not ready for people to try out, but I would encourage anyone who's interested to follow the Gazelle category on my blog [reverberate.org].

    You can also check out:
    - the current draft of the manual [reverberate.org], which will give you a better idea of the specifics of where I'm going with this.
    - a graphical dump of the grammar for JSON [reverberate.org], which the current development code is capable of generating.
  • I really like ANTLR. (Score:4, Interesting)

    by BillWhite ( 14288 ) on Wednesday May 28, 2008 @04:58PM (#23576359) Homepage
    I've used both ANTLR from PCCTS 1.3 and Bison pretty extensively. We have multiple Bison and ANTLR grammars in our product. I generally like ANTLR better than Bison. The extended BNF is really useful. And once you get used to writing top-down grammars, they are not so odd. In fact, with EBNF notation, a lot of the peculiarity is taken away.

    I also like Sorcerer. In PCCTS 1.3, Sorcerer is a kind of tree-traversal grammar tool. You create ASTs with ANTLR, and Sorcerer creates a program which will traverse them and call action routines where you specify. It's really pretty neat.
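    For anyone who hasn't seen it, ANTLR v3 carries the same idea forward as "tree grammars". A rough, illustrative fragment (names invented, assuming a parser grammar Expr that builds ASTs) looks something like this:

        tree grammar ExprWalker;
        options { tokenVocab = Expr; ASTLabelType = CommonTree; }

        /* Match the shape of the AST and attach actions to the nodes. */
        expr returns [int value]
            : ^('+' a=expr b=expr) { $value = $a.value + $b.value; }
            | INT                  { $value = Integer.parseInt($INT.text); }
            ;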

    I'm thinking about something else, though. I'm thinking we should really think about programming with grammars more than we do. Say, for example, you have a user interface of some kind. It gets certain events, its state changes, and it reacts to the environment. A good fraction of the set of state changes can be captured with some kind of finite state machine. But a context-free grammar is equivalent to a finite state machine with a pushdown list to hold context. So it seems very likely to me that a good way to build user interfaces is to somehow compose grammars. The tokens are events, and the action routines are the non-FSM state changes.
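    A rough sketch of what I have in mind, with invented event tokens (illustrative only, not working code):

        /* UI interaction as a grammar: the tokens are events, and the
           pushdown gives you nesting (dialogs within dialogs) for free. */
        session : interaction* ;
        interaction
                : CLICK_OPEN dialog
                | KEY_PRESS
                ;
        dialog  : DIALOG_SHOWN interaction* (CLICK_OK | CLICK_CANCEL) ;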

    So, why is this interesting in this discussion? Well, ANTLR from PCCTS 1.3 generates recursive-descent parsers, and YACC/Bison generate bottom-up parsers. This means that the pushdown list for ANTLR is the execution stack, while the pushdown list for YACC/Bison is in data space. It's hard to see how one would maintain multiple ANTLR-style automata concurrently, but that's what you want to do for this style of programming.

    Generally, YACC/Bison pushdown lists and other parsing data are kept in global variables, but there is a way to make Bison generate C++-usable parsers in which the parsing data are saved in a heap-allocated object. This means they have a fixed size, which may be a problem. But it would not take a lot of work to change Bison's parsing algorithm to make the pushdown list a linked list, and that might make things easier.

    So, in short, I think it's pretty interesting to look at parsing, even if you're not writing compilers.
    • Ditto. I am also a big fan of embedding mini-languages in bigger systems and of ANTLR for all the reasons you state plus a few more [transitionchoices.com].

      Three cheers to Terence Parr for this remarkable technology.

      I have not taken the step to upgrade to version 3. I hear that the grammar specifications are significantly different. I have a question. Is it worth all the rewriting of grammar and migration of scripts to upgrade? Has anyone here used ANTLRWorks? I am really pleased with version 2.7.5 so it is hard to get moti

  • Some years ago, I wrote an interpreting parser, which simply loads a grammar (in extended BNF) and then parses a string according to it. The lexical analysis needs to be hand-coded, but examples for the most common literals are included. The interesting bit is that the parser controls the lexer, which simply gives you context-sensitive lexing. The whole thing is rather small. The nice thing is that it doesn't generate code. I once tried to make it generate code, but the produced code was actually slower th
    • by parrt ( 157907 )
      ANTLR v3 also has an interpreter for ANTLR grammars; it does everything except execute actions (of course) and doesn't yet deal with syntactic predicates.
  • Is the word "painstaking" hyphenated as pain-staking or pains-taking?

    I always thought of it as pains-taking.

    I.e.:

    I have taken great pains to get this right.

    Just curious if I have been using it correctly.
  • About seven years ago, I needed to write a DSL. I used ANTLR to do it, and it was a pleasure - even though I didn't know Java! I wrote out the grammar, ANTLR wrote all the code for me, everyone was happy. And that was long before there was anything as amazing as ANTLRWorks.

    But lately I've been wondering: Do we really need parser generators anymore? In Ruby, if you're writing a DSL, you usually implement it in terms of the Ruby language itself. I imagine that's true for other dynamic languages too. Lo
    • Crude regular expressions only work for crude regular languages. You mention parsing XHTML, but what evidence do we have that regexes don't choke on HTML just as often as XML parsers do? HTML is complicated and often broken enough in the wild that I'm not sure you can define a universally accepted well-formedness checker. And it's not that regexes waste CPU -- you can implement them in time linear in the input -- it's that they break quite easily if there's any variety in your input. I recall someone complainin
    • Re: (Score:3, Insightful)

      by blitz487 ( 606553 )
      Regular expressions cannot handle recursive grammars.
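      The standard example is arbitrarily nested, balanced parentheses: you need a rule that refers to itself, which no true regular expression can express, because a finite automaton has no stack to track the nesting depth. Illustrative rule (ATOM stands for any non-parenthesis token):

          group : '(' (group | ATOM)* ')' ;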
  • the subject matter is obtuse

    No, it's not obtuse, it's obscure.

    Obtuse is the antonym of acute. In geometry, an obtuse angle is one greater than 90 degrees. Metaphorically, an obtuse person is one who is not sharp.

    Obscure, on the other hand, from the Latin word for "dark", means difficult to perceive or understand.

    • by ratbag ( 65209 )
      "Abstruse" (from the Latin abstrudere, to conceal, would also be appropriate in this case, since it means "difficult to understand".
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Wednesday May 28, 2008 @09:26PM (#23579953)
    Comment removed based on user account deletion
    • Re: (Score:3, Interesting)

      I am posting this simply so that others can see a different view and judge for themselves.

      I have used ANTLR for years (not version 3) and have had no trouble getting it to do what I want. I have not tried to get it to interpret COBOL code, however. I have even used ANTLR in .NET and found it to be easy, breezy.

      Keep in mind that this is no drag-and-drop technology for lightweights. You are really going to have to know your compiler and formal-language theory and be willing to study some sample gramma [antlr2.org]

      • Comment removed based on user account deletion
        • Comment removed based on user account deletion
        • by drew ( 2081 )
          Try changing antlr2.org to antlr.org in the URL. It seems that in the changeover from version 2 to version 3, a lot changed around on the website, and there are now a lot of broken links on both versions of the site. It's unfortunate, because it does reflect poorly on what otherwise seems to be a pretty good project. I've not actually used ANTLR yet myself, although I may be using it in the near future. (But thanks to the pointer to GOLD, I'll look into that as well.) The site has been very valuable to
    • by dread ( 3500 )
      Interesting. I wrote two grammars without much hassle after buying the book last spring and they are now part of our commercial product. Sure, the book isn't exactly Dr Seuss but I much prefer a book by someone who is actually enthusiastic about his/her own subject and that goes into core concepts. And considering that I had very little previous knowledge about formal languages and/or compiler theory I find your comments about the "overly complicated application" and "arcane drivel" to be off the mark by qu
    • On a project I was on, I needed to parse 50+ COBOL copybooks in .NET so that we could use those data definitions to whittle down a 600MB flat file full of nightly data for a data warehouse. I tried ANTLR, and I wound up abandoning it.

      COBOL is pretty straightforward. I did something similar with awk. One script of about 150 lines handled most of the essentials. Output files were another awk script with field offsets and lengths for usage in the converted extract, a file with extraction commands for a main

  • > the second most well-known compiler compiler, Terence Parr's Antlr

    I was pretty sure the two best-known compiler compilers were yacc and bison, though I couldn't have told you which one is the most well-known and which one the second-most well-known these days. (I know which is older and which is newer... but that isn't necessarily the same thing.) I've never heard of Terence Parr's Antlr before. (I _have_ heard of PGE, but only because I read Perl-related news occasionally.)
  • by erc ( 38443 )
    How is ANTLR better than YACC? Since the reviewer isn't familiar with the grand-daddy of compiler-compiler tools, how can one take the review seriously? As for the statement "there's nothing else like it out there", that's just plain fiction - there are a number of compiler-compiler books out there, especially dealing with YACC.
