
Regular Expression Pocket Reference

Michael J. Ross writes "When software developers need to manipulate text programmatically — such as finding all substrings within some text that match a particular pattern — the most concise and flexible solution is to use "regular expressions," which are strings of characters and symbols that can look anything but regular. Nonetheless, they can be invaluable for locating text that matches a pattern (the "expression"), and optionally replacing the matched text with new text. Regular expressions have proven so popular that they have been incorporated into most if not all major programming languages and editors, and even at least one Web server. But each one implements regular expressions in its own way — which is reason enough for programmers to appreciate the latest edition of Regular Expression Pocket Reference, by Tony Stubblebine." Read below for the rest of Michael's review.
Regular Expression Pocket Reference, Second Edition
Author: Tony Stubblebine
Pages: 126
Publisher: O'Reilly Media
Rating: 9/10
Reviewer: Michael J. Ross
ISBN: 0596514271
Summary: A pithy guide to regular expressions in many languages.
The second edition of the book was published by O'Reilly Media on 18 July 2007, under the ISBNs 0596514271 and 978-0596514273. On the book's Web page, the publisher makes available the book's table of contents and index, as well as links for providing feedback and any errata. As of this writing, there are no unconfirmed errata (those submitted by readers but not yet checked by the author to see whether they are valid), and no confirmed ones, either. In fact, in my review of the first edition, published in 2004, I noted that there were no unconfirmed errata, despite the book having been out for some time prior to that review. The most likely explanation is that the author — in addition to any technical reviewers — did a thorough job of checking all of the regular expressions in the book, along with the sample code that makes use of them. These efforts have paid off with the apparent absence of any errors in this new edition — something I have not seen in any other technical book with which I am familiar.

Before discussing this particular book, it may be of value to briefly discuss the essential concept of regular expressions, for the benefit of any readers who are not familiar with them. As noted earlier, a regular expression (frequently termed a "regex") is a string of characters intended for matching substrings in a block of text. A regex pattern can match literally, such as the pattern "book" matching both "book" and "bookshelf." A pattern can also use special characters and character combinations — often termed metasymbols and metasequences — such as \w to indicate a single word character (A-Z, a-z, 0-9, or '_'). Thus, the regex "b\w\wk" would match "book," but not "brook."
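Both behaviors are easy to verify. The review's notation is Perl-style, but the same patterns work unchanged in Python's re module (Python being one of the languages the book covers); a minimal sketch, with test strings of my own choosing:

```python
import re

# A literal pattern matches anywhere its characters appear.
assert re.search(r"book", "bookshelf") is not None

# "b\w\wk" needs a literal 'b', two word characters, then a 'k':
assert re.search(r"b\w\wk", "book") is not None   # b-o-o-k fits
assert re.search(r"b\w\wk", "brook") is None      # five letters, no fit
```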

Here is a simple example to show the use of regexes in code, written in Perl: the statement "$text =~ m/book/;" would find the first instance of the string "book" inside the scalar variable $text, which presumably contains some text. To substitute all instances of the string with the word "publication," you could use the statement "$text =~ s/book/publication/g;" (the 'g' modifier means search globally), or "$text =~ s/bo{2}k/publication/g;". In this simplistic example, the second statement makes use of a quantifier, {2}, indicating two of the preceding letter.
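For readers who do not use Perl, here is roughly the same match and substitution expressed with Python's re module (an illustrative translation, not taken from the book; the sample text is mine):

```python
import re

text = "A book about books."

# Perl: $text =~ m/book/;  -- find the first instance of "book"
match = re.search(r"book", text)
assert match is not None and match.start() == 2

# Perl: $text =~ s/book/publication/g;  -- substitute every instance
assert re.sub(r"book", "publication", text) == "A publication about publications."

# Perl: $text =~ s/bo{2}k/publication/g;  -- same result via the {2} quantifier
assert re.sub(r"bo{2}k", "publication", text) == "A publication about publications."
```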

These examples employ only one metacharacter (\w) and one quantifier ({2}). The total number of metacharacters, metasymbols, quantifiers, character classes, and assertions (to say nothing of capturing, clustering, and alternation) that are available, in most regex-enabled languages, is tremendous. However, the same cannot be said for the readability of all but the simplest regular expressions — especially lengthy ones not improved by whitespace and comments. As a consequence, when using regexes in their code, many programmers find themselves repeatedly consulting reference materials that do not focus on regular expressions. These resources comprise convoluted Perl books, incomplete tutorials on the Internet, and confusing discussions in technical newsgroups. For too many years, there was no published book providing the details of regexes for the various languages that utilize them, in addition to a clear explanation of how to use regexes wisely.
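The whitespace-and-comments remedy alluded to above is Perl's /x modifier; Python exposes the same idea as re.VERBOSE. A sketch using a made-up phone-number pattern (illustrative only, not a recipe from the book):

```python
import re

# Under re.VERBOSE (Perl's /x), whitespace outside character classes is
# ignored, so the pattern can be spread out and commented.
phone = re.compile(r"""
    \( (\d{3}) \)    # area code in parentheses
    \s*
    (\d{3})          # exchange
    -
    (\d{4})          # subscriber number
""", re.VERBOSE)

m = phone.search("(202) 123-1234")
assert m is not None
assert m.groups() == ("202", "123", "1234")
```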

Fortunately, O'Reilly Media offers two titles in hopes of meeting that need: Mastering Regular Expressions, by Jeffrey Friedl, and Regular Expression Pocket Reference, by Tony Stubblebine. In several respects, the books are related — particularly in that Stubblebine bases his slender monograph upon Friedl's larger and more extensive title, justifiably characterized by Stubblebine as "the definitive work on the subject." In addition, Stubblebine's book follows the structure of Friedl's book, and contains page references to the same. A major difference, however, is that Regular Expression Pocket Reference is, just as the title indicates, for reference purposes only, and not intended as a tutorial.

At first glance, it is clear that Stubblebine's book packs a great deal of information into its modest 126 pages. That may partly be a result of the terseness of most, if not all, of the regular expression syntax; a metasymbol of more than two characters would be considered long-winded! Yet the high information density is likely also due to the manner in which Stubblebine has distilled the operators and rules, as well as the meaning and usage thereof, down to the bare bones. But this does not imply that the book is bereft of examples. Most of the sections contain at least one, and sometimes several, code fragments that illustrate the regex elements under discussion.

The book begins with a brief introduction to regexes and pattern matching, followed by an even briefer cookbook section, with Perl-style regexes for a dozen commonly needed tasks, e.g., validating dates. The bulk of the book's material is divided into 11 sections, each one devoted to the usage of regexes within a particular language, application, or library: Perl 5.8, Java, .NET and C#, PHP, Python, Ruby, JavaScript, PCRE, the Apache Web server, the vi programmer's editor, and shell tools.
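To give the flavor of those cookbook entries, here is a date-validation pattern in the same spirit. The pattern is mine, not quoted from the book: it accepts YYYY-MM-DD with plausible month and day ranges, without checking days per month:

```python
import re

# Anchored pattern: four-digit year, month 01-12, day 01-31.
# Deliberately simple -- it will accept 2007-02-31, for instance.
date_re = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

assert date_re.match("2007-07-18") is not None   # the book's publication date
assert date_re.match("2007-13-01") is None       # month out of range
assert date_re.match("2007-7-18") is None        # missing zero padding
```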

Each of these sections begins with a brief overview of how regexes fit into the overall language covered in that section. Following this is a subsection listing all of the supported metacharacters, with a summary of their meanings, in tabular format. In most cases, this is followed by a subsection showing the usage of those metacharacters — either in the form of operators or pattern-matching functions, depending upon how regular expressions are used within that language. Next is a subsection providing several examples, which is often the first material that most programmers turn to when trying to quickly figure out how to use one aspect of a language. Each section concludes with a short listing of other resources related to regexes for that particular language.

There are no glaring problems in this book, and I can only assume that all of the regular expressions themselves have been tested by the author and by previous readers. However, there is a minor weakness that should be pointed out, and could be corrected in the next edition. In most of the sections' examples, Stubblebine wisely formats the code so that every left brace ("{") is on the same line as the beginning of the statement that uses that brace, and each closing brace ("}") is lined up directly underneath the first character of the statement. This format saves space and makes it easier to match up the statement with its corresponding close brace. However, in the .NET/C# and PCRE library sections, the open braces consume their own lines, and also are indented inconsistently, as are the close braces, which makes the code less readable, as well as less consistent among the sections.

Some readers may fault the book's sparse index. Admittedly, an inadequate index in any sizable programming book can make it difficult if not impossible to find what one is looking for. As a result, one ends up flipping through the book's pages hoping to luckily spot the desired topic. This is the rather unpleasant method to which a reader must resort when a technical book has no index, or one that is inadequate — which is far too often the case. Stubblebine's index offers only several dozen entries across all the letters of the alphabet, and only two symbols. Some readers might demand that all of the metacharacters and metasequences be listed in the index, so they can be found even faster than otherwise. But given the large number of metacharacters and metasequences, as well as method names, module functions, and everything else relevant, creating an exhaustive index would almost double the size of the book, and be largely redundant with the language-specific sections. Within each language, there is typically a limited enough number of pages that scanning through them to find a particular topic would not be onerous. On the other hand, some of the index's inclusions and omissions are odd. For instance, two symbols are listed, and yet no others; why bother with those two? Also, a few key concepts are missing, such as grouping and capturing.

Yet aside from these minor blemishes, Regular Expression Pocket Reference is a concise, well-written, and information-rich resource that should be kept on hand by any busy software developer.

Michael J. Ross is a Web developer, writer, and freelance editor.

You can purchase Regular Expression Pocket Reference, Second Edition from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
This discussion has been archived. No new comments can be posted.

Regular Expression Pocket Reference

  • Comment removed based on user account deletion
    • Re: (Score:2, Informative)

      by Swizec ( 978239 )
You can always try php.net [php.net]. I find that it's a fairly good introductory tutorial on regular expressions, going through all the basics and such. It might be a tad specific, but the general science behind them is there and should allow you to quickly learn them in any language.
    • Comment removed based on user account deletion
      • by wol ( 10606 ) on Monday March 24, 2008 @02:38PM (#22849068)
        You mean like:

        http://www.regular-expressions.info/ [regular-expressions.info]
      • by cmacb ( 547347 )
        Does the book, or any other reference explain why we need such an obtuse mechanism for parsing strings in the first place? Most of the things I read about people doing with regular expressions could be done with much more intuitive string handling methods that have been around since at least the 70s. There may be things that can be done with regex that couldn't be done with (for example) the "parse" statement in Rexx, but it would be a very small percentage of the examples I've seen.
        • Re: (Score:2, Insightful)

          by Anonymous Coward

          Does the book, or any other reference explain why we need such an obtuse mechanism for parsing strings in the first place?

          What's obtuse about them? They're a straightforward and direct way of describing text patterns, and perfectly intuitive if you have an analytical mind (and if you don't, you shouldn't be programming in the first place).

          Here's a REXX example from Wikipedia:

          myVar = "(202) 123-1234"
          parse var MyVar 2 AreaCode 5 7 SubNumber
          say "Area code is:" AreaCode
          say "Subscriber number is:" SubNumber

          T

        • Does the book, or any other reference explain why we need such an obtuse mechanism for parsing strings in the first place? Most of the things I read about people doing with regular expressions could be done with much more intuitive string handling methods that have been around since at least the 70s. There may be things that can be done with regex that couldn't be done with (for example) the "parse" statement in Rexx, but it would be a very small percentage of the examples I've seen.

If a person is using a regular expression when they really only need direct string parsing, that's the fault of the person, not regular expressions. The annoying details of finite state machines can be ignored if you're just using regular expressions in programming, but if you try to just use conditionals and substrings for all of your text parsing, eventually you'll have a case where you end up essentially writing your own finite state machine.

        • There may be things that can be done with regex that couldn't be done with (for example) the "parse" statement in Rexx, but it would be a very small percentage of the examples I've seen.

          I don't think you understand the difference between "possible" and "easy". Using regular expressions to parse text is (really!) easy. Writing a 100% CSS 3, XHTML 1.1 and Javascript 1.7 compliant web browser entirely in x86 assembly by hand (on paper) in 24 hours or less is "possible".

    • Re: (Score:1, Funny)

      by Anonymous Coward

      Is there any introduction to regular expressions for total beginners, perhaps teaching through examples and including exercises?
      This book [amazon.com] IS for total beginners, literally.
    • Re: (Score:1, Informative)

      by Anonymous Coward
Try the free sample chapter for the book Pro Perl Parsing [apress.com] from Apress. It provides a nice walk-through of regex usage and how regexes work.

    • by jtev ( 133871 )
Well, depending on how much mathematical background you have, I would say that a good place to start on regular expressions is to pick up a book on discrete mathematics. Once you've mastered the concepts contained in it, you might want to move on to something that is more detailed about automata theory. Unfortunately regular languages (the set of languages that can be expressed as regular expressions) require a bit of background to truly understand. That said, the description of them is rather simple, and i
      • by jandrese ( 485 ) <kensama@vt.edu> on Monday March 24, 2008 @03:46PM (#22849828) Homepage Journal
        That and getting into that kind of depth is usually a good way to find the bugs in your regular expression library. It's also an easy way to write code that will drive maintainers crazy.

        Unless you're a hard core mathhead, that's probably not a good place to start with regexes IMHO. That's just going to scare people off from a highly useful tool. One generally does not need to rigorously prove that his regexes are going to work to use them. One does not have to use every feature of a language to make good use of it.
        • by jtev ( 133871 )
Yes, but most books on regular expressions expect you to know what a regular expression is. And the depth to which regular expressions are covered in the discrete math book I used freshman year was shallow enough to give someone a broad overview without swamping them, assuming they have the mathematical rigor to get that far. If they don't, then what they can glean from a book on REs without knowing even that level of depth will be adequate for 99% of what REs are used for.
          • by jandrese ( 485 )
            I've never understood why people find them so confusing in the first place. The concept is dirt simple: You tell the computer to look for X (usually the example here is a fixed string match) in your data. When it finds X, it tells you where it is. Magic!

            Then you go on and explain wildcards, character classes, and subexpressions and you've covered 95% of what a regular person will use in day to day life, all in the span of about 5 minutes. The hardest part about using regular expressions is usually se
            • Actually, I find the easiest way to understand regexes is to understand their underlying representation: the finite state machine. Understanding this not only helps to illuminate how regexes work, it also highlights their limitations (eg, counting).

              'course, taking a course in formal language theory is even better (and should be a required course as part of a computing science degree, IMHO). :)
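The finite-state view described above can be made concrete with a hand-rolled matcher for the review's toy pattern b\w\wk (a deliberately simplified Python sketch, not how production engines are built):

```python
# A hand-rolled finite state machine for the toy pattern "b\w\wk":
# states 0..3, with a match reported when 'k' is seen in state 3.
def is_word(c):
    # Rough stand-in for \w (ASCII letters, digits, underscore).
    return c.isalnum() or c == "_"

def fsm_search(text):
    # Restart the machine at every offset, as a regex search would.
    for start in range(len(text)):
        state = 0
        for c in text[start:]:
            if state == 0 and c == "b":
                state = 1
            elif state in (1, 2) and is_word(c):
                state += 1
            elif state == 3 and c == "k":
                return True
            else:
                break
    return False

assert fsm_search("a book") is True
assert fsm_search("a brook") is False
```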
            • I've never understood why people find them so confusing in the first place.

Same here... people here advocating all sorts of weird stuff like advanced maths theory*, when anyone could work out regular expressions by looking at them for a few minutes. Of course visualisers help for the really complex stuff (which nobody ever uses anyway).

PCRE is actually quite nice - you only have to bother with the setup once. Just make a class that you can chuck a regexp string at and reuse it. Depends on the data set I gu
              • I've got bad news for you, Tony. Computer Science IS math, and if you're good at it, you'd probably be good at math if you applied yourself. Understand that regexes are just a kind of function that takes symbols rather than digits, and returns either true or false.
    • I've found that the gold standard O'Reilly Book Learning Perl - Chapter 7 - Regular Expressions, is a fantastic beginners reference for regular expressions, how to use them, and the power of their usage.
I read Mastering Regular Expressions, cover to cover. I found that it started off very easily, and even having no regex knowledge outside of using *.* on the command line, I was able to pick up regex using just this book pretty well. Sure, you can't just read the book and master regular expressions, but what programming concept can be mastered from simply reading a book? It's a really good starter, and a really good reference. Everything else you'll figure out from experimentation, and just using it.
I read Mastering Regular Expressions, cover to cover. I found that it started off very easily and even having no regex knowledge outside of using *.* on the command line...

        Actually, that's globbing [wikipedia.org] that the shell does for you, not regex.

        • Yes, I realize it's not exactly the same as regular expressions, but it's kind of the same thing. Look for files that have such and such in the name. Mastering Regular Expressions even brings this up as an example, because just about everybody who would be reading the book has probably used this concept at some point in their lives.
  • You need help from the book in order to find the best way to search for its ebook on the internet
  • Now that would be an interesting pair of authors ;)

  • a google search for "regex [your fav language goes here]"?
    • That's kind of what I was thinking.

      Pair up Google with something like Kodos [sourceforge.net] and you are all set. I still struggle with them sometimes, but nothing like before I had the debugger.
  • by Armakuni ( 1091299 ) on Monday March 24, 2008 @02:33PM (#22849002) Homepage
    ...I have a pocket reference to regular expressions.
  • by stokessd ( 89903 ) on Monday March 24, 2008 @02:42PM (#22849104) Homepage
    Here's the regular expression that I found most useful in childhood:

"Hello, I'm a smart geeky person, please do not beat me up and take my lunch money. I can help you with your math homework"

    Sheldon
  • by Jerry Coffin ( 824726 ) on Monday March 24, 2008 @02:47PM (#22849158)

However, there is a minor weakness that should be pointed out, and could be corrected in the next edition. In most of the sections' examples, Stubblebine wisely formats the code so that every left brace ("{") is on the same line as the beginning of the statement that uses that brace, and each closing brace ("}") is lined up directly underneath the first character of the statement. This format saves space and makes it easier to match up the statement with its corresponding close brace. However, in the .NET/C# and PCRE library sections, the open braces consume their own lines, and also are indented inconsistently, as are the close braces, which makes the code less readable, as well as less consistent among the sections.


    A minor correction:
    However, there is a minor weakness that should be pointed out, and could be corrected in the next edition. Specifically, the book includes a section on .NET/C# and PCRE. By the time the next edition is needed, Microsoft will undoubtedly have moved on to new languages running in a new environment, as well as "enhanced" regular expressions "to provide better security and a syntax that is more approachable by beginners."
    • by sconeu ( 64226 )
      PCRE isn't an MS technology.
    • by Shados ( 741919 )
      Jokes aside, while it virtually implements the entire standard (and in some cases, more so than basically all other implementations), .NET's regexes actually DO have a few "extensions" to them, like special syntax to handle dynamic amounts of matching pairs more easily. The syntax is hell though, so not more approachable to beginners :)
  • ObJWZ (Score:5, Funny)

    by Minwee ( 522556 ) <dcr@neverwhen.org> on Monday March 24, 2008 @02:48PM (#22849182) Homepage

    Because you just can't discuss regular expressions without bringing up this quote [regex.info]:

    Some people, when confronted with a problem, think "I know, I'll use regular expressions."
    Now they have two problems.

    -- Jamie Zawinski, 1997, in alt.religion.emacs

    • May I not-so-humbly submit my own revision to Zawinski's quote? Mine reads:

      Some people, when confronted with a problem, think "I know, I'll use regular expressions."
      Now they have ^[2-9]\d*$ problems.
    • by jandrese ( 485 )
      To be fair, regular expressions in emacs lisp in 1997 were not exactly something for the faint of heart. Heck, the POSIX C regular expression library is a nightmare syntactically (all of the support code you need for a single expression is unbelievable). Hard as it is to believe, Perl actually made the syntax cleaner.

Despite \w being less self-documenting, I have never met a person who prefers to type [[:alnum:]] over it.
  • already built in (Score:3, Informative)

    by Fujisawa Sensei ( 207127 ) on Monday March 24, 2008 @02:54PM (#22849246) Journal

    There's already a built in regular expression tutorial:

    man perlretut
    • Comment removed based on user account deletion
    • There's already a built in regular expression tutorial:

      man perlretut
      And man perlrequick [perl.org] for regex noobs.
    • There's already a built in regular expression tutorial: man perlretut

      Not to be pedantic, but that's neither "built in" nor a "regular expression" tutorial. That's one of the many manpages for Perl that gets installed when you install Perl, and describes, in a friendly format, using Perl and Perl regular expressions.

      Which is different than using Perl-compatible regular expressions as described in pcre(3).

      Which is different than using Posix regular expressions as described in re_format(7) or grep(1).

      So, by a
  • I use grep regularly enough to know generally how to build an expression, but not often enough to know each (I use grep in 3-4 different editors) application's quirks/implementation details off the top of my head, so I end up having to look up something regularly. I always use the application's Help file rather than the grep manual I've got laying around somewhere.
    Opening the Help file for the app and using its search function is a lot quicker than having to leaf through a book (worse when the book has a ba
  • Problems (Score:3, Interesting)

    by Peaker ( 72084 ) <gnupeaker AT yahoo DOT com> on Monday March 24, 2008 @03:24PM (#22849554) Homepage
    I'll start with an Obligatory quote.

    Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. --Jamie Zawinski, in comp.lang.emacs

    I'll close with a somewhat depressing fact: Regular expression and string processing can be done quickly and efficiently (and was done that way back decades ago, with grep and awk), but is actually done in a horribly inefficient way [swtch.com] in all modern/popular programming language regexp engines.
    • Re:Problems (Score:5, Interesting)

      by Abcd1234 ( 188840 ) on Monday March 24, 2008 @03:36PM (#22849692) Homepage
First off, Mr. Zawinski is recorded as being rather prejudiced against Perl [regex.info], so I'd take any comments he's made regarding regexes with a massive grain of salt. In fact, I'd probably just ignore him altogether. Besides, his comments are focused almost entirely on the *mis*uses of regexes, not their appropriate application.

As for your second complaint... uhh, who cares? Premature optimization is the devil. So if regexes allow you to cleanly implement a simple solution to a problem (and regexes *are* very well suited to certain tasks, even if they do tend to be misused, particularly in languages such as Perl where they're very tightly integrated), it would be foolish to move to another technique based solely on performance concerns without first profiling the code.

      'course, the real irony, on the performance front, is that Mr. Zawinski himself said "The heavy use of regexps in Emacs is due almost entirely to performance issues: because of implementation details, Emacs code that uses regexps will almost always run faster than code that uses more traditional control structures." So maybe they aren't so evil or slow after all?
      • Perl, regexps (Score:3, Interesting)

        by Peaker ( 72084 )
If you read the link I posted, you will see that they are indeed evil and slow - and not for any good reason. The implementation of good regular expression engines is not difficult and has been known in CS theory for many decades.

        "Premature optimization" is a nice slogan - but the regexp performance problems are real, and I have encountered them before (I was extremely surprised to see that the regexp matching is scaling far worse than O(N) as it was clear to me that matching that regexp should be at worst O(N)).

        The
        • Re: (Score:3, Interesting)

          by Abcd1234 ( 188840 )
          but the regexp performance problems are real, and I have encountered them before

          That's all well and good, but unless you're parsing extremely large volumes of text, the issues are probably unimportant. Which is, of course, why profiling is so important. Throwing out a perfectly valid solution simply because it is, in theory (or even in practice) slow, is ridiculous if you have other performance problems elsewhere, or if the code is running at a speed that is sufficient for the problem at hand.

          Put another
          • by cp.tar ( 871488 )

            but the regexp performance problems are real, and I have encountered them before

            That's all well and good, but unless you're parsing extremely large volumes of text, the issues are probably unimportant. Which is, of course, why profiling is so important. Throwing out a perfectly valid solution simply because it is, in theory (or even in practice) slow, is ridiculous if you have other performance problems elsewhere, or if the code is running at a speed that is sufficient for the problem at hand.

            Then again, I'm a linguistics student. And we do quite a bit of work with corpora.
            Until now, most of the work has been done in Perl (and some in Intex, Unitex or Nooj); recently some started doing things in C++.
            Having read the article above, I think I'll start learning awk. Because we do have major performance issues.

            And let me just say: damn. Studying is easy.
If I hope to get a job in that department, I'll actually have to get something done ;)

          • That's all well and good, but unless you're parsing extremely large volumes of text, the issues are probably unimportant.

            If you trigger Perl's worst-case regex performance, it can take over a minute to match a 30 character string. That's what the graph at the top of the referenced article illustrates.

            Try it for yourself:

            $ time perl -e '$x= "a" x 30; $x =~ /(a?){30}a{30}/'

            real 3m20.283s
            user 3m19.583s
            sys 0m0.086s

            Will you run into this worst-case performance? Probably not, as long as you write good regexes. Would it behoove you to understand that yes, Perl's regexes can have serious performance issues even with sma

        • If you read the link I posted, you will see that they are indeed evil and slow - and not for any good reason.
          Actually there are very good reasons. Just because that paper doesn't address them doesn't mean that they don't exist.
      • by bit01 ( 644603 )

        Premature optimization is the devil.

        Wrong. Premature peephole optimization is the devil.

        At the design stage choosing a good algorithm that scales is entirely appropriate. This is particularly true when you don't know how much data you'll be working with. Like any scripting language.

        Performance criteria are always part of a design and cruddy programmers who hide their incompetence with the above mantra should be fired. See dailywtf [thedailywtf.com] for examples.

        ---

        Don't be a programmer-bureaucrat; someone who su

    • I keep wondering about this myself all the time given that we wrote a regex engine with NFA-to-DFA conversion in 3rd semester CS way back when. That was kind of enlightening after weeks of DFA, NFA, formal languages, grammar and Chomsky hierarchy tedium.
    • I'll close with a somewhat depressing fact: Regular expression and string processing can be done quickly and efficiently (and was done that way back decades ago, with grep and awk), but is actually done in a horribly inefficient way [swtch.com] in all modern/popular programming language regexp engines.

      I think you'll find that the regex algorithms used in the likes of Perl were chosen for a very good reason - not just because the implementers were lazy or stupid. The author of the article never addresses the fundamental differences in semantics between Posix regular expressions (such as grep and awk implement) and Perl regular expressions semantics. In the Posix case you must find the longest match, a requirement that the Thompson NFA approach handles easily. In the Perl case you must find the first matc

    • I'll close with a somewhat depressing fact: Regular expression and string processing can be done quickly and efficiently (and was done that way back decades ago, with grep and awk), but is actually done in a horribly inefficient way [swtch.com] in all modern/popular programming language regexp engines.

That's not true. To get the exponential runtime from your regexps in a pcre-style engine, you have to write some wicked bad regular expressions. In Real Life(tm) backtracking engines are just as good as NFAs. Plus, backreferences are hard to implement using NFAs so you must resort to backtracking them anyway. Which is why the authors of Perl's, Python's and PHP's regular expression libraries have chosen to use recursive backtracking -- it is much simpler and you get the same performance for non-pathologi

      • by Peaker ( 72084 )

        it is much simpler and you get the same performance for non-pathological cases.

It's not that much simpler, as the NFA approach is quite simple. And they indeed speak of the backtracking required in some cases in the article. For backtracking regexps, use this approach, sure. But many (perhaps a majority) of regexps ARE regular and don't need to backtrack.

        Claiming that "real world regexps" are not pathological cases may be true - but there is a middle-ground. We have hit, in my workplace, cases of regular expressions scaling much worse than O(N) on the text - and they were completely

        • by TheLink ( 130905 )
          Well it's the job of the language people to do that NFA stuff.

          As long as they do it in a backward compatible way I don't care.
          • Well it's the job of the language people to do that NFA stuff.

            As long as they do it in a backward compatible way I don't care.

            They can't. Otherwise they would have. This is the point that the OP seems to have missed - language implementers aren't just a pack of idiots as the OP seems to believe. Non-backtracking NFAs can handle a certain subset of the requirements very efficiently, but can't handle the rest of the requirements at all. Back references are one thing they struggle with. Another is the requirement that many languages (such as Perl) impose to return the first match, not just any match or the longest match.
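            The semantic difference is easy to see in practice (a sketch in Python, whose `re` module follows the Perl-style leftmost-first rule):

            ```python
            import re

            # Perl-derived engines try alternatives left to right and stop at
            # the first one that succeeds, even if a later alternative would
            # match more text.
            m = re.match(r'a|ab', 'ab')
            assert m.group() == 'a'

            # A POSIX leftmost-longest engine (awk, egrep) is required to
            # report 'ab' for the same pattern and input.
            ```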

            • by Peaker ( 72084 )
              You can fall back to backtracking when the obscure backtracking features are used - and use the regular engine when they are not.

              The majority of regexps ARE regular and there is no reason for them to pay the price of rare and obscure features that they do not use.

              Apparently the regexp implementors are a bunch of idiots after all :-)
              • You can fall back to backtracking when the obscure backtracking features are used - and use the regular engine when they are not.

                There's nothing obscure about requiring the regex to return the first match. That is simply the semantics that most of these languages have chosen.

                Apparently the regexp implementors are a bunch of idiots after all :-)

                You seem to prefer to believe that every modern regex implementer is an idiot rather than recognize the fact that the Thompson NFA approach is not suited to the regex semantics most languages now employ. I think that's an extremely arrogant attitude. But hey, if you're so sure you're right then why not produce an implementation that proves it? Perl has pluggable regex engines.

                • by Peaker ( 72084 )
                  NFAs support returning the first match, and other trivialities that the Perl implementors thought they couldn't. It may take actually understanding the algorithm involved, however.

                  I would take up your challenge, if I wasn't deeply involved with many others already. However, others I have mentioned these problems to have said that they intend to use this as a project.
                  • by TheLink ( 130905 )
                    Sounds great to me. I hope you are right :). Then my stuff will just run faster.

                    Aside: I have noticed that many versions of grep have become extremely slow after they introduced the i18n stuff, so much so that even perl is faster in many common cases.

                    Switching to the C locale restores performance.
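                    For example (a sketch; file name is illustrative, and the speedup depends on your grep version and locale data):

                    ```shell
                    # Create a small sample file to search.
                    printf 'alpha\nbeta\nalpha\n' > /tmp/regex_demo.txt

                    # Default locale: may pay for multibyte/i18n handling.
                    time grep -c 'alpha' /tmp/regex_demo.txt

                    # C locale: byte-oriented matching, often much faster
                    # on large inputs.
                    time LC_ALL=C grep -c 'alpha' /tmp/regex_demo.txt
                    ```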
  • Well, to apply a common saying to something in need: you can't grep dead wood.
  • Once I found the functions startswith and endswith, my need for regular expressions dropped away fast. Occasionally I'd have to pony up for a more complex pattern match, which was still a pain even though I'd "cracked" regex. I wonder if the rest of regex could be done away with in a similar fashion?

    • If that's all you were using regexes for, you were probably misusing them in the first place. Try doing any kind of complex text file parsing and you'll understand why regexes have their place.
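      For instance, pulling structured fields out of a line is where startswith stops helping (a Python sketch; the log format is made up for illustration):

      ```python
      import re

      line = 'ERROR 2008-03-24 14:02:11 disk full'

      # startswith handles the easy prefix test...
      assert line.startswith('ERROR')

      # ...but extracting the individual fields needs a pattern with
      # capturing groups.
      m = re.match(r'(\w+) (\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (.*)', line)
      level, date, clock, message = m.groups()
      assert message == 'disk full'
      ```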
  • by HangingChad ( 677530 ) on Monday March 24, 2008 @03:38PM (#22849726) Homepage

    I'd rather stick knitting needles in my eyes than debug a regular expression.

    The only cure for that is getting a good reference and having a go at some tutorials until you get good enough to slay the beast. Then you'll be everyone's buddy at the office, because a lot of people feel the same way.

    Or you could just stick knitting needles in your eyes and slash your face with a razor and then everyone will leave you alone.

  • How do you pronounce "regex"? I see four possibilities:
    1) "regh-ex" (hard 'g', like 'ghost')
    2) "rej-ex" (soft 'g', like 'gerbil')
    3) "re-gex" (hard 'g')
    4) "re-jex" (soft 'g')

    I use the first one, since those are the two initial syllables of 'regular' and 'expression', but I can see arguments for the others.
    • by Xtravar ( 725372 )
      It's so weird how you never think of this stuff until somebody else says it differently.

      #2 - I don't think of it as the combination of two words, which is probably why I ignore the hard G from 'regular' and replace it with the soft G from 'register'. I do the same with Linux (lin ucks), char (chair), etc.

      Although I admit I can't pronounce the word "debacle" correctly for the life of me.
  • by Shux ( 5108 ) on Monday March 24, 2008 @03:51PM (#22849878) Homepage
    Regular expressions are easier than you think, and once you get comfortable with them you will wish you had learned them sooner. In my opinion the difficult part of learning them is just getting used to the strange mess of dots, pluses, brackets, backslashes, etc., and what they mean in different contexts. Unfortunately it is hard to walk away from an article or howto on regexes and actually remember the meaning of all the symbols. Regular expressions are deliberately terse, and that makes them hard for humans to read and understand.

    Therefore I think the best way to learn regular expressions is by example. I highly recommend this small interactive program, which will walk you through building regular expressions for a few different languages. When you think you need a regex for a program, just fire it up and answer the questions.

    http://txt2regex.sourceforge.net/ [sourceforge.net]

    After a while you won't need txt2regex for simple stuff because you will have hopefully just absorbed the syntax. Once you have mastered the basic regexes which txt2regex can generate you will be able to dive into more advanced topics like capturing groups.
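    As a small taste of capturing groups (a Python sketch; the e-mail pattern is deliberately simplistic and the address is made up):

    ```python
    import re

    # Named groups make the captured pieces self-documenting.
    m = re.match(r'(?P<user>[^@]+)@(?P<host>.+)', 'alice@example.com')
    assert m.group('user') == 'alice'
    assert m.group('host') == 'example.com'
    ```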
  • What about emacs? Grep? Sed? This book sounds like a good idea, but it's not so useful without a wider selection of applications. Frankly, though, I just want a short guide as to which things need to be escaped to get which meanings, and what character classes are available.
  • by Ed Avis ( 5917 ) <ed@membled.com> on Monday March 24, 2008 @04:25PM (#22850248) Homepage

    it may be of value to briefly discuss the essential concept of regular expressions,
    Before you say this, make sure you know what that concept is.

    A regular expression can be thought of as a program which generates a set of strings - or recognizes a set of strings, which is the same thing. Regular expressions correspond to finite state automata, so just as an FSA cannot recognize the set of all palindromes, neither can a regular expression. Also, languages like Perl have extended the capabilities of their regular expression string matchers to include things like backreferences, which cannot be done in a true regular expression, so we tend to use the word 'regexp' nowadays.

    Or perhaps I'm just playing the grumpy computer scientist here.
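    For illustration, here is a backreference doing non-regular work (a Python sketch; the doubled-word pattern is a standard example, not from the book):

    ```python
    import re

    # \1 requires the engine to match "the same text again", which no
    # finite automaton can do for captures of arbitrary length: the
    # language of repeated words { w w } is a textbook non-regular set.
    doubled = re.search(r'\b(\w+) \1\b', 'it was the the best of times')
    assert doubled.group(1) == 'the'
    ```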
    • by Shados ( 741919 )
      I'm confused about what a "true" regular expression is, vs a "non-true" one... I mean, back references are part of the ECMA standard... I'm sure there's something I'm missing here, but I'd like to know what...
      • Re: (Score:3, Informative)

        by evilWurst ( 96042 )
        Rewording Ed for you: you can think of a "true" regular expression as just a shorthand for describing a state machine [wikipedia.org]. Feed a state machine a string and it can only either accept or reject. Backreferences are an addition to the modern programming implementation of regular expressions, but aren't part of the language theory sense of regular expressions. You can do things with backreferences that *cannot* be done with a deterministic finite state automata. Interestingly, that wiki link has a quote from Larry
      • I think the GP is referring to the kind of regular expressions you'd cover in a finite automata course (which I tend to refer to as "fake regular expressions", since I learned regex first...), not anything you'd actually ever implement in a library or programming language.

  • $text = "The bookkeeper was very careful to keep proper books as he did not wish to be booked for fraud."
    $text =~ s/book/publication/g;

    Yeah, that will work. Not.
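    The usual fix is anchoring on word boundaries (a Python sketch of the same substitution; the `\b` approach is standard, though the exact pattern here is mine):

    ```python
    import re

    text = ("The bookkeeper was very careful to keep proper books "
            "as he did not wish to be booked for fraud.")

    # The naive replacement mangles 'bookkeeper' and 'booked':
    naive = re.sub(r'book', 'publication', text)
    assert 'publicationkeeper' in naive

    # \b restricts the match to the standalone word 'books':
    careful = re.sub(r'\bbooks\b', 'publications', text)
    assert 'bookkeeper' in careful and 'booked' in careful
    assert 'publications' in careful
    ```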
  • Buy this program: http://www.regexbuddy.com/ [regexbuddy.com]

    It is the best $40 I ever spent when doing a project involving tons of Regular Expressions. It has detailed tutorials on how Regular Expressions work, a reference guide, debugging mode, real-time feedback on what your expression is doing, error checking, and a built-in forum where you can post your problems and people including the developer himself will chime in and help you figure it out!

    I'm not associated with JGSoft in any way, but RegexBuddy really is an awesome tool.
  • The first edition copy I have is pretty dog-eared from constantly being stashed in my laptop bag. I don't use regexes every day, but I understand them, and just need a handy reference. Definitely great for a "how do you specify X" kind of problem.
