Linux System Programming

Jon Mitchell writes "As a Perl programmer recently thrown into the world of C development on Linux, I have been looking for something that would take my K&R level of experience and bring it up to date with modern methods, hopefully letting me write more efficient and reliable programs. Linux System Programming is a volume that targets this need. Robert Love, former "Chief Architect, Linux Desktop" at Novell, kernel hacker of many years, and Gnome developer of well-known features such as Beagle and NetworkManager, attempts in this book to document the Linux system call and C API for common systems programming tasks. Given that he developed the preemptive kernel and inotify, he has the knowledge." Read below for the rest of Jon's review.
Linux System Programming
author: Robert Love
pages: 388
publisher: O'Reilly Media
rating: 8/10
reviewer: Jon Mitchell
ISBN: 9780596009588
summary: The Linux system call and C API explored in depth.
Getting this book out of the box, I had wrongly been expecting a cookbook style that I would get instant gratification from. Although structured around common programming tasks, it doesn't lend itself to just dipping in. The section on time lists a handful of ways that "time" is available to the programmer; jump into the middle of the section and you might miss the most suitable one for the job in hand. The book rewards reading it in larger chunks.

This doesn't mean it is necessary to read it from cover to cover. Logically organized into chapters around "things you want to do", such as file access, memory management, and process management, it will lead you in with a survey of techniques you might be familiar with, before drilling down into advanced methods.

Knowing advanced methods for performance is great, but not at all costs. One of the most useful and practical lessons this book gives is to encourage you to think about the error conditions that may occur during a system call. Early on, a detailed example is given of reading from a file. Every possible return code from the read call is described, together with what it means and how you should handle it — it can be surprising that seven possible outcomes are listed, with good descriptions of what to do with each of them.
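
The book's exact example isn't reproduced here, but a minimal sketch along those lines, handling each outcome documented for read(2), might look like this (the helper name is my own):

    #include <errno.h>
    #include <unistd.h>

    /* Read up to 'len' bytes into 'buf', handling every documented
     * outcome of read(2): full read, partial read, EOF, signal
     * interruption, non-blocking "no data yet", and real errors. */
    ssize_t read_all(int fd, void *buf, size_t len)
    {
        size_t total = 0;

        while (total < len) {
            ssize_t n = read(fd, (char *)buf + total, len - total);

            if (n == 0)                /* end of file */
                break;
            if (n == -1) {
                if (errno == EINTR)    /* interrupted by a signal: retry */
                    continue;
                if (errno == EAGAIN)   /* non-blocking fd, nothing yet */
                    break;
                return -1;             /* genuine error: EBADF, EIO, ... */
            }
            total += (size_t)n;        /* partial read: keep going */
        }
        return (ssize_t)total;
    }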

This good practice by example continues throughout the book. Every system call described also lists the errors that may occur. This does show up a slight weakness: many system calls share a common set of errors, which are repeated many times in the text. If you are not paying attention it may feel like you are just flipping through man pages. However, you are soon halted by the easy introduction of an advanced concept to get your teeth into.

These are introduced at a nicely graded level for each topic. In "file access", to give an example, you are led from simple read/write calls, through what the C library can provide in buffering, to improved performance using mmap. The techniques continue with descriptions of I/O schedulers and how the kernel orders hardware disk access, then scatter/gather I/O, and end with how you can order block reads/writes yourself, bypassing any scheduler.
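
As a rough illustration of the mmap step in that progression (my sketch, not the book's code), a file can be consumed through a mapping instead of repeated read calls:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2)
            return 1;

        int fd = open(argv[1], O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat sb;
        if (fstat(fd, &sb) == -1 || sb.st_size == 0) { close(fd); return 1; }

        /* Map the whole file read-only; the kernel pages it in on demand. */
        char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        fwrite(p, 1, sb.st_size, stdout);   /* the file is now plain memory */

        munmap(p, sb.st_size);
        close(fd);
        return 0;
    }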

You are hardly aware of the progression, as the pacing is very well done. New concepts clearly fit into what you have seen so far — each section signposts the practical use of what is being explained and at what cost, allowing clear consideration of advanced features against any consequences.

For process management, the discussion starts with fork and exec before moving on to user IDs and groups; it covers daemonization and goes on to process scheduling, including real-time scheduling. Throughout the book, each new call is illustrated with a short code snippet showing the call being used in a practical situation.
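
A minimal skeleton of the kind that chapter builds on (my sketch, assuming /bin/ls as the program to run) looks like:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == -1) {               /* fork failed: no child exists */
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                /* child: replace image with ls */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");           /* reached only if exec failed */
            _exit(EXIT_FAILURE);
        }

        int status;                    /* parent: reap the child */
        if (waitpid(pid, &status, 0) == -1)
            perror("waitpid");
        else if (WIFEXITED(status))
            printf("child exited with %d\n", WEXITSTATUS(status));
        return 0;
    }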

Not everything is present and correct. The author states up front that networking is not covered at all. This is a shame, as the subject would benefit from the depth of coverage given to the topics in this book, although it would no doubt increase the page count considerably. Perhaps there is scope for a second volume. The length of some sections seems odd: asynchronous file I/O is whizzed through in a page with no code example, whereas I/O schedulers get a luxurious 12.

On the other hand there are some unexpected and useful extras, such as a discussion in the appendix of gcc C language extensions and how they might be used to fine tune your code.

The book's stated target is modern Linux development: a 2.6.22 kernel, gcc 4.2 and glibc 2.5. Many calls have been standardized by POSIX, and where this is so it are noted in the text, so a large portion of the content is useful on other systems. There is even the occasional mention of non-Linux system calls, the use of which is not encouraged, but they are shown so you know how they function if you come across them in older code.

I recommend this book to anyone who has a need to develop Linux applications. The book is not a primer in C on Unix, so you are expected to be familiar with C at least to the level of K&R. From that level, though, the journey into getting the best from the kernel and C library into your programs is easy going and enjoyable.

You can purchase Linux System Programming from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Linux System Programming

  • by Dareth ( 47614 ) on Monday April 14, 2008 @02:29PM (#23067246)
    I to..err... know this poor bastard who took all his compsci courses in C++. How hard would it be for a C++ coder to dig into this book?
    • Re: (Score:2, Funny)

      Bah. C++. Go talk to the KDE people. ;)

    • by Chandon Seldon ( 43083 ) on Monday April 14, 2008 @02:37PM (#23067368) Homepage

      How hard would it be for a C++ coder to dig into this book?

      Should be pretty easy. All the code examples are valid C++. All you need to do is remember that "class" is called "struct" and that you have to mangle your own names.

      • and do a lot more casting!
      • by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Monday April 14, 2008 @03:09PM (#23067868) Homepage

        How hard would it be for a C++ coder to dig into this book?

        Should be pretty easy. All the code examples are valid C++. All you need to do is remember that "class" is called "struct" and that you have to mangle your own names.

        C++ is not a superset of C, and is definitely not supposed to be written like C.

        For example variable-length arrays (added by C99) are not supported by C++ (which has vector objects instead).
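
        As a trivial illustration (mine, not from the thread), this is valid C99 but rejected by a C++ compiler:

            #include <stdio.h>

            /* C99: an array dimension may be a runtime expression.
             * C++ rejects this; you would reach for std::vector instead. */
            void print_squares(int n)
            {
                int squares[n];            /* variable-length array */
                for (int i = 0; i < n; i++)
                    squares[i] = i * i;
                for (int i = 0; i < n; i++)
                    printf("%d ", squares[i]);
                putchar('\n');
            }

            int main(void)
            {
                print_squares(5);
                return 0;
            }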
        • by Chandon Seldon ( 43083 ) on Monday April 14, 2008 @03:32PM (#23068170) Homepage

          C++ is not a superset of C, and is definitely not supposed to be written like C.

          C++ is damn close to being a superset of C. Any C code examples given in this book are almost sure to be valid C++. Further, the fact that C code makes for awkward and ugly C++ code doesn't mean that it isn't *valid* C++ code.

          C and C++ are very different languages in programming style, but anyone who knows C++ already knows the C syntax and semantics - at most they'll need to learn the modern C programming style to actually use it.

        • by Evil Pete ( 73279 ) on Monday April 14, 2008 @08:12PM (#23071740) Homepage

          C++ was originally a superset of C. But later changes to C / C++ have drifted considerably from that. However, that means that generally C shouldn't be a problem for C++ programmers. There are large differences in the philosophy though that will affect the quality of your C code.

          • Re: (Score:3, Informative)

            by Curien ( 267780 )
            C++ was never a superset of C, and it was never intended to be such. Trivially,

            int main(void) {
                int class = 0;
                return 0;
            }

            was never a valid C++ or Cfront program, but it has always been (and probably will always be) a valid C program.

            I'm not an expert on Cfront, but I do know that there are quite a few major differences between C and ARM C++ (sizeof character literals, meaning of empty argument list, type conversions), so your characterization of C++ as having diver
            • by Raenex ( 947668 )

              C++ was never a superset of C, and it was never intended to be such.
              Just to clarify, though, that it was intended to be largely compatible:
              http://www.research.att.com/~bs/bs_faq.html#C-is-subset [att.com]
            • by Evil Pete ( 73279 ) on Tuesday April 15, 2008 @01:01AM (#23073888) Homepage

              I didn't say 'recently'. I remember it was stated that C++ WAS a superset, though it was probably more accurate to say a superset of ANSI C. In fact there were early C++ compilers that actually preprocessed the C++ code into C first. Of course I am talking 15-20 years ago.

              So I stick to my remarks.

              Bloody young whippersnappers.

              A comment that follows has a link to Stroustrup's page about this. Yes, it is not a mathematical superset. But it is practically one:

              Thus, C++ is as much a superset of ANSI C as ANSI C is a superset of K&R C and much as ISO C++ is a superset of C++ as it existed in 1985.

              Well written C tends to be legal C++ also. For example, every example in Kernighan & Ritchie: "The C Programming Language (2nd Edition)" is also a C++ program.


        • For example variable-length arrays (added by C99) are not supported by C++ (which has vector objects instead).

          Vector objects are not a replacement for C99 variable length arrays. In the test application I wrote I found them to be quite expensive, to the point where the relatively trivial bookkeeping I had them doing was eating as much time as the (many) syscalls. C99 VLA's on the other hand seem to generate impressively efficient code (gcc, which is traditionally not noted for its great code generator). A rather natural extension involving no new syntax, just allowing a dynamic expression for the array dimensio

          • Re: (Score:3, Informative)

            by Curien ( 267780 )
            Both of those features were added to C after C++ was standardized. In particular, C99 VLAs were invented after C++ vectors (which were mostly solidified as part of the STL by the time of the C95 library update). As for your comparison, it would be interesting to know the specifics of your measurements (code, etc).

            The current C++ folks are more interested in fixing the mess they made with templates. Designated initializers would be mostly unnecessary if the language supported named argument mapping a la Ada
      • Re: (Score:3, Funny)

        How hard would it be for a C++ coder to dig into this book?

        Should be pretty easy. All the code examples are valid C++. All you need to do is remember that "class" is called "struct" and that you have to mangle your own names.

        I'm perfectly fine with you using C++ to shoot yourself in the foot, but don't you dare draw a bead on his.
    • Easy, just -- some of your knowledge.
    • Re: (Score:3, Insightful)

      by fm6 ( 162816 )
      OK, maybe I'm just showing my age, since I learned C++ when it was still considered to be a kind of preprocessor for C. But I find it very hard to understand how you can be an effective C programmer without knowing K&R-level C and how C++'s object architecture is built on C fundamental data types. C++ is a notoriously complex language, full of gotchas. These are hard enough to avoid even if you happen to know (for example) that a C "string" is not a fundamental data type, though syntactically it lo
    • Re: (Score:2, Informative)

      by blitzkrieg3 ( 995849 )
      Though I haven't read the book, I think it is safe to say that you should familiarize yourself with C a little better before reading this. You should pick up K&R, or at the very least familiarize yourself with the way common data structures look in C.

      Having said that, if you have no problems understanding man pages for system calls, you should be good to go.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Error handling in C is considerably different than the "right" way to do it in C++ (namely, with exceptions). You've been driving an automatic; now you'd be driving a manual --- but with real gauges instead of idiot lights.
    • I think you made a mistake in your post. Here's the correct syntax:

      for (i = 0; i < 42; i++)
      {
          cerr >> "know this poor bastard who took all his compsci courses in C++." >> endl;
      }


      • So many errors, so little code...

        1. You didn't declare i.
        2. You post-incremented i instead of pre-incrementing it. (Because i isn't declared, I don't know if it's a basic type or an iterator. If it's an iterator, pre-incrementing is more efficient.)
        3. You right-shifted instead of left-shifting.
        4. You used endl on cerr, which is redundant and inefficient. (Standard error has auto-flushing on newline, and endl is almost never what you want.)
  • by bytesex ( 112972 ) on Monday April 14, 2008 @02:33PM (#23067304) Homepage
    I like UNIX systems programming when it's complete; even when that surprises me. Recently, for example, I had to find a way to know how many processes had open file descriptors to a certain file. You know, the old shared database thing; so that I can make sure that I'm the only one in at a certain point (inside a file lock), to do some checks an'all. To no avail; UNIX basically said: 'if you can't do it with file locks, don't bother'. Then I discovered the good old sys/ipc.h and the associated sys/sem.h and sys/shm.h. Turns out that my issue *has* been thought about, and in a good way too. Sure, the APIs aren't all 'modern' feeling; lots of things are done with extremely short function-names, ellipsis and setting bits inside special structs, but it works. And it's fast too.
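
    The actual checks aren't shown here, but the classic System V pattern being described, a single semaphore used as a mutex around the shared file, sketches out roughly like this (the key path is made up):

        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        /* Minimal SysV semaphore enter/leave pair. Real code must also
         * initialize the semaphore value once with semctl(SETVAL),
         * which is itself the classic race to get right. */
        int main(void)
        {
            key_t key = ftok("/tmp/shared.db", 'S');    /* hypothetical path */
            int semid = semget(key, 1, IPC_CREAT | 0600);
            if (semid == -1) { perror("semget"); return 1; }

            struct sembuf lock   = { 0, -1, SEM_UNDO };  /* P: wait/decrement */
            struct sembuf unlock = { 0, +1, SEM_UNDO };  /* V: increment */

            if (semop(semid, &lock, 1) == -1) { perror("semop"); return 1; }
            /* ... exclusive section: run the checks on the shared file ... */
            semop(semid, &unlock, 1);
            return 0;
        }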

    Now if they only had a good standard API to a versioned, networked filesystem. Then I would be in heaven. But a guy can dream...
    • Re: (Score:3, Funny)

      Now if they only had a good standard API to a versioned, networked filesystem. Then I would be in heaven. But a guy can dream...
      If you want VMS, you know where to find it. ;)
    • Now if they only had a good standard API to a versioned, networked filesystem. Then I would be in heaven. But a guy can dream...


      Try ext3cow [ext3cow.com] and NFS.
  • by Anonymous Coward
    On most UNIX systems, the POSIX API is fully available to Perl scripts. One of the great things about Perl is that you get all sorts of high level features that aren't available in C, but then you also get all of the low level features that you often need when writing hardcore UNIX software. Best of all, Perl is damn fast, usually on par with C for most tasks. And it's often a lot faster when doing regular expressions work, for instance.
    • by moderatorrater ( 1095745 ) on Monday April 14, 2008 @03:05PM (#23067826)

      Best of all, Perl is damn fast, usually on par with C for most tasks
      Any way you could back that up with some numbers? I don't mean to say that you're wrong, but I'm skeptical about any claim that says an interpreted language can beat a compiled one. I would even be surprised if compiled perl could beat compiled C since C's been worked on so much longer and compiling perl into a binary isn't really its focus anyway.
      • by smcdow ( 114828 )

        ... interpreted language ...
        Ahem. When did perl become an interpreted language?

        • Their about page [perl.org] calls it the "perl interpreter" multiple times. How is it not an interpreted language?
          • It is not an interpreted language because it's compiled at runtime. Why do you think there isn't an interactive Perl interpreter (at least that I know of)?

            It's called an interpreter for the lack of a better name. It's usually used to run a script; I haven't seen it used too often to compile the scripts. The closest I can think of is perlcc.

            • Re: (Score:2, Informative)

              by Mornedhel ( 961946 )

              Why do you think there isn't an interactive Perl interpreter (at least that I know of)?

              Actually, you can start a debugger session with perl -de 1 (that's the number 1; any other empty script will do). That acts like an interactive Perl interpreter would (but really is a loop of "user entry/eval(user entry)/start again").

              Still, you're right in that Perl is a compiled-then-interpreted language (like Python and others).

            • Re: (Score:2, Insightful)

              by Bill Dog ( 726542 )
              It is not an interpreted language because it's compiled at runtime.

              Ultimately all source code has to get translated into machine code to be able to "run" the program. It's just a matter of when this happens (and how often). Once, on the developer's time. Or every time, on *my* fucking time. The former is compiled, the latter is interpreted.
            • It's interpreted, because you need read access to the script. If it were compiled, you wouldn't need read access, execute would suffice.

              (I know that's not what you hinted at, but it's an interesting distinction for some purposes - think about access password for your database).

      • by mr_mischief ( 456295 ) on Monday April 14, 2008 @03:29PM (#23068114) Journal
        Perl is compiled into an AST, goes through code improvements, and then is executed.

        Since it typically goes through this every time you use a program from the command line, the startup time tends to be pretty heavy.

        If you're using something like mod_perl or FastCGI or some other caching dispatch mechanism, your program gets dispatched without recompilation if it hasn't been changed.

        If your program is long-running, then the startup cost can become negligible.

        Perl's common routines are written in optimized C and with good algorithmic design in mind. If someone writes an equivalent from scratch in C instead of using a good library, then the Perl version will have been designed and refined by far more people.

        It's true that in many cases C comes out well faster than Perl, but those cases are not as common as people tend to think.
        • Re: (Score:3, Insightful)

          by pimpimpim ( 811140 )
          If you're going towards purely number-crunching applications, perl will actually end up being a lot slower, think of a factor of 100. I noticed this with some programs that run for at least a day, so the startup won't be much of a difference there. Searching the net for benchmarks, I found similar ratios for simple addition calculations. More important than the algorithm optimization: Perl takes the memory allocation out of your hands, which is extremely good for stable programs, but the performance price is im
      • Re: (Score:3, Informative)

        by skeeto ( 1138903 )

        Any way you could back that up with some numbers?

        Unless your program only crunches a lot of numbers during its entire runtime (for example the ImageMagick tools) your program will spend most of its time waiting on some kind of I/O. This encompasses pretty much all software you will find on a normal desktop computer. Perl and C both spend the same amount of time waiting on I/O operations. It comes down to spinning disks or waiting on the slow, clumsy fingers of users.

        On the other hand, Perl is faster when it comes to development time. The Perl progra

  • K&R (Score:5, Interesting)

    by christurkel ( 520220 ) on Monday April 14, 2008 @02:55PM (#23067664) Homepage Journal
    You can probably tell I can't program, but what is "K&R level of experience" ?
    • by Otter ( 3800 )
      A basic familiarity with C syntax and simple code examples.

      (In general, the beard-and-suspenders set's insistence upon K&R specifically as an introductory programming text does students a disservice. It's a beloved historical artifact, but it's hardly the best current text for new programmers to start with. I doubt if K&R themselves would argue otherwise.)

      • by shrykk ( 747039 )
        A basic familiarity with C syntax and simple code examples.

        (In general, the beard-and-suspenders set's insistence upon K&R specifically as an introductory programming text does students a disservice. It's a beloved historical artifact, but it's hardly the best current text for new programmers to start with. I doubt if K&R themselves would argue otherwise.)


        If you think K&R only exposes you to simple code examples, you haven't really read it.

        It's beloved because of its thoroughness despi
      • What's specifically better for someone looking to learn C than the second edition of K&R, which covers ANSI C?

        There are certainly better books for first-time programmers, but that's largely because there are better languages for first-time programmers.

        As an introduction to the C language, I think the book is a good tool. As an introduction to programming, perhaps SICP, something on ocaml, or something teaching a modern C descendant like D, Objective C, or even C#.
    • K and R wrote THE text on C, informally defining a standard.

      http://en.wikipedia.org/wiki/K_and_R_C [wikipedia.org]

      He probably means he's read the book. It doesn't contain things like systems programming.
  • "...and where this is so it are noted in the text..."

    Well, I hope it aren't noted using grammar like that.
  • how does it compare with Stevens (RIP)?
    • Well, for one, it doesn't have what Stevens wrote in that entire other two-volume set, Unix Network Programming. This had Volume 1 (originally Sockets and XTI, but I think later just Sockets), and Volume 2: Inter-Process Communications.

      The great thing about sticking to Advanced Programming in the Unix Environment for what it covers is that the same guy wrote the networking stuff in those others. He also wrote 3 volumes of TCP/IP Illustrated in case you really want to dig deeply into networking.

      If you want t
  • by sticks_us ( 150624 ) on Monday April 14, 2008 @03:10PM (#23067892) Homepage
    ...if the amazon reviews [amazon.com] are accurate.

    O'Reilly is great, but I do think you gotta be careful; a lot of their books can, at times, seem to be mostly printouts of man pages (and other freely available documentation), as this reviewer notes:


    If you expect the quality of the author's other books from this book, you'll be disappointed. It just lists system calls and their descriptions that you can find from man pages without any serious examples. It doesn't provide any insight or thorough coverage you can find from other books such as Steven's book.


    Richard Stevens [wikipedia.org] was definitely "the man" when it came to writing books like this; I'd recommend them to anyone. Anyone who attempts to cover the same ground (even years later) has a tough act to follow.

    I've bought a lot of computer books over the years, and for my money, none have been as well-written and valuable as Stevens'.

    RIP, Richard.

    • Anyone who attempts to cover the same ground (even years later) has a tough act to follow.
      I think Mr. Love would agree whole-heartedly. I believe it was the lackluster APUE 2e (Rago's revision of Stevens's work) which motivated Robert to produce this work. Robert's book is much more concise and useful to a linux developer when compared to APUE 2e, where much attention is devoted to unix implementations other than linux.
    • by jgrahn ( 181062 )

      Richard Stevens was definitely "the man" when it came to writing books like this; I'd recommend them to anyone. Anyone who attempts to cover the same ground (even years later) has a tough act to follow.

      IIRC, someone came out with a revised APUE two years ago or so.

      But yeah -- there are a few interesting questions. What does this book offer which isn't in Advanced Programming in the UNIX Environment, the man pages and the relevant standards? Is the reviewer familiar with Stevens' work? And why is the revie

  • by Daniel Phillips ( 238627 ) on Monday April 14, 2008 @03:11PM (#23067906)
    Robert has done plenty of useful work, but it was George Anzinger [linuxsymposium.org] who developed the Linux preemption patch. Robert picked it up, maintained it and got it merged. The credit to George seems to have gotten lost somewhere in that process.

    Credit where credit is due please.
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Monday April 14, 2008 @03:19PM (#23068006) Homepage Journal

    Build your code with -Wall -Werror (or your compiler's equivalent). Once you clean up all the crud, that pops up, crank it up with -W -Wno-unused-parameter -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith. Once there — add -Wreturn-type -Wcast-qual -Wswitch -Wshadow -Wcast-align and tighten up by removing the no in -Wno-unused-parameter. The -Wwrite-strings is essential, if you wish your code to be compiled with a C++ compiler some day (hint: the correct type for static strings is "const char *").

    For truly clean code, add -Wchar-subscripts -Winline -Wnested-externs -Wredundant-decls.

    The people, who wrote and maintain the compiler, are, most likely, several levels above you in understanding programming in general and C-programming in particular. Ignoring the advice their code generates is foolish on your part...

    As a minimum, solved warnings will make your code more readable by reducing/eliminating the "Why is he doing this?" questions. More often than not, they point out bugs you would otherwise spend hours chasing with a debugger later.

    And they make your code more portable. But if you don't understand, why a warning is generated — ask around. Don't just "shut it up". For example, initializing a variable at declaration is usually a no-no. If the compiler thinks, the variable may be used before being initialized, scrutinize your program's flow. If you can't figure out, it may some times be better to disable this one warning temporarily with -Wno-uninitialized to move on, instead of shutting it up for ever by a bogus "= 0" or some such...
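
    A tiny illustration of the kind of flow that makes the compiler complain (names invented; gcc needs optimization enabled, e.g. -Wall -O2, for this warning):

        int lookup(int key)
        {
            int value;                  /* deliberately left uninitialized */

            if (key > 0)
                value = key * 2;
            /* gcc warns: 'value' may be used uninitialized, because the
             * key <= 0 path never assigns it. Fix the flow; don't bolt
             * on a bogus "= 0" that merely silences the message. */
            return value;
        }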

    The book may well say something about respecting warnings, but the review does not, which is a shame.

    • by david.emery ( 127135 ) on Monday April 14, 2008 @03:53PM (#23068430)
      Many studies (e.g. the Bell Labs 5-ESS fault analysis) and anecdotal stories indicate that failing to check the error return on a system call (or any other function, for that matter) is all-too-common. Adding to this problem, when a system call fails, often the manifestation/error/seg fault is not at that point of call, but further down, when a pointer/variable you expect to have meaningful data is null/garbage...

      That's why, when we did the Ada Binding to POSIX (IEEE 1003.5/ ISO 9945), we decided to accept the overhead of imposing exceptions for system call error returns (in most cases). You can't ignore the exception!

      This raised two interesting concerns that we discussed when developing the standard:

      1. What about tasking/threads/concurrency? The requirement on the implementation was to set up per-task errno values. From an implementation perspective, this meant that you needed to go outside of the standard interface to correctly implement POSIX/Ada, as you needed to grab the errno value and load it into task-specific storage, or require that your underlying POSIX threads implementation (if that's how you built the Ada runtime) do that for you. In practice, this is not too onerous, and it's proven to be a real boon for ensuring proper behavior (including debugging) in a multithreaded/multitasking environment.

      2. We also needed to think about the situation (usually representing really poor programming) where an unhandled exception (from a system call, an application call, or a language predefined exception) rips up the callstack and terminates the process. We wanted a return value from the process exit that would be 'close to 1 but not collide with commonly used values.' The number we chose: 42 (with the appropriate citation in the bibliography:-)

      So sure, a C++ program can use the C binding, but I think defining and using C++ exceptions in a better C++ interface would be preferred.

      dave (Tech Editor for the original IEEE P1003.5 project...)
      • Dave, this is fascinating, but rather unrelated to my post. I don't know, why you chose to post a follow-up, rather than start a thread of your own.

        The number we chose: 42 (with the appropriate citation in the bibliography:-)

        Interestingly, 42 is not listed in /usr/include/sysexits.h on neither Solaris, nor FreeBSD, nor Linux...

        • by david.emery ( 127135 ) on Monday April 14, 2008 @04:09PM (#23068652)
          That's not surprising, since the use of '42' is an artifact of the Ada binding, and those systems do not by default contain an implementation of 1003.5/9945. They should, but that's another story. Ada actually meshes very nicely with Unix, and is a good choice for system-level programming above the kernel level. Strong Typing -is your friend-! (I've been doing library level system programming on Unix systems, starting with Ultrix in 1984...)

          The standard Linux/Solaris Ada compiler is the GNU Ada Compiler, http://www.gnat.com

          But at least it's good to know there isn't a conflict.

                dave

        • Re: (Score:3, Funny)

          by T.E.D. ( 34228 )

          Interestingly, 42 is not listed in /usr/include/sysexits.h on neither Solaris, nor FreeBSD, nor Linux...


          Well of course you wouldn't want to *list* 42 as a possible exit code. If you did that, we'd be continually getting our Ada programs interrupted by Vogon destructor fleets.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Quit, using, so, many, commas. It will, make, you easier, to understand. And people, may actually, finish, reading, what you write. If in, doubt, don't use a, comma.
    • by jd ( 1658 )
      For most purposes, -Wall will suffice on GCC. However, they should add a more comprehensive warning option and call it -Wall-to-wall.
      • by mi ( 197448 )

        However, they should add a more comprehensive warning option and call it -Wall-to-wall.

        That's what -W is :-)

        For more — add the flags by hand. I listed quite a few — borrowed from FreeBSD's BDECFLAGS (collected by Bruce Evans).

    • Let me preface this by saying that I've been accused of having terrible C style before, but I don't understand why initializing a variable at declaration is a bad thing. Would you mind explaining?
      • by mi ( 197448 )

        First of all because it is (slightly) inefficient. Second is because in most cases, the variable will get some other value later on in the function — and you'd like the compiler to tell you, if in some cases it may not get it.

        Huh. I always thought it was safer to give it some kind of flag value and catch for it later. Live and learn, and thanks for the explanation.
    • And they make your code more portable. But if you don't understand, why a warning is generated — ask around. Don't just "shut it up". For example, initializing a variable at declaration is usually a no-no. If the compiler thinks, the variable may be used before being initialized, scrutinize your program's flow. If you can't figure out, it may some times be better to disable this one warning temporarily with -Wno-uninitialized to move on, instead of shutting it up for ever by a bogus "= 0" or some such...

      So, what you are saying is that you'd rather see the program fail with a completely bogus value you have no idea where it is coming from (which is whatever was on the stack at the time the variable was pushed) than a known invalid initialization value (e.g. -1) you pick and you set your variable to?

      This has long debugging session written all over it...

      • Re: (Score:3, Informative)

        by mi ( 197448 )

        So, what you are saying is that you'd rather see the program fail with a completely bogus value you have no idea where it is coming from (which is whatever was on the stack at the time the variable was pushed) than a known invalid initialization value (e.g. -1) you pick and you set your variable to ?

        This sort of error is easily caught with something like Purify [wikipedia.org] or valgrind [wikipedia.org].

        Also, if the warning was generated, you disabled it, and your program failed with a random result, that's a very good indicator, tha

      • by jeremyp ( 130771 )
        The point is that the compiler is supposed to catch the error before you even run the program once.
    • i prefer to push my luck! :D
    • Re: (Score:3, Informative)

      by jgrahn ( 181062 )

      Build your code with ...

      I always use -W -Wall -pedantic -std=c89 plus any glibc #defines to enable POSIX/BSD/whatever functions I need.

      Seeing people respect and use the gcc warning flags makes me happy, but I don't know why you chose to leave out -pedantic and (more importantly!) the option to select which bloody language you are feeding the compiler.

      But if you don't understand, why a warning is generated ask around. Don't just "shut it up". For example, initializing a variable at declaration is usual

  • by Anonymous Coward

    As soon as I heard that Robert Love had written this book about userspace programming, I rushed to buy it.

    I had really enjoyed both "Linux Kernel Development" (Developer's Library, 2003) and "Linux Kernel Development 2nd ed." (Novell, 2005). I like how clearly and brightly the author describes linux internals, from the major architectural components to the key code chunks.

    This book was a great surprise. It's the best you may desire when you have to quickly design and develop complex solutions with glib

  • Robert Love .... (Score:3, Informative)

    by NullProg ( 70833 ) on Monday April 14, 2008 @08:15PM (#23071766) Homepage Journal
    Is a great kernel developer/programmer (He also does columns for Linux Journal). He is not a general purpose Linux programming author.

    Getting this book out of the box, I had wrongly been expecting a cookbook style that I would get instant gratification from. Although structured around common programming tasks, it doesn't lend itself to just dipping in.

    For getting your feet wet with Linux programming I recommend GNU/Linux Application Programming by M. Tim Jones or Linux Application Development by Michael K. Johnson and Erik W. Troan.

    The Linux Unleashed series is also good (1000+ pages with hundreds dedicated to perl, python, and Gtk programming).

    Enjoy,
  • If this is anything like Linux Kernel Development by the same author, it is not aimed at those new to C. I suggest getting at very least a C pocket book, and reading up (thoroughly) on pointers before diving into this.
    • by boyter ( 964910 )
      I really don't get why people talk about pointers being a difficult concept. The only slightly obtuse thing about them is the syntax, and as anyone who can code in more than one language will tell you, learning syntax is easy.
        I don't think pointers are difficult; I just know enough developers who don't bother checking them. Memory leaks, segmentation faults and stack corruption are symptoms of programmers who use pointers flippantly.
  • Ride the Tour de France like the pros. This book encourages you to think about the possible technical and physical hurdles you may encounter when riding the longest, toughest bicycle race known to men. The book covers some basic subjects like wheel tuning, seat height and handlebars using real life examples. Other chapters cover more advanced subjects such as sprinting, time trials and team-riding. Mountain stages are not included in this book. I recommend this book to anyone who has a need to ride bicycle
