Linux Patch Management

Ravi writes "Any system or network administrator knows the importance of applying patches to the various software packages running on their servers, whether for bug fixes or for security vulnerabilities. When you are maintaining just a single machine, this is a simple affair of downloading the patches and applying them. But what happens when you are managing multiple servers and hundreds of client machines? How do you keep all the machines under your control up to date with the latest bug fixes? Obviously, it is a waste of time and bandwidth to individually download all the patches and security fixes for each machine. This is where the book Linux Patch Management - Keeping Linux Systems Up To Date, authored by Michael Jang, gains significance. The book, released under Bruce Perens' Open Source Series, aims to address the topic of patch management in detail." Read the rest of Ravi's review
Linux Patch Management - Keeping Linux Systems Up To Date
author: Michael Jang
pages: 270
publisher: Prentice Hall
rating: 8
reviewer: Ravi
ISBN: 0-13-236675-4
summary: This book offers Linux professionals start-to-finish solutions and examples for every environment, from single computers to enterprise-class networks.


The book is divided into seven detailed chapters, each covering a specific topic related to patch management. In the first chapter, the author introduces the basic patch concepts and the distribution-specific tools available to the user, including the Red Hat up2date agent, SUSE's YaST Online Update, Debian's apt-get, and community-based sources such as those for Fedora. What I found interesting was that instead of just listing the various options a user has for patching a system, the author goes the extra mile to stress the need for maintaining a local patch management server, and for supporting multiple repositories on it.

The second chapter deals exclusively with patch management on Red Hat and Fedora based Linux machines. Here the author walks the reader through creating a local Fedora repository. Maintaining a repository locally is not just a matter of downloading all the packages to a directory on your local machine and hosting that directory on the network. You have to deal with a number of issues: the hardware requirements, how to arrange the partitions, how much space to allocate to each, whether you need a proxy server, and more. The author sheds light on all these aspects while building the repositories. I particularly liked the section where he describes in detail the steps needed to configure a Red Hat Network proxy server.
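
As a rough illustration of the kind of setup this chapter describes (the paths below are placeholders, not taken from the book), a directory of downloaded packages can be turned into a repository with the createrepo tool and then published with an ordinary web server:

    # generate yum metadata for a directory of downloaded update RPMs
    createrepo /var/ftp/pub/fedora/updates

    # publish the directory with Apache so LAN clients can reach it
    ln -s /var/ftp/pub/fedora/updates /var/www/html/fedora-updates
    service httpd start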

The third chapter, "SUSE's Update Systems and rsync Mirrors," describes in detail how one can manage patches with YaST; what up2date is to Red Hat, YaST is to SUSE. Around 34 pages are devoted to explaining every aspect of updating SUSE Linux, from YaST Online Update to using rsync to set up a YaST patch management mirror for your LAN. The highlight of this chapter, though, is the explanation of Novell's approach to managing the life cycle of Linux systems, ZENworks Linux Management (ZLM). Even though the author does not go into the details of ZLM, he gives a fair idea of the topic, covering basic tasks such as installing the ZLM server, configuring the web interface, adding clients, and so on.
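
The rsync part of the chapter boils down to pulling the update tree from a public SUSE mirror onto a machine on your own network; a minimal sketch, with the mirror host and local path as placeholders, looks like this:

    # mirror the SUSE update tree onto the local server (typically run from cron)
    rsync -avz --delete rsync://mirror.example.com/suse/update/ /srv/suse/update/

Clients then add the local server as a YaST installation or update source instead of going out to the internet for every box.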

Ask any Debian user what he feels is the most important and useful feature of the OS, and nine times out of ten the answer will be Debian's superior package management. The fourth chapter takes an in-depth look at how apt works. Usually a Debian user is exposed to just a few of the apt tools; in this chapter the author explains all the tools bundled with apt, which makes it a ready reference for anyone managing Debian-based systems.
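
For readers who have only ever run apt-get install, a few of the other tools covered there look roughly like this (the package name is just an example):

    apt-get update                   # refresh the package lists from all configured sources
    apt-get -s dist-upgrade          # simulate a full upgrade without changing the system
    apt-cache search openssh         # search the package database
    apt-cache policy openssh-server  # show candidate versions and where they come from
    dpkg -l openssh-server           # check what is actually installed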

If the fourth chapter concentrated on apt for Debian systems, the next chapter explores how the same apt package management utility could be used to maintain Red Hat based Linux distributions.
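
In practice the client-side mechanics are the same as on Debian; what changes is that apt-rpm uses rpm-type entries in sources.list, along these lines (the server and path here are illustrative only):

    # /etc/apt/sources.list on an apt-rpm based system
    rpm     http://updates.example.com fedora/linux/4/i386 updates
    rpm-src http://updates.example.com fedora/linux/4/i386 updates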

One of the biggest complaints from users of Red Hat based Linux distributions a few years back was the lack of a robust package management tool in the same league as apt. To address this need, a group of developers created an alternative called YUM. The last two chapters of the book explore how one can use YUM to keep a system up to date, as well as how to host one's own YUM repository on the LAN.
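
Pointing clients at such a LAN repository is typically a matter of dropping a small file into /etc/yum.repos.d/, something like the following (the baseurl is a placeholder for your own server):

    # /etc/yum.repos.d/local-updates.repo
    [local-updates]
    name=Local update repository
    baseurl=http://repo.example.com/fedora-updates/
    enabled=1
    gpgcheck=1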

Each chapter of the book explores a particular tool for patch management in Linux, and the author gives an in-depth explanation of its usage. Linux users, irrespective of which distribution they run, will find this book very useful for hosting their own local repositories, because the author covers all the distribution-specific tools. The book is peppered with examples and walkthroughs, which makes it an all-in-one reference on the subject of Linux patch management.

Michael Jang specializes in networks and operating systems. He has written books on four Linux certifications, and his RHCE guide is particularly popular among students preparing for Red Hat certification. He also holds a number of certifications himself, including RHCE, SAIR Linux Certified Professional, CompTIA Linux+, and MCP.


You can purchase Linux Patch Management - Keeping Linux Systems Up To Date from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Update: 02/07 14:52 GMT by J : Book rating changed from an intended 4 (of 5) stars to Slashdot-normalized 8 (of 10), by Ravi's request.
  • Patches using RPM (Score:4, Interesting)

    by IMightB ( 533307 ) on Monday February 06, 2006 @04:43PM (#14653945) Journal
    What I want to know is how to issue patches via RPM rather than distributing the whole app again, whether using some sort of binary diff or just packaging the changed files, and how to manage things like this with the RPM database. I know that SuSE has "patch" RPMs, but I can't find any info on how these are created, or how they are viewed/managed by the RPM DB.

    Anyone?
    • My Ubuntu tells me when a new update is available; Patches are then one click away.
      • At last... the blackmagic and ignorance that abounds in Windows has found its way into Linux :)

        Just click here and you'll be OK... trust us :(
      • While nice, that doesn't seem to be what the original poster was getting at. He wanted a package manager that distributes only what has changed in the binary, rather than downloading the whole new file. Essentially, the question was about size/bandwidth concerns, not ease of use.
    • by jd ( 1658 )
      Step 1 is only required if some patches are optional AND either have to be applied in a certain order or can't be applied together at all. This step involves using the %pre phase of the RPM to roll back any changes that would clash with the patch that you want to install.

      Step 2 has two parts. Files that simply overwrite existing files can be installed with no further change. There probably wouldn't be too many examples of those. The other part is to install patch files into a patch archive directory.

      Step 3 h

    • Re:Patches using RPM (Score:2, Informative)

      by Subrafta ( 848399 )
      I've used RPM to patch and update / change configurations on a number of in-house applications. We've got several hundred Red Hat systems of various vintages installed on customer networks. Basically I make the "patch" RPM dependent on the original RPM, then use the %pre and %post areas of the spec file to ID the target system, update configurations, start / stop services, and move updated files into place. It's not a perfect system and I only use it to do automated, broad-based configuration changes, no
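
      The relevant fragments of such a spec file might look roughly like this (the package name, paths and service name are invented for illustration):

        Name:     myapp-hotfix1
        Version:  1.0
        Release:  1
        Summary:  Configuration hotfix for myapp
        # depend on the package being patched
        Requires: myapp

        %pre
        # identify the target system before touching anything
        grep -q "Red Hat" /etc/redhat-release || exit 1

        %post
        # move the updated file into place and restart the service
        cp /usr/share/myapp-hotfix1/myapp.conf /etc/myapp/myapp.conf
        service myapp restart || :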
    • by Peter H.S. ( 38077 ) on Monday February 06, 2006 @06:20PM (#14654826) Homepage
      What I want to know is how to issue patches via RPM rather than distributing the whole app again.

      I will try to answer why this probably won't happen in the foreseeable future, and why it is probably not a good idea.

      The only advantage that a binary patch system has over distributing the whole RPM package is that it saves bandwidth.
      A major disadvantage of such a system is that it creates twice the overhead, since most of the work a Linux distributor has when patching its software is the (regression) testing. The distributor would now have to track _and_ test two kinds of updates: binary diff packages and whole packages. They can't skimp on testing either of the two types, since that would almost certainly mean that a trivial error borks the untested package, which would then hose thousands of machines. And if the distro skimps on distributing the whole packages, well, then types like me would start to whine about how much it sucks to keep track of "package" + "hotfix_1" + "hotfix_2" + "hotfix_3" instead of just getting "updatedpackage".
      The package management systems would also have to be reworked, since they would now have to keep detailed track of packages and updates, and the exact order in which to apply those updates. (When I was working with MS Windows servers years ago, it was not uncommon for Windows Update to lose track of updates and installed software, so that old software would overwrite new security patches.)

      In short, a binary diff patch system would mean a lot of work for a negligible gain.

      Way back when I started with Linux, I also thought it was a good idea to distribute just binary diff updates, since that was what I was used to, and because it somehow seems wasteful to distribute a whole package.
      I changed my mind when I actually started to manage some Linux servers.

      --
      Regards
      Peter H.S.
      • I can see your point; however, my situation doesn't involve general distribution to the public. It's more like releasing to our Operations Department, who don't want to install an 80+ MB RPM for a PHP script patch. In this situation they actually want RPM + Hotfix 1 + Hotfix 2... or Patchrollup.rpm.

        Historically, these have been provided in a tar.gz file that installed OVER the RPM files, which makes it almost impossible to go through and see what patch level something is at.

        With RPM I can
        • Have you looked at the --justdb flag? It makes changes to the RPM database without installing files. You could script your upgrades so that the updated files are installed with tar and then the rpm database is updated with rpm --justdb. I've done something similar with an internal package which was initially released as a tarball so it was easier to keep track of.

          The downside to this is that it's prone to errors (e.g., you make a mistake and the rpm database could think that package owns files that don't ex
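
          In script form the idea is roughly this (package and file names invented):

            # unpack the updated files over the existing installation
            tar xzf myapp-1.2.tar.gz -C /

            # then record the new version in the RPM database only,
            # without installing any files a second time
            rpm -Uvh --justdb --nodeps myapp-1.2-1.noarch.rpm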
    • APT makes this stuff so simple, that the idea of writing a book on it is ridiculous. Just configure machines, using other apt tools for major roll-outs if necessary, and set up a proxy server which caches patches, then serves them to clients. If you want to pre-approve patches, you can do that too by running commands on demand rather than automatically. Simple. Or, as simple as you can expect it to be, at least, given that patches sometimes break stuff on any system. What we really need though, is orga
    • Mandriva also uses patch RPMs of this kind, and provides the tools to generate the delta rpms (in the "deltarpm" package), so here are some extracts from the man page:

      NAME
      makedeltarpm - create a deltarpm from two rpms

      SYNOPSIS
      makedeltarpm [-v] [-V version] [-z compression] [-s seqfile] [-r] [-u]
      oldrpm newrpm deltarpm
      makedelt
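
      On the client side, the companion tool applydeltarpm rebuilds the full package from the delta; roughly (file names are just examples):

        # build a delta between two revisions of a package
        makedeltarpm myapp-1.0-1.i586.rpm myapp-1.0-2.i586.rpm myapp-1.0-2.drpm

        # reconstruct the new rpm on the client from the installed old version
        applydeltarpm myapp-1.0-2.drpm myapp-1.0-2.i586.rpm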
  • by ezs ( 444264 ) on Monday February 06, 2006 @04:43PM (#14653947) Homepage
    This is a nice review of 'patching Linux' - and it's a subject close to my heart. Usual disclaimer - ZLM is partly my product and baby. One thing the review clearly describes - there's a lot of choice out there. From Red Hat Network, to Novell update, to YaST Online Update - and then there are yum, apt, etc. One of the cool things that ZENworks Linux Management brings to the table is the ability to integrate multiple sources of patches - RHN, YOU, Novell, roll-your-own, apt - and bring them into a central release server and control what goes where. For those who are too small or don't want to shell out for ZENworks - remember there is also the fully open source Open Carpet product - http://opencarpet.org/ [opencarpet.org]
  • At least in the RPM world, one would be negligent not to mention red-carpet and smart, IMHO the two best package managers out there. Although red-carpet has morphed into Novell's ZLM package, it is still the best system for enterprise Linux patch management, even if you use RedHat or some other non-Novell distribution. Smart is still in beta, but it is currently quite stable and functional even in its development state. Smart is definitely the next gen package manager, taking all the great features of apt-g
    • Clipped from the Smart homepage...

      The Smart Package Manager project has the ambitious objective of creating smart and portable algorithms for solving adequately the problem of managing software upgrading and installation. This tool works in all major distributions, and will bring notable advantages over native tools currently in use (APT, APT-RPM, YUM, URPMI, etc).

      Does this sound eerily similar to an academic publication to you? Regardless, it does seem to be something aspiring to be useful in the ente
      • LOL, I certainly agree with your claim on their mission statement. However, they have the product to back it up. The features I was talking about aren't *promised* features. They are there NOW. I use Smart on all my OpenSuSE and Fedora Core desktops in replacement of Yum and YaST, and it works fabulously. -DSR
  • by totro2 ( 758083 ) on Monday February 06, 2006 @04:45PM (#14653960)
    Old school commercial Unices like Solaris, HPUX, and AIX have "patches". Modern linux systems have "packages". Anyone who doesn't deal with a modern, automagical package management system like apt or yum is usually slogging through the mud unnecessarily. By updating a package, you get your patches. Most Linux users should never have to patch source code from tarballs, like the kernel or other software. This book may be useful for those few exceptions, however.
  • This only leaves running the updater manually to install updated kernels (by default it doesn't upgrade the kernel automatically, though you can of course change this) and the occasional reboot once you update a kernel (network services are restarted as needed). You just set it and forget it, like the Ronco Showtime rotisserie BBQ.
    • If you had two Linux virtual systems, you could update the one not in use, then fail-over all of the running applications to it. You then update the one that was in use. The kernel is then updated, but the user doesn't have to wait for a reboot, as a virtual system is always running.
  • by wawannem ( 591061 ) on Monday February 06, 2006 @04:46PM (#14653975) Homepage
    Chapter 1:
    apt-get update ; apt-get upgrade
    • For one machine, yeah, no problem.

      For 10 machines? 50? 100? 500? No thanks.
      • apt-cron

        although that only works when the patch doesn't need human attention
        • Then you'd have to trust that the distro doesn't self destruct by patches breaking your vital (read mission critical here) services.
          • Obviously you do testing on the test machines and only push the updates to your apt repository after they have been tested, at which point the production machines auto update with them.

            You don't point the production machines at the distro's repository, but non-retardation is an assumed and hence these bits aren't usually made explicit.
            • Say I have 50 production Linux servers (go on... say it) running various vital services and I have the budget for the required multiple duplicate "test" servers (and the trained staff) and the time to test every patch that is released deemed necessary by our security team. Lets say that one slips by the "exhaustive" tests. I'd still want a better solution than plain ole apt to undo the patches across the enterprise quickly despite my vast and obvious retardation.
              • for i in $(cat list-of-machines)
                do
                ssh $i command-to-roll-back-a-patch
                done
              • You make a valid point, but at the same time, here is how I look at it... You have to pick one of the following:

                1. Use APT (or insert any other similar tool [YUM, Portage, etc.]) which is heavily tested by thousands or even millions of developers and allows you to make all packages uniform as far as installation and packaging whether it is a homebrew package or a distro package.

                or

                2. Homebrew package management. Don't get me wrong, I'm not saying that this isn't a viable option, there are advantages to
          • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday February 06, 2006 @05:25PM (#14654307)
            Then you'd have to trust that the distro doesn't self destruct by patches breaking your vital (read mission critical here) services.
            No trust allowed.

            Before anything goes into production, it goes into test.

            YOU are the one responsible if a package breaks a production server.

            You can still set a cron job to auto-magically download and install the apps, but you'd point it to your own repository where you put only the packages that have passed your testing.

            The more "mission critical" something is, the less you want to automate ANY process that changes ANYTHING on the OS or apps.

            For our critical database server, I come in on the weekend and hand apply every patch. And that is AFTER those same patches have been applied to the test server.
    • I am taking you too seriously maybe ;)
      Anyway, first of all I'd use aptitude instead of apt-get. It has similar command line options (aptitude update, aptitude [dist-]upgrade), it has nice ways to resolve dependency problems, and it keeps a log of the upgrades (more precisely, of the upgrade requests, IIRC).

      Then, having each box do an update on its own is an unnecessary waste of bandwidth. There is stuff like apt-proxy [sourceforge.net].
      Another trick is to copy the .deb packages (ONLY the .deb packages) from the /var/cache/
      • Another trick is to copy the .deb packages (ONLY the .deb packages) from the /var/cache/apt/archive of an updated machine to the one to be updated. Apt recognizes it already has a local copy of the packages and refrains from obtaining them again from the network. Handy when installing a slightly old debian version on a new partition.

        Can you also just put them in a nfs share and mount that on your remote hosts? It works for gentoo... But I try to avoid debian because every time I mess with it, I get pis

        • Can you also just put them in a nfs share and mount that on your remote hosts?

          There might be problems when two machines mess with the "partial" subdirectory, which contains unfinished downloads. Of course that can be solved (remounting something over partial is the first thing that comes to mind), but then I'd choose some apt tools instead.
      • Really, you are taking me too seriously.

        My post was simply meant to make light of someone's attempt to write a book on a topic that seems trivial to me. Although my original comment was quite simple in nature, I was meaning to point to a versatile set of tools. IIRC, debian and the APT tools were developed because of Ian Murdoch's need to keep the Pixar render cluster up to date. Any 'debian in the datacenter' SysAdmin can tell you that the entire suite of APT tools is very handy. RedHat's recent attempt
        • As a Fedora user I am consistently wondering why no one mentions yum (the Yellowdog Updater, Modified) when they talk about apt-get, rhn, etc. yum is a command-line package management tool that can be configured to use multiple internet repositories, and individual repositories are easy enough to build, at least for small networks (I don't have any experience with more than about 6 computers). A GUI client, yumex, is available as well. Is there some reason no one mentions this?
      • Then, having each box doing an update on its own is an unnecessary waste of band. There is stuff like apt-proxy.

        I find that apt-cacher [nick-andrew.net] is much simpler and nicer. It doesn't support every possible method of fetching packages like apt-proxy purports to, but how many do you really need? HTTP seems plenty good enough.

        Another trick is to copy the .deb packages (ONLY the .deb packages) from the /var/cache/apt/archive of an updated machine to the one to be updated. Apt recognizes it already have a local co

    • chapter 2:
      emerge sync; emerge -uD world
      • If that's chapter 2, I know as a diehard Gentoo user that Chapter 3 is: Pray.
        • Bummer. I do that regularly without issue, except I usually add some more switches, like "emerge -vtDua world"; that is not good for automating because it prompts you to start the update, but it also gives you a chance to see what is going to be updated first.

          If you have a really old install and have not done an 'emerge -Du world', then I could see you running into problems. I had problems because I had installed Gentoo about 4 or 5 years ago and was not using the "-D" option for a while, which updates libraries
  • Maintaining a repository locally is not about just downloading all the packages to a directory on your local machine and hosting that directory on the network. You have to deal with a lot of issues here, like the hardware requirements, the kind of partition arrangement to make, what space to allocate to each partition, whether you need a proxy server and more.

    Umm, why? Does a package repository need to be more super-optimized than any other network resource?

  • > If the fourth chapter concentrated on apt for Debian systems

    Maybe you should re-read the book and pay more attention this time?
    • > Maybe you should re-read the book and pay more attention this time?

      "re-read" implies that the book was read once already; from its depth, I assumed this review was based on a hard look at the table of contents.
  • by Dogers ( 446369 ) on Monday February 06, 2006 @04:54PM (#14654028)
    Such as to the actual open source series?
    http://www.phptr.com/promotions/promotion.asp?promo=1484&redir=1&rl=1 [phptr.com]

    This book will be there as a PDF in a few months, or you can buy it in dead tree format now.

    Other books are also linked there.
    • Authors Wanted (Score:3, Interesting)

      by Bruce Perens ( 3872 ) *
      We're looking for more authors. All books in the series are placed under the Open Publication License (commercial use permitted, it's a real Open Source license) and made available in source and unencrypted PDF three months after they get to bookstores. Paper copies sell through about as much as other books outside of the series. We get good placement in brick-and-mortar bookstores like Borders and Barnes and Noble. But IMO the biggest benefit to authors is that once sales die down your book won't be locked
  • Book summary (Score:2, Interesting)

    by MirrororriM ( 801308 )
    1. Set up your own package server on the LAN - this means the package server will download from the internet. So you basically have ONE machine downloading from the net - the rest of the machines are done internally.

    2. Next, set up your sources.list file to point only to that server.

    3. 17 8 * * * root apt-get update; apt-get upgrade

    4. ???

    5. Profit!!!

    • Right, because everyone wants every patch on every system that they have as soon as it comes out. I guess that you don't look after a lot of systems, or a lot of important systems. This does not cut it in an enterprise environment.

      You need to QA the packages that you push out to all systems. You need to make sure that the patches you install keep the system as stable as it was before. A sysadmin where I work once imaged two systems about two weeks apart. He patched one system (without testing) and it took
      • No, apt-get update and apt-get upgrade, without using an internal repository and grabbing from a public system, would force untested patches. With an internal machine as a repository, you test the patches on a test machine before you put them on your internal production machine repository. The scheduled cron was for the internal machines to grab from the internal repository - since it would be the only resource in sources.list.

        Common sense applies...then again, this is Slashdot.
  • Just asking....

    -b
  • apt-proxy (Score:3, Informative)

    by Douglas Simmons ( 628988 ) on Monday February 06, 2006 @05:24PM (#14654297) Homepage
    For those of you running Debian networks with a lot of boxes, you can use apt-proxy to apt-update/upgrade and patch all the machines through one download.
  • What about Gentoo's system where you use emerge and emerge sync...
    Did it get a mention, or not?
  • Great review, interesting book, I'll consider buying a copy.

    But the name "Patch Management", sorry, that really grates on me. Almost universally, GNU/Linux systems have abandoned patches and instead upgrade whole components at a logical level. It's the best way to do it found so far, but I don't think of those as "patches".

    Or is that just me?
  • by bit01 ( 644603 ) on Tuesday February 07, 2006 @04:31AM (#14658277)

    I like making all files on all machines on a LAN, excluding network addressing, electronic licensing and logs, bit-for-bit identical. Doing so massively reduces management overhead and improves management control.

    I've managed networks of several hundred machines this way and it works well. I checksum all files and directories on all machines on a regular basis, and if anything's different in time or space I find out why and make sure it doesn't happen again. I've found dozens of very obscure and troublesome software and hardware bugs this way, have very good uptime, and I can concentrate on making sure the master machines are well configured rather than waste time trying to put out fires all over the network all the time. If individual machine classes need to have different configurations, I partition those differences out and manage them separately.

    Distributing patch packages is error prone. By working at the file level it's easy to be confident everything is okay. You can also often distribute and back out "patches" (just a list of files to be rsync'ed) in the background very quickly at short notice with minimal impact on users.
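
    A bare-bones sketch of that workflow (host names, paths and the exclude list are placeholders):

      # pull the master image onto a client, comparing files by checksum rather than timestamp
      rsync -avc --delete --exclude-from=/etc/sync-exclude.list master:/ /

      # periodically verify that nothing has drifted from the master's checksum list
      find / -xdev -type f -print0 | xargs -0 md5sum > /tmp/local.md5
      diff /tmp/local.md5 /srv/master.md5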

    ---

    Keep your options open!
