
Hurd/L4 Developer Marcus Brinkmann Interviewed 327

wikinerd writes "A few years ago, when the GNU OS was almost complete, the kernel was the last missing piece, and most distributors combined GNU with the Linux kernel. But the GNU developers continued their efforts and unveiled the Hurd in the 1990s; it is currently a functioning prototype. After the Mach microkernel was deemed insufficient, some developers decided to start a new project porting the Hurd to the more advanced L4 microkernel using cutting-edge operating system design, thus creating Hurd/L4. Last February one of the main developers, Marcus Brinkmann, completed the process initialization code and showed a screenshot of the first program executed on Hurd/L4, saying 'The dinner is prepared!' Now he has granted an interview about Hurd/L4, explaining the advantages of microkernels, the Hurd/L4 architecture, the project's goals, and how he started the Debian port of the Hurd."
This discussion has been archived. No new comments can be posted.


Comments:
  • DNF (Score:3, Funny)

    by Poromenos1 ( 830658 ) on Saturday March 19, 2005 @09:29AM (#11984100) Homepage
    Now they can begin porting Duke Nukem: Forever!
    • Re:DNF (Score:3, Funny)

      by ndogg ( 158021 )
      GNU is hoping to release this opposite the release of Longhorn, and 3D Realms has a deal with both GNU and Microsoft to have a port ready for both operating systems.

      I can't wait!
    • Re:DNF (Score:3, Insightful)

      by pbranes ( 565105 )
      Fire up your Project Xanadu browser and check out the link on Hurd and DNF!!
    • Mod parent +1: cruel, funny and brilliantly sarcastic.
  • by spaeschke ( 774948 ) on Saturday March 19, 2005 @09:31AM (#11984119)
    And no, I don't think it's necessarily a bad thing. One reality of open source OSes, though, is that there are always going to be people developing The Next Big Thing, and it dilutes effort over the wider spectrum. Some of the best minds in the scene get spread far too thin under this model.

    That's the difference between OSS and proprietary companies. They can focus like a laser on what they want to develop and leave a lot of the infrastructural heavy lifting to those hippy anarchists in the open source scene.

    It's win-win for them, because they get the benefit of a lot of what these groups produce, and often can improve upon it (BSD --> OSX). It's like having an unpaid R&D dept. working for you 24/7.

    • by Anonymous Coward
      So your theory is that Microsoft, Apple and Sun are all united together behind a single vision? Sorry to break it to you, but they're "fragmented" too. There's no more reason for all free software developers to be working on the same project than there is for all proprietary software developers to do so.
    • by spaeschke ( 774948 ) on Saturday March 19, 2005 @09:42AM (#11984190)
      No, not at all. You misunderstand what I'm saying. I don't think this tendency in OSS is necessarily a bad thing, either. However, look at how splintered just one branch of open source is, Linux. That's just one sect of the OSS movement, and the infighting there is legendary. Now toss in all the other Unix variants and their own subsets.

      You just don't see this in the proprietary companies. Sure, they compete with each other, but within the companies themselves there's much tighter integration.

      I think this has the tendency to make OSS be sort of the breeding ground for the real innovations in tech, but largely unable to provide the sort of polish that proprietary companies can. I also think it's a large part of what keeps projects like Linux, Unix, etc. from really breaking through in areas like the desktop.

      It's not necessarily a bad thing, but I think to dismiss it is a mistake.

      • by Taladar ( 717494 ) on Saturday March 19, 2005 @09:48AM (#11984230)
        I think what "keeps Linux from breaking through on the desktop" is mostly that the people who primarily want it to break through are the ones who only talk about software, while the people who develop it, and who have the power to change it in a way that would break through, don't want to sacrifice their vision of a good operating system, application, ... (depending on what they develop) for mass acceptance.
      • There's no infighting where you work?

        Who do you work for and where can I send my resume?
      • by killjoe ( 766577 ) on Saturday March 19, 2005 @01:59PM (#11985707)
        "I think this has the tendency to make OSS be sort of the breeding ground for the real innovations in tech, but largely unable to provide the sort of polish that proprietary companies can."

        This seems to be a popular theory here on Slashdot: the idea that the reason Linux hasn't broken through is that it lacks polish.

        History shows us that a more polished desktop does not always win. The Mac is a perfect example of this. During the '80s the Mac had a slick, polished interface while the PC had an ugly DOS command line. The PC won handily. Why is that?

        I suspect because it was more "open". You could stick cards in it, you could expand it, you could hack it, and most importantly it had Lotus 1-2-3 running on it, so the business community embraced it.

        For the exact same reasons I predict Linux will continue to advance onto the desktop. The final tipping point will come when businesses see competitive advantage in it. Once that happens I expect explosive growth in Linux adoption and, yes, world domination.

        • Price! (Score:4, Insightful)

          by bonch ( 38532 ) on Saturday March 19, 2005 @07:15PM (#11987582)
          History shows us that a more polished desktop does not always win. The Mac is a perfect example of this. During the 80s the mac was a slick polished interface while the PC had an ugly DOS command line. The PC won handily. Why is that?


          This is a classic logical fallacy in that you pick and choose the factors that support your argument. The PC didn't win because it was more "open." It won because it was cheaper than the $2000+ Macintosh, fueled by commodity PC clones (remember the phrase "PC-compatible"?) that competed with each other and brought prices down each year.
    • by benjcurry ( 754899 ) on Saturday March 19, 2005 @09:44AM (#11984208) Homepage
      This has been and always will be the essence of the OSS development structure. The illusion is that the OSS world is somehow united. The Hurd project has NOTHING to do with Linux. Or any BSD. Or Arch Linux. Or the GIMP. Just as Macromedia Dreamweaver has NOTHING to do with Frontpage. It's not splintering...they're completely different things.
      • The Hurd project has NOTHING to do with Linux.... Just as Macromedia Dreamweaver has NOTHING to do with Frontpage. It's not splintering...they're completely different things.

        I don't know Frontpage or Dreamweaver well enough to know if they couldn't be used on the same website, but using Hurd necessarily precludes using Linux on the same machine (and vice versa)... such mutual exclusivity would by itself indicate an overlap of functionality, and it stands to reason anyway since they're both operating systems.

    • Re:Focus (Score:3, Interesting)

      by gr8_phk ( 621180 )
      "They can focus like a laser on what they want to develop"

      Actually, the free developers are the ones who focus like a laser on what they want to develop. At a Big Dumb Company(tm) the developers may not focus as sharply as the "decision makers". Here, the decisions are made by the developers and hence there is better focus on the goals. If it were universally agreed what the goal should be, everyone would focus. Since it's not a given, people will latch onto things others may think are unnecessary. My own e

    • Except that GNU was started for software freedom years before the open source movement existed. GNU was not about pushing anything "open source". Making a fork of GNU into a proprietary OS would definitely not be considered any kind of advantage, because such a program would deny software freedom to its users. Nor is the focus of the free software movement an issue of perfecting a development model aimed at rallying unpaid labor to work on one's program.

      I suggest learning more about the difference between free software and open source [gnu.org]

  • by Anonymous Coward
    Hi Marcus,

    How many people do you think will submit questions thinking that this is a Slashdot interview?
  • GNU (Score:5, Interesting)

    by bcmm ( 768152 ) on Saturday March 19, 2005 @09:39AM (#11984166)
    GNU made most of the core programs that Linux normally uses, and they are universally considered excellent. So why is it so hard for them to make a kernel?

    Is it just loss of interest after Linux became popular?
    • Re:GNU (Score:3, Insightful)

      by Anonymous Coward
      Is it just loss of interest after Linux became popular?

      That, and I also seem to recall that they faced a choice: write a quick and dirty monolithic kernel, or write a much more complex (but theoretically more advanced, secure, robust, etc.) set of servers running on top of a microkernel. Linux came onto the scene and provided the former, so the GNU kernel folks decided to work on the latter. The latter, of course, being HURD.

      That's my simplistic take on things from what I've read in the past, anyway.
      • According to that other posting [slashdot.org] to this story, Hurd was already in development when Linus first asked for help developing his new OS.
      • Re:GNU (Score:2, Interesting)

        by Anonymous Coward
        Have you read the great debate? [oreilly.com] I don't think Linus would agree that the decision to use a monolithic kernel was driven by scarcity of development resources and time to market type thinking.
        • It doesn't matter what Linus thinks. Here is why:

          Suppose someone gets in a raft and starts heading to their friend's house. The friend happens to be downstream. They get there, and the friend says "I see you decided to come downstream." But he didn't even think about that, he just wanted to get something done. He was successful because he happened to go the way that was possible with the resources he had.

          Linus might have just wanted to make a down and dirty kernel to hack on. But it was successful because, like the rafter, it happened to go the way that was possible with the resources at hand.
      • Re:GNU (Score:3, Insightful)

        by shaitand ( 626655 )
        "but theoretically more advanced, secure, robust, etc.) set of servers running on top of a microkernel."

        There is no shortage of people who would disagree with you about microkernels. You also neglect to mention that they are incredibly slow.
        • Re:GNU (Score:4, Informative)

          by vegetasaiyajin ( 701824 ) on Saturday March 19, 2005 @07:27PM (#11987647)
          There is no shortage of people who would disagree with you about microkernels. You also neglect to mention that they are incredibly slow

          While it is true that microkernels are slower than monolithic kernels, they have many advantages. They can be more stable and secure, addressing two problems that plague current operating systems, including Linux.

          Regarding performance, everyone likes to take Mach as an example of how slow microkernels are. But many microkernel bashers seem to forget QNX, which has never been accused of being terribly slow. It is one of the best (if not the best) hard real-time operating systems out there. OK, it is proprietary, but it is proof that microkernel-based operating systems can be done right.
    • Re:GNU (Score:5, Informative)

      by Richard_at_work ( 517087 ) on Saturday March 19, 2005 @09:49AM (#11984239)
      I don't think so, because the Hurd was under development for quite a few years before Linux became popular. Personally, I think the GNU philosophy works excellently for individual programmers working on their own individual projects (as the GNU toolchain shows), but a lot of the larger projects that GNU has been involved in have stagnated sooner or later. It took a complete fork to kickstart GCC into version 3, and the Hurd has had its core architecture changed multiple times, so personally I think that the 'group' is more at fault than lack of interest etc.

      Oh boy, is this comment going down to -1 or what.
      • Re:GNU (Score:4, Insightful)

        by WolfWithoutAClause ( 162946 ) on Saturday March 19, 2005 @12:30PM (#11985197) Homepage
        I think it's the other way around. Large software programs are *hard*.

        The Linux kernel works because it fills a need, and because it fills a need, lots of people will want to work on it.

        The Hurd is more researchy and hence doesn't fulfill anyone's needs exactly, and yes, to the extent that Linux does fit them, there are fewer people working on the Hurd.

        In my experience, a program that fills 90% of the need for 10% of the effort will nearly always win out, even if the extra 10% costs another 90%. The Linux kernel was a quick hack (and I don't mean that in a bad way), whereas Hurd was trying for perfection...

    • Re:GNU (Score:2, Redundant)

      by pr0nbot ( 313417 )

      GNU made most of the core programs that Linux normally uses, and they are universally considered excellent. So why is it so hard for them to make a kernel?

      Most of the team left in the mid-90s to work on Duke Nukem Forever.

    • Re:GNU (Score:2, Interesting)

      by CondeZer0 ( 158969 )
      > GNU made most of the core programs that Linux normally uses, and they are universally considered excellent
      In my universe [bell-labs.com] they are considered junk; that, BTW, is the same "universe" as that of the inventors of Unix and C (and many other things).

      RMS and GNU never understood Unix and the Unix philosophy, and it shows; they can't code in C either. Take a look at the source of GNU coreutils some day... I did, and I'm still recovering from the trauma. And GCC and other GNU "tools" are no better.

      The only origin
      • Re:GNU (Score:3, Insightful)

        by say ( 191220 )

        take a look at the source of gnu-core-utils some day... I did it, and I'm still recovering from the trauma. And gcc and other gnu "tools" are not better.

        Uhm... you're saying that the GNU tools and projects should be assessed by their coding style? Who cares what the "Unix philosophy" of coding style is? The tools should obviously be assessed by how they work and mimic the original tools - and as far as I know, they are "up there" with the commercial unices. GCC is far better than any compiler from the o

        • Maintainability is a key attribute of a program, so yes, coding style is important. Many GNU programs are rich in OBFUSCATING_MACRO()s and unfamiliar terms, coupled with poor source-level documentation. The funny indentation and occasional pre-ANSI C (needed for bootstrapping the toolchain on some platforms) don't help.

          Anything the gnu group can do to ease the learning curve can only help them in the long term.

          • Re:GNU (Score:3, Interesting)

            by dosius ( 230542 )
            I'd like to see a full userland based strictly on this [sourceforge.net] and BSD, with as little GNU as possible in it, simply because BSD and Heirloom Toolchest are closer in function to TRUE Unix.

            Moll.
      • Re:GNU (Score:3, Interesting)

        by anothy ( 83176 )
        in my experience, both in Bell Labs and elsewhere, anyone with experience on "real" unixes thinks the GNU tools are second-rate at best. their coding ability isn't the primary issue (although there are certainly questions there); it is, as you said, their lack of understanding of the philosophy. at least the BSD folks, who got many things wrong in their own derivative works, understood the fundamental philosophy (mostly) of Unix.

        to be fair, GNU and Linux have made some very significant, very positive contributions
        • Re:GNU (Score:2, Insightful)

          by top_down ( 137496 )
          GNU and Linux are interesting for sociological/political reasons. they are not scientifically interesting.

          Amen to that.

          Happily the science of operating systems isn't very interesting anymore so this is no great loss. Political and economic reasons are of much greater importance, and this is where Linux shines.

        • uh... wrong (Score:5, Insightful)

          by jbellis ( 142590 ) <(jonathan) (at) (carnageblender.com)> on Saturday March 19, 2005 @12:40PM (#11985272) Homepage
          having adminned variously HP-UX, AIX, Irix, and Solaris boxes, one of the first things I did on a machine was install the gnu toolset. The proprietary stuff was years behind (Solaris was probably the worst) and getting almost anything modern to compile with them was a real bitch.
    • Because the GNU tools were largely a copy of existing Unix software? Operating systems involve lots of very uninteresting work like driver development and debugging, so unless you reach a critical mass of acceptance, you can't get very far. Plus, Linux and other free operating systems have poached a lot of the talent, I presume. Who wants to write a driver for Hurd when a driver for Linux will make you rich and famous and get girls (haha)?
    • It's adherence to some bad design decisions.
  • by rbarreira ( 836272 ) on Saturday March 19, 2005 @09:40AM (#11984170) Homepage
    Does anyone remember this quote from Linus Torvalds' first announcement [google.com] of his pet project "Linux"?

    I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows), and I've already got minix. This is a program for hackers by a hacker. I've enjouyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have.

    This was in 1991...
  • by Jovian_Storm ( 862763 ) on Saturday March 19, 2005 @09:41AM (#11984179) Homepage
    The kernel is the last missing piece? What's the first piece, an integrated browser?
  • Mirror (Score:4, Informative)

    by Anonymous Coward on Saturday March 19, 2005 @09:47AM (#11984220)
  • I have no idea why Linux became more popular in the first place, considering that there was already BSD and the HURD, but its current popularity has pretty much killed any chance HURD had. BSD still has enough adherents that it can be continuously developed, but HURD never did. All the programmers who a) have the expertise required and b) are willing to work for free are working on either Linux or BSD. I think HURD offers interesting possibilities, but I don't think a stable HURD will ever see the light of day
    • BSD has at least the tradition and mindset of the Berkeley distributions behind it and carries a different approach to security than Linux does (out of the box). But the various BSDs have always been behind in hardware support. Not a big deal for those seeking a stable server platform, but almost totally fatal for most desktop users. HURD is likely to be as far behind as BSD or more so; couple that with the lack of anything compelling about it other than the philosophy and you've got nothing more than a curiosity
      • But the various BSDs have always been behind in hardware support.

        This is not true, of course. BSD was in use when Linux was in its infancy, and thus had more drivers. In general, *BSD is not far behind Linux when it comes to supporting new hardware with open source drivers, and in some cases is ahead. Lagging behind is often due to hardware manufacturers refusing [undeadly.org] to give documentation.

        That said, several hardware manufacturers offer binary-only Linux drivers for their hardware. A well known example

    • I have no idea why Linux became more popular in the first place, considering that there was already BSD and the HURD,

      Linux became more popular than the HURD because it was ready and it worked. Linux became more popular than BSD because of a combination of factors, including the distaste of some people for the BSD licence (the commercial-forks allowance in particular), and uncertainty about its copyright status while the AT&T suit was still pending.

    • but its current popularity has pretty much killed any chance HURD had

      I really doubt it. If Linux hadn't been such a success in '91, my guess is that the GNU project as a whole would have been dead. HURD might have been in a more "finished" state, but it would still have fewer users than it will get when it eventually is released.

      I think F/OSS would be marginal without Linux. Now it's mainstream. That benefits all F/OSS projects.

    • I have no idea why Linux became more popular in the first place, considering that there was already BSD

      The license. If I'm going to do work for Apple and Microsoft, they can damn well pay me or at least give me a copy of the end product.

      As to HURD: who ever cared?

      TWW

    • I installed Linux on my machine in late 1991 (I think it was version 0.11). At the time, 386BSD wasn't officially out yet; I think it came out in early to mid 1992. Linux development was pretty rapid, especially during the first year, and 386BSD development was pretty slow.

      I think this is the real reason why Linux became so popular. By the time NetBSD started getting off the ground, there was already a pretty large Linux user base. If the timing and development pace of 386BSD had been different, things might have gone differently.

    • "I have no idea why Linux became more popular in the first place, considering that there was already BSD and the HURD,"

      BSD wasn't ready for 386 at the time, and had the AT&T lawsuit hanging over it. And with Hurd not ready to go now, what makes you think it was ready to go in the early 90s?
  • Good thing (Score:3, Interesting)

    by Skiron ( 735617 ) on Saturday March 19, 2005 @10:03AM (#11984331)
    You can't have 'too many' open source kernels. All right, the HURD is old and development is slow, but at least it is another choice.

    Consider the alternative if the GNU project hadn't been started by RMS... even Linux wouldn't have been around...
  • Microkernel Linux? (Score:3, Interesting)

    by Jovian_Storm ( 862763 ) on Saturday March 19, 2005 @10:38AM (#11984525) Homepage
    While we're on the topic of microkernels, wouldn't it be a good idea to gradually make the Linux kernel less monolithic, finally turning it into a nifty microkernel-based OS? Is there anything going on in this direction?
    • you should check out DragonFlyBSD [dragonflybsd.org]

      It is explicitly not a microkernel and they don't plan to make it one, but it has some microkernel-like properties. For example, programs do not invoke system calls directly; they pass through a translation layer in userspace. This allows a bunch of very cool things that I will not enumerate here because they're on the website.

      It's not done yet but they have a working release.
    • Linux is slowly moving in that direction, in a natural-selection sort of way. Initramfs and klibc ("early userspace") will allow things like root-on-NFS and in-kernel DHCP (which arguably shouldn't have been in the kernel in the first place) to move to userspace, but developers are already beginning to talk about using it to put things like partition table parsing and console terminal support into userspace as well. I'd guess that more people will start to see applications for early userspace once it's "complete"
  • by Ohreally_factor ( 593551 ) on Saturday March 19, 2005 @10:50AM (#11984610) Journal
    Could you please stop interviewing him and let Marcus get back to work? If you keep interviewing him, we're never going to see Hurd in a usable state.
  • by cies ( 318343 ) on Saturday March 19, 2005 @11:11AM (#11984742)
    I see many posts here and I wonder if y'all know that the HURD has a key feature, namely:

    >>it should become possible to replace a running kernel

    in other words, NEVER REBOOT AGAIN!

    In practice this is still hard to accomplish, but at least people are working on it. And yes, implementing this takes time.

  • by bdbolton ( 830677 ) on Saturday March 19, 2005 @11:12AM (#11984752) Journal
    Many of the posts above say Hurd is a waste of time. I suspect the Hurd team just enjoys hacking. I really don't think they care if it's a "waste" of time. They just love what they do. I think it's awesome to be so dedicated to your craft. Even if the Hurd never works... I bet they will still look back on the whole experience as something pretty cool.

    My own personal experience: I worked on an 8 month student project that in many ways failed in the end. But I would never consider that a waste of time. I learned so much and had a blast doing it.

    -bdb
    • by The_Dougster ( 308194 ) on Saturday March 19, 2005 @12:02PM (#11985026) Homepage
      Many of the posts above say Hurd is a waste of time. I suspect the Hurd team just enjoys hacking. I really don't think they care if it's a "waste" of time. They just love what they do. I think it's awesome to be so dedicated to your craft. Even if the Hurd never works... I bet they will still look back on the whole experience as something pretty cool.

      I think this hits the nail on the head. That's why I have enjoyed tinkering with Hurd over the years. I currently have a bootable Debian/Hurd partition, and I have recently built the L4/Hurd system up to its current state. I haven't been able to get banner running like Marcus did, but it's not for lack of trying.

      Many Slashdotters will say "Why waste your time with Hurd when BSD/Linux/Windows/OSX/etc. already works great and needs more contributors?" Well, it's my time, and if I want to play around with experimental source code then that's what I am going to do.

      I already have a nicely working Gentoo Linux system that I use most of the time, and I'm happy with it. However, I am one of those types that wants to always learn, and by following the progress of Mach/Hurd and now L4/Hurd I get to grow up with the operating system and there is a small chance that I will be able to make a useful contribution here or there occasionally.

      Hurd isn't trying to sell itself to become a replacement for your current favorite operating system. It is simply a project to create an OS based on advanced and sometimes theoretical computer science ideas.

      People like Marcus put a lot of effort into realizing these abstractions in code. Sometimes it doesn't work out and they have to backstep, but progress continues. I have been on the developer's mailing list for years, and honestly I don't understand 90% of what they are talking about but it is pretty interesting nonetheless.

      Hurd makes pretty heavy use of the GNU autotools (i.e. "./configure"), and a lot of the real benefit of the old Mach-based Debian/Hurd is that upstream sources have been patched so that you can hopefully build them on Hurd just by running configure and make. L4/Hurd is still Hurd, so all the work done is still relevant. When they whip the sheet off of the shiny new engine, the rest of the parts waiting on the shelf are already there.

      And they are making good progress. They have it to the point where a lot of people are working on getting libc to build, and once the kernel and libc are working, that is the keystone that lets all the other pieces of the puzzle come together.

      It's totally a research project. There is no agenda other than some people like Marcus thought it was a cool project and decided to fool around with it. I'm the same way, sometimes I get bored with Linux so I put on my Hurd cap and play around with it for a while.

  • Check this quote (Score:3, Interesting)

    by Anonymous Coward on Saturday March 19, 2005 @12:08PM (#11985065)
    From: ast@cs.vu.nl (Andy Tanenbaum)
    Subject: Re: LINUX is obsolete

    "I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design"

    To Linus Torvalds.
  • by wikinerd ( 809585 ) on Saturday March 19, 2005 @12:38PM (#11985260) Journal

    You can try Hurd/L4 right now by burning a bootable CD (ISO) from Gnuppix [gnuppix.org].

    You can read the interview through its Coral cache [nyud.net] or its MirrorDot cache [mirrordot.org] . There is also a Google cache [google.com].

    There is also a MirrorDot-cached PDF version of the interview that can be downloaded by clicking here [mirrordot.org].

    Thanks

  • by Animats ( 122034 ) on Saturday March 19, 2005 @12:51PM (#11985336) Homepage
    There are really only two microkernels that work - VM for IBM mainframes, and QNX. Many others have tried, but few have succeeded. Here's why.

    VM is really a hypervisor, or "virtual machine monitor". The abstraction it offers to the application looks like the bare machine. So you have to run another OS under VM to get useful services.

    The PC analog of VM is VMware. VMware is much bigger and uglier than VM because x86 hardware doesn't virtualize properly, and horrible hacks, including code scanning and patching, have to be employed to make VMware work at all. IBM mainframe hardware has "channels", which support protected-mode I/O. So drivers in user space work quite well.

    QNX [qnx.com] is a widely used embedded operating system. It's most commonly run on small machines like the ARM, but it works quite nicely on x86. You can run QNX on a desktop; it runs Firefox and all the GNU command-line tools.

    QNX got interprocess communication right. Most academic microkernels, including Mach and L4, get it wrong. The key concept can be summed up in one phrase - "What you want is a subroutine call. What the OS usually gives you is an I/O operation". QNX has an interprocess communication primitive, "MsgSend", which works like a subroutine call - you pass a structure in, wait for the other process to respond, and you get another structure back. This makes interprocess communication not only convenient, but fast.

    The performance advantage comes because a single call does the necessary send, block, and call other process operations. But the issue isn't overhead. It's CPU scheduling. If the receiving process is waiting (blocked at a "MsgReceive"), control transfers immediately to the receiving process. There's no ambiguity over whether the message is complete, as there is with pipe/socket type IPC. There's no trip through the scheduler looking for a process to run. And, most important, there's no loss of scheduling quantum.

    This last is subtle, but crucial. It's why interprocess communication on UNIX, Linux, etc. loses responsiveness on heavily loaded systems. The basic trouble with one-way IPC mechanisms, which include those of System V and Mach, is that sending creates an ambiguity about who runs next.

    When you send a System V type IPC message (which is what Linux offers), the sender keeps on running and the receiving process is unblocked. Shortly thereafter, the sending process usually does an IPC receive to get a reply back, and finally blocks. This seems reasonable enough. But it kills performance, because it leads to bad scheduling decisions.

    The problem is that the sending process keeps going after the send, and runs until it blocks. At this point, the kernel has to take a trip through the scheduler to find which thread to run next. If you're in a compute-bound situation, and there are several ready threads, one of them starts up. Probably not the one that just received a message, because it's just become ready to run and isn't at the head of the queue yet. So each IPC causes the process to lose its turn and go to the back of the queue. So each interprocess operation carries a big latency penalty.

    This is the source of the usual observation that "IPC under Linux is slow". It's not that the implementation is slow, it's that the ill-chosen primitives have terrible scheduling properties. This is an absolutely crucial decision in the design of a microkernel. If it's botched, the design never recovers. Mach botched it, and was never able to fix it. L4 isn't that great in this department, either.

    Sockets and pipes are even worse, because you not only have the scheduling problem, you have the problem of determining when a message is complete and it's time to transfer control. The best the kernel can do is guess.

    QNX is a good system technically. The main problem with QNX, from the small user perspective, is the company's marketing operation, which is dominated by inside sales people who want to cut big deals
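
    A minimal, illustrative sketch of the two IPC shapes described above (not taken from the comment or the interview): the first fragment uses the standard System V msgget/msgsnd/msgrcv calls, the second assumes QNX Neutrino's MsgSend/MsgReceive/MsgReply from <sys/neutrino.h>. The two fragments target different platforms; struct layouts, channel setup, and error handling are simplified assumptions, not a working server.

        /* Fragment 1 -- System V style: a request/reply round trip needs two kernel calls. */
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/msg.h>

        struct req { long mtype; char payload[64]; };   /* mtype 1 = request */
        struct rep { long mtype; char payload[64]; };   /* mtype 2 = reply   */

        long sysv_call(int qid, struct req *out, struct rep *in)
        {
            /* msgsnd() returns as soon as the message is queued; the sender
             * keeps running and the receiver merely becomes runnable. */
            if (msgsnd(qid, out, sizeof out->payload, 0) == -1)
                return -1;

            /* Only now does the caller block. The kernel takes a trip through
             * the scheduler, and the server may not be picked next -- the
             * latency penalty described above. */
            return msgrcv(qid, in, sizeof in->payload, 2 /* reply type */, 0);
        }

        /* Fragment 2 -- QNX Neutrino style: one call sends, blocks, and switches. */
        #include <sys/neutrino.h>

        long qnx_call(int coid, struct req *out, struct rep *in)
        {
            /* MsgSend() copies the request, blocks the caller, and -- if the
             * server is already MsgReceive-blocked -- transfers control
             * directly to it: no extra scheduler pass, no lost quantum. */
            return MsgSend(coid, out, sizeof *out, in, sizeof *in);
        }

        void qnx_server_loop(int chid)
        {
            struct req msg;
            struct rep reply = { 2, "done" };

            for (;;) {
                /* Block until a client calls MsgSend() on this channel. */
                int rcvid = MsgReceive(chid, &msg, sizeof msg, NULL);
                if (rcvid == -1)
                    continue;
                /* ...handle the request... */
                /* MsgReply() unblocks the client and delivers the reply. */
                MsgReply(rcvid, 0 /* status */, &reply, sizeof reply);
            }
        }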

    • by Hard_Code ( 49548 ) on Saturday March 19, 2005 @01:22PM (#11985500)
      I am obviously not as well versed in operating system design as you are, but it seems what you are posing is merely an impedance mismatch in programming models.

      For programs that want IPC to work like a subroutine, blocking atomically, then yes, implementing it as async I/O in the kernel is a mismatch. But what about the reverse case? Are there not examples of programs which would be better suited to queue and receive IPC responses asynchronously? If you make IPC atomic this case is simply not possible, whereas if you start with an asynchronous implementation, you can always optimize a fast path, with a blocking function like MsgSendAndReceive(). Am I wrong?
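
      For what it's worth, that suggestion can be sketched as a thin blocking wrapper over asynchronous primitives. The async_send()/async_wait() calls and types below are invented placeholders for illustration, not any real kernel API; the point is only that a synchronous, subroutine-call shape can be layered on top of an async one, although doing it as two operations rather than one kernel primitive still forfeits the direct scheduling hand-off the parent describes.

          /* Hypothetical sketch: a blocking "call" built from async primitives.
           * async_send(), async_wait(), and the msg/handle types are invented
           * for illustration; they do not correspond to a real kernel API. */

          typedef struct { int dest; const void *buf; unsigned len; } msg_t;
          typedef int handle_t;              /* token for a pending operation */

          extern handle_t async_send(const msg_t *m);        /* queue, return */
          extern int      async_wait(handle_t h,             /* block for a   */
                                     void *reply,            /* matching reply*/
                                     unsigned reply_len);

          /* The MsgSendAndReceive()-style fast path proposed above: purely a
           * convenience wrapper. The kernel still sees two separate operations,
           * so the sender keeps running after the send and only blocks in the
           * second call -- the scheduling cost described above remains. */
          int msg_send_and_receive(const msg_t *m, void *reply, unsigned reply_len)
          {
              handle_t h = async_send(m);
              if (h < 0)
                  return -1;
              return async_wait(h, reply, reply_len);
          }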
      • Furthermore, for true portability and flexibility, isn't making an assumption about the implementation dangerous (a la Deutsch "Fallacies of Distributed Computing", #1,2,5,7). What if we want to implement a system call in some sort of cluster environment, with potentially remote, distinct IO resource (such as disk)? Won't the everything-is-a-subroutine assumption fuck us over then?
    • The BeOS microkernel worked, and worked well. The company just wasn't financially viable. VM and QNX each have their niches (mainframes and embedded) where they didn't have to fight off MS. BeOS, not having such a niche, tried to get installed on some manufacturers' PCs, but MS threatened to raise the price of Windows to those manufacturers if they did so, so Be was left to wither.
    • by Anonymous Coward on Saturday March 19, 2005 @03:07PM (#11986129)
      The design you describe can lead to deadlock (what if the receiver decides never to return control to the sender?), which is why QNX is suitable for its particular market; these problems can be mitigated to some extent by good design, but make no mistake, QNX is squarely targeted at the embedded market, where you have near-total control over the environment.
    • by Anonymous Coward
      >If the receiving process is waiting (blocked at a "MsgReceive"), control transfers immediately to the receiving process. There's no ambiguity over whether the message is complete, as there is with pipe/socket type IPC. There's no trip through the scheduler looking for a process to run. And, most important, there's no loss of scheduling quantum.

      You have just described _exactly_ how L4 IPC works. I think you are saying important things, but I cannot make anything out of your claim that L4 "gets it wrong"
  • It makes me smile that the next big step for the Hurd seems to be a move away from a (technically inferior) GPL'ed microkernel (GNU Mach) to a BSD-licensed one (L4Ka::Pistachio). But to see the two great visions of free software (BSD freedom and GPL freedom), which have occasionally been perceived as two completely separate camps, feeding one another in this way is quite a lovely thing to behold. Don't know how RMS takes the Hurd's steps away from GNU Mach or the GPLed OSKit Mach though - it might mean that
  • Hurd, huh? Hey, how's that project going, anyway?
  • My take (Score:5, Informative)

    by mfterman ( 2719 ) on Saturday March 19, 2005 @02:14PM (#11985778)
    There's a spectrum of issues in the computing world, ranging from computer science, where you don't know what algorithms will do what you want and have to invent them, to software engineering, which is building something that is extremely well known and understood.

    Open source projects are on the whole better, or at least achieve success more quickly, on software engineering problems than computer science problems. Writing a word processor is a software engineering problem. The concepts are well understood and well documented all over the place; the trick is just building a solid and reliable instance of them. Because it's such a simple and well known problem, you can bring in dozens to hundreds of programmers to work on it and there are few debates about how to do things; it's usually more an issue of prioritizing feature lists and bug fixes.

    Linux is a software engineering project in the classic sense. Linus and others have been rebuilding Unix, which is extremely well understood. Everyone more or less understood and agreed on how Unix systems work. It's not a question of how to build a memory management algorithm with acceptable performance, but rather which existing algorithm has the best performance. In general, Linux tends to spend more time debating between existing solutions than trying to find a solution to a problem. The reason that Linux has come so far so fast is that it's treading on extremely familiar ground and isn't really trying to do anything new at the computer science level.

    Hurd is more in the area of computer science. They don't have thirty years of precedent going in their favor. While there has been plenty of work on microkernels, there's far less work there than for Unix. The Hurd people are trying to make something new, rather than reinventing something familiar (which is a much easier task by far). So the fact that the Hurd people are moving more slowly is more an indication of the difficulty of the task.

    Now the question is, why work on Hurd at all? Well, the answer to that is the answer to the question of whether there are things you can do with a microkernel that you cannot do with a regular kernel, and whether those things are worth doing. It is entirely possible, on a security level, for Linux to hit a dead end, running into the limits of a monolithic kernel architecture. It may be that if there is to be any progress past a certain point, a rearchitecture is needed to switch to a microkernel design. I'm not saying this is the case, but I am saying that it is not an impossibility.

    If that is the case, then Linus and others will need to do a major rearchitecture in a new release, or they need to switch over to an existing microkernel project that they feel is acceptable to them. Even if the Linux people decide to do their own microkernel architecture from scratch in that case, they will almost certainly be going over the entire history and the results of the GNU/Hurd project with a fine tooth comb for data on how to build a viable microkernel operating system.

    To say that microkernels are slower than monolithic kernels is on some level unimportant. CPU speed gains have slowed somewhat, but we're still improving the speed of systems. The question becomes: are you willing to trade a performance hit for security? Would you rather have a fast system that is more vulnerable to nasty software, or a slower but more secure system? So the Hurd people are focusing on security, since that is potentially the greatest strength of microkernels over monolithic kernels.

    So look at the Hurd project, like a bunch of other projects, as a research project. And yes, it's taking them a heck of a long time to get results, but they're not in any particular hurry. It's like Linux versus Windows: Linux doesn't need to "win" next year. It just keeps on chugging and eventually grinds away at the opposition. Hurd just keeps getting better every year, and maybe someday it will clearly surpass Linux in a few areas. Probably not anytime soon, but this isn't a race.

    So no, Hurd isn't a waste of time. It's a research project and one that may be of significant importance to Linux down the road.
