Hurd/L4 Developer Marcus Brinkmann Interviewed 327
wikinerd writes "A few years ago, when the GNU OS was almost complete, the kernel was the last missing piece, and most distributors combined GNU with the Linux kernel. But the GNU developers continued their efforts and unveiled the Hurd in the 1990s; it is currently a functioning prototype. After the Mach microkernel was deemed insufficient, some developers decided to start a new project porting the Hurd to the more advanced L4 microkernel using cutting-edge operating system design, thus creating Hurd/L4. Last February one of the main developers, Marcus Brinkmann, completed the process initialization code and showed a screenshot of the first program executed on Hurd/L4, saying 'The dinner is prepared!' Now he has granted an interview about Hurd/L4, explaining the advantages of microkernels, the Hurd/L4 architecture, the project's goals, and how he started the Debian port of the Hurd."
Re:MirrorDot Mirror of Interview (Score:5, Informative)
Damn, wrong button!
Mirror (Score:4, Informative)
http://dilse.net/hurd.html [dilse.net]
Re:GNU (Score:5, Informative)
Oh boy, is this comment going down to -1 or what.
Re:Does anyone else think it's funny... (Score:2, Informative)
Re:And how long have they been working on this? (Score:5, Informative)
On the other hand, I guess I'm not the only one of this mind; it obviously wouldn't have taken 20 years to get to the point where a program can finally run on it if everyone else with development skills hadn't also believed it a total waste of their time.
From the interview:
Security and stability are tightly related issues, and they are major motivations for any microkernel based system. However, we feel that security does not need to translate to loss of freedom. With a bit of extra trouble, you can be secure and even increase the freedom of the user. This is what we want to do.
In the Hurd, the operating system is implemented as a set of servers, and each runs in its own address space. Of course there are some essential system services which had better not crash, or the system will reboot immediately as a last attempt to salvage the situation. But for many other services, a crash is not fatal. If a filesystem server crashes (except for the root filesystem), you can just restart it (or it is restarted automatically by the system). Deadlocks require manual intervention: you will have to kill the hanging server to remove it from the system and release its associated resources.
The Hurd achieves its stability and security through protocols between components that require no mutual trust. So although a user can add their own filesystem to the filesystem hierarchy, and the parent filesystem will redirect accesses through such a mount point to the user's filesystem, there is nothing the user's filesystem can do to affect the rest of the system in a bad way. The Hurd servers are written to assume the worst of a communication partner, namely that it is malicious; as a consequence, you get fault tolerance for free.
Re:GNU (Score:1, Informative)
If only Plan 9 were true, DFSG-Free open source and GPL compatible, eh?
Re:And how long have they been working on this? (Score:2, Informative)
Well, unless we looked at the Amiga OS, QNX, or any of a large number of (actually working) microkernels in academia...
Re:My opinion on this whole thing... (Score:3, Informative)
I installed linux on my machine in late 1991 (I think it was version 0.11). At the time, 386bsd wasn't officially out yet; I think it came out in early-to-mid 1992. Linux development was pretty rapid, especially during the first year, and 386bsd development was pretty slow.
I think this is the real reason why linux became so popular. By the time netbsd started getting off the ground, there was already a pretty large linux user base. If the timing and development pace of 386bsd had been different, things may have been different today.
Re:My opinion on this whole thing... (Score:3, Informative)
BSD wasn't ready for 386 at the time, and had the AT&T lawsuit hanging over it. And with Hurd not ready to go now, what makes you think it was ready to go in the early 90s?
Re:Mirokernel Linux? (Score:3, Informative)
It is explicitly not a microkernel and they don't plan to make it one, but it has some microkernel-like properties. For example, programs do not invoke system calls directly; they pass through a translation layer in userspace. This allows a bunch of very cool things that I will not enumerate here because they're on the website.
It's not done yet but they have a working release.
Coral cache, more links (Score:4, Informative)
You can try Hurd/L4 right now by burning a bootable CD (iso) using the Gnuppix [gnuppix.org].
You can read the interview through its Coral cache [nyud.net] or its MirrorDot cache [mirrordot.org]. There is also a Google cache [google.com].
There is also a MirrorDot-cached PDF version of the interview that can be downloaded by clicking here [mirrordot.org].
Thanks
Re:Linus Torvalds (Score:3, Informative)
AIX, IBM's Unix, does the same thing with its filesystem. All files are memory mapped and I/O is done by paging in or out.
This is one of the reasons the AIX JFS filesystem was tied tightly to the Power MMU hardware. When AIX was ported to IA64 this required the adoption of the new JFS2 (OS/2 heritage) which is the preferred filesystem on AIX now. This OS/2 derived JFS is the one available in Linux.
Markus
My take (Score:5, Informative)
Open source projects are on the whole better, or at least achieve success more quickly, on software engineering problems than on computer science problems. Writing a word processor is a software engineering problem: the concepts are well understood and well documented all over the place; the trick is just building a solid and reliable instance. Because it's such a simple and well-known problem, you can bring in dozens to hundreds of programmers to work on it, and there are few debates about how to do things; it's usually more an issue of prioritizing feature lists and bug fixes.
Linux is a software engineering project in the classic sense. Linus and others have been rebuilding Unix, which is extremely well understood. Everyone more or less understood and agreed on how Unix systems work. It's not a question of how to build a memory management algorithm with acceptable performance, but rather which existing algorithm has the best performance. In general, Linux tends to spend more time debating between existing solutions than trying to find a solution to a problem. The reason that Linux has come so far so fast is that it's treading on extremely familiar ground and isn't really trying to do anything new at the computer science level.
Hurd is more in the area of computer science. They don't have thirty years of precedent going in their favor. While there has been plenty of work on microkernels, there's far less of it than there is on Unix. The Hurd people are trying to make something new, rather than reinvent something familiar, which would be a far easier task. So the fact that the Hurd people are moving more slowly is more an indication of the difficulty of the task.
Now the question is, why work on Hurd at all? Well, the answer comes down to whether there are things you can do with a microkernel that you cannot do with a regular kernel, and whether those things are worth doing. It is entirely possible for Linux to hit a dead end on the security front, running into the limits of a monolithic kernel architecture, such that any progress past a certain point requires rearchitecting around a microkernel. I'm not saying this is the case, but I am saying it is not an impossibility.
If that is the case, then Linus and others will need to do a major rearchitecture in a new release, or they need to switch over to an existing microkernel project that they feel is acceptable to them. Even if the Linux people decide to do their own microkernel architecture from scratch in that case, they will almost certainly be going over the entire history and the results of the GNU/Hurd project with a fine tooth comb for data on how to build a viable microkernel operating system.
To say that microkernels are slower than monolithic kernels is on some level unimportant. CPU clock-speed gains have slowed somewhat, but systems are still getting faster. The question becomes whether you are willing to trade a performance hit for security. Would you rather have a fast system that is more vulnerable to nasty software, or a slower but more secure one? So the Hurd people are focusing on security, since that is potentially the greatest strength of microkernels over monolithic kernels.
So look at the Hurd project, like a number of other projects, as a research project. And yes, it's taking them a heck of a long time to get results, but they're not in any particular hurry. It's like Linux versus Windows: Linux doesn't need to "win" next year. It just keeps on chugging and eventually grinds away at the opposition. Hurd just keeps getting better every year, and maybe someday it will clearly surpass Linux in a few areas. Probably not anytime soon, but this isn't a race.
So no, Hurd isn't a waste of time. It's a research project and one that may be of significant importance to Linux down the road.
Re:Genera. (Score:1, Informative)
And it's not a "functional language". It's a multi-paradigm language. Actually, ML and Haskell people scorn lisp because lispers won't commit to the functional programming "one true path". You can do functional programming with type inferencing in lisp with a compiler like CMUCL or SBCL. But you don't _have_ to. And lispers, not being the nazis of the purist FP community, like it that way.
So I suggest you get on over to http://cliki.net/ [cliki.net] and get a clue. k thx bye.
Re:GNU (Score:4, Informative)
While it is true that microkernels are slower than monolithic kernels, they have many advantages: they can be more stable and more secure, two qualities that current operating systems, including Linux, struggle with.
Regarding performance, everyone likes to take Mach as an example of how slow microkernels are. But many microkernel bashers seem to forget QNX, which has never been accused of being terribly slow. It is one of the best (if not the best) hard real-time operating systems out there. OK, it is proprietary, but it is proof that microkernel-based operating systems can be done right.
Re:The microkernels that work - VM and QNX (Score:3, Informative)
You have just described _exactly_ how L4 IPC works. I think you are saying important things, but I cannot make anything of your claim that L4 "gets it wrong". It offers more flexibility, but you can implement the semantics you are asking for _exactly_ within the much more powerful L4 primitives.
I would really like to hear why you think that this is something L4 does wrong, given that it does exactly what you say (and then some).
To be more specific: An L4_IPC system call that specifies the CALL semantics (send followed by a closed receive from the receiver) will, if the receiving thread is in the wait state, lead to an immediate context switch to the receiver, running on the donated time slice of the sender. If you can process the message and return it within that time slice, you will return to the sender immediately, running on what is left of your time slice after the RPC. This path is extremely optimised in L4. (I made some assumptions here, for example that both threads are scheduled on the same CPU. I assume you made similar assumptions).
Now, granted, you can also use the L4 primitives in "the wrong way" and thus lose performance. But that is a different matter. It certainly allows for the semantics you have described (I don't know QNX IPC semantics, so I can not tell if L4 also allows to do all the other things a QNX RPC will do).
What else? Right. The L4 people are still in search of the best semantics for IPC security. It's a difficult problem, and they are looking for the most universal, flexible approach that delivers maximum performance in most if not all usage patterns, not just one specific RPC model. We have to acknowledge that. There are problems, but as far as I can tell, they are not where you claim they are (based on what you wrote in your message).
Marcus
Re:It's offensive. Linux/Hurd would be better. (Score:1, Informative)
gcc development?
In fact, the GNU C library was originally designed and implemented by Roland McGrath for the Hurd project (he was paid for this work by the FSF). I do not know when it was ported to Linux, but as with all GNU tools, portability was always a concern (and Roland showed how portable a C library can be).
Eventually, maintenance was taken over by Ulrich Drepper, and the glibc survived while libc5 is merely a footnote of Linux history today. I don't know the story behind the switch from libc5 to libc6, but certainly Roland got _some_ things right. Linux and the Hurd are still the two architectures which are supported by the GNU C library source base.
It's true - the GNU/Hurd benefits enormously from all the software people write for GNU/Linux, BSD and other systems. And those projects benefit from us in turn, when we port their software and resolve POSIX incompatibilities caused by the small differences in implementation.
So, two lessons from this story: (1) Free software is the big game of sharing. Everybody wins. (2) It's good to know your history and pick your examples carefully.
Thanks,
Marcus