Hurd/L4 Developer Marcus Brinkmann Interviewed 327
wikinerd writes "A few years ago, when the GNU OS was almost complete, the kernel was the last missing piece, and most distributors combined GNU with the Linux kernel. But the GNU developers continued their efforts and unveiled the Hurd in the 1990s; it is currently a functioning prototype. After the Mach microkernel was deemed insufficient, some developers decided to start a new project porting the Hurd to the more advanced L4 microkernel using cutting-edge operating system design, thus creating Hurd/L4. Last February one of the main developers, Marcus Brinkmann, completed the process initialization code and showed a screenshot of the first program executed on Hurd/L4, saying 'The dinner is prepared!' Now he has granted an interview about Hurd/L4, explaining the advantages of microkernels, the Hurd/L4 architecture, the project's goals, and how he started the Debian port to the Hurd."
Re:The continued splintering of OSS (Score:2, Interesting)
GNU (Score:5, Interesting)
Is it just loss of interest after Linux became popular?
Re:The continued splintering of OSS (Score:1, Interesting)
Rubbish. No proprietary company has as many employees as there are OSS developers, but there are smaller yet still huge ones, such as IBM and Microsoft, where the infighting is, uh, legendary.
No ppp for Hurd? Why not? (Score:0, Interesting)
Zot. Buh-bye Hurd.
Re:Focus (Score:3, Interesting)
Actually, the free developers are the ones who focus like a laser on what they want to develop. At a Big Dumb Company(tm) the developers may not focus as sharply as the "decision makers". Here, the decisions are made by the developers, and hence there is better focus on the goals. If it were universally agreed what the goal should be, everyone would focus. Since it's not a given, people will latch onto things others may think are unnecessary. My own experience shows that most people will not be convinced that an alternative is better until you show them. That means those interested in the Hurd must build it and show the rest of you. Only then should we decide.
Good thing (Score:3, Interesting)
Consider the alternative: if the GNU project hadn't been started by RMS... even Linux wouldn't have been around...
Re:GNU (Score:2, Interesting)
In my universe [bell-labs.com] they are considered junk; that, BTW, is the same "universe" as the inventors of Unix and C (and many other things).
RMS and GNU never understood Unix and the Unix philosophy, and it shows; they can't code in C either. Take a look at the source of GNU coreutils some day... I did, and I'm still recovering from the trauma. And gcc and the other GNU "tools" are no better.
The only original "contribution" GNU made to Unix was info, a documentation system so hideous that even most GNU zealots don't use it.
Re:GNU (Score:2, Interesting)
Mirokernel Linux? (Score:3, Interesting)
Re:GNU (Score:3, Interesting)
to be fair, GNU and Linux have made some very significant, very positive contributions. but with one or two exceptions, they are not in code. GNU and Linux are interesting for sociological/political reasons. they are not scientifically interesting.
HURD does, at least, have some interesting ideas in it. of course, most of them got there because Plan 9 [bell-labs.com] had them, wasn't open source, and RMS wanted access to them. now that it is, just use the real thing [bell-labs.com].
the killer feature of HURD (Score:3, Interesting)
>>it should become possible to replace a running kernel
in other words NEVER REBOOT AGAIN!
in practice this is still hard to accomplish, but at least people are working on it. and yes, implementing this takes time.
Re:GNU (Score:1, Interesting)
Maybe it is the fact that the HURD project tried to explore new ground, whereas most of GNU was copied from existing UNIX, adding a few flags here and there for added functionality.
FOSS development is just not suited for dramatic new ideas. The best new ideas are developed by small teams of wizards, working away from public scrutiny on radical new concepts that break stability and compatibility. FOSS development is antithetical to that, due to its large development teams of average-quality programmers and its general aversion to "rocking the boat" codebase-wise.
Re:slashdot is missing the point... (Score:5, Interesting)
I think this hits the nail on the head. That's why I have enjoyed tinkering with Hurd over the years. I currently have a bootable Debian/Hurd partition, and I have recently built the L4/Hurd system up to its current state. I haven't been able to get banner running like Marcus did, but it's not for lack of trying.
Many Slashdotters will say "Why waste your time with Hurd when BSD/Linux/Windows/OSX/etc. already works great and needs more contributors?" Well, it's my time, and if I want to play around with experimental source code then that's what I am going to do.
I already have a nicely working Gentoo Linux system that I use most of the time, and I'm happy with it. However, I am one of those types who always wants to learn, and by following the progress of Mach/Hurd and now L4/Hurd I get to grow up with the operating system, and there is a small chance that I will be able to make a useful contribution here and there.
Hurd isn't trying to sell itself to become a replacement for your current favorite operating system. It is simply a project to create an OS based on advanced and sometimes theoretical computer science ideas.
People like Marcus put a lot of effort into realizing these abstractions in code. Sometimes it doesn't work out and they have to backstep, but progress continues. I have been on the developer's mailing list for years, and honestly I don't understand 90% of what they are talking about but it is pretty interesting nonetheless.
Hurd makes pretty heavy use of GNU autotools, i.e. "./configure", and a lot of the real benefit of the old Mach-based Debian/Hurd is that upstream sources have been patched so that you can hopefully build them on Hurd just by running configure and make. L4-Hurd is still Hurd, so all the work done is still relevant. When they whip the sheet off the shiny new engine, the rest of the parts are already waiting on the shelf.
And they are making good progress. They are now at the point where a lot of people are working on getting libc to build, and once the kernel and libc are working, that keystone lets all the other pieces of the puzzle come together.
It's totally a research project. There is no agenda other than some people like Marcus thought it was a cool project and decided to fool around with it. I'm the same way, sometimes I get bored with Linux so I put on my Hurd cap and play around with it for a while.
Check this quote (Score:3, Interesting)
Subject: Re: LINUX is obsolete
"I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design"
To Linus Torvalds.
Re:Mi[c]rokernel Linux? (Score:3, Interesting)
Filesystems and the network stack are never going to move to userspace, though; Linus is dead-set against it, for the same reasons he always has been. If you were completely batshit, you could probably convince FUSE to run from initramfs, and get wholly userspace filesystems that way, but FUSE doesn't support any non-network filesystems, AFAIK.
Re:And how long have they been working on this? (Score:3, Interesting)
For those with Non-German Language Disorder, here's a rough translation:
"Dilettantes! Dilettantes! - that is what those who pursue a science or an art out of love and joy in it, per il loro diletto, are contemptuously called by those who do it for the money, because they delight only in the money that can be earned by it. This contempt rests on the base conviction that no one will take a thing up seriously unless driven to it by need, hunger, or some other greed. The public has the same mindset, and hence the same opinion: hence its universal respect for 'people of the craft' and its mistrust of dilettantes. In truth, for the dilettante the thing is the end, while for the man of the craft it is merely a means; and only he who is directly interested in a thing, who occupies himself with it out of love, pursues it with full seriousness, does it con amore. It is from such people, not from paid servants, that the greatest things have always come."
The microkernels that work - VM and QNX (Score:5, Interesting)
VM is really a hypervisor, or "virtual machine monitor". The abstraction it offers to the application looks like the bare machine. So you have to run another OS under VM to get useful services.
The PC analog of VM is VMware. VMware is much bigger and uglier than VM because x86 hardware doesn't virtualize properly, and horrible hacks, including code scanning and patching, have to be employed to make VMware work at all. IBM mainframe hardware has "channels", which support protected-mode I/O. So drivers in user space work quite well.
QNX [qnx.com] is a widely used embedded operating system. It's most commonly run on small machines like the ARM, but it works quite nicely on x86. You can run QNX on a desktop; it runs Firefox and all the GNU command-line tools.
QNX got interprocess communication right. Most academic microkernels, including Mach and L4, get it wrong. The key concept can be summed up in one phrase - "What you want is a subroutine call. What the OS usually gives you is an I/O operation". QNX has an interprocess communication primitive, "MsgSend", which works like a subroutine call - you pass a structure in, wait for the other process to respond, and you get another structure back. This makes interprocess communication not only convenient, but fast.
The performance advantage comes because a single call does the necessary send, block, and switch-to-the-other-process operations. But the issue isn't overhead. It's CPU scheduling. If the receiving process is waiting (blocked at a "MsgReceive"), control transfers immediately to the receiving process. There's no ambiguity over whether the message is complete, as there is with pipe/socket-type IPC. There's no trip through the scheduler looking for a process to run. And, most important, there's no loss of scheduling quantum.
This last is subtle, but crucial. It's why interprocess communication on UNIX, Linux, etc. loses responsiveness on heavily loaded systems. The basic trouble with one-way IPC mechanisms, which include those of System V and Mach, is that sending creates an ambiguity about who runs next.
When you send a System V type IPC message (which is what Linux offers), the sender keeps on running and the receiving process is unblocked. Shortly thereafter, the sending process usually does an IPC receive to get a reply back, and finally blocks. This seems reasonable enough. But it kills performance, because it leads to bad scheduling decisions.
The problem is that the sending process keeps going after the send, and runs until it blocks. At this point, the kernel has to take a trip through the scheduler to find which thread to run next. If you're in a compute-bound situation, and there are several ready threads, one of them starts up. Probably not the one that just received the message, because it just became ready to run and isn't at the head of the queue yet. So each IPC causes the process to lose its turn and go to the back of the queue, and every interprocess operation carries a big latency penalty.
This is the source of the usual observation that "IPC under Linux is slow". It's not that the implementation is slow, it's that the ill-chosen primitives have terrible scheduling properties. This is an absolutely crucial decision in the design of a microkernel. If it's botched, the design never recovers. Mach botched it, and was never able to fix it. L4 isn't that great in this department, either.
Sockets and pipes are even worse, because you not only have the scheduling problem, you have the problem of determining when a message is complete and it's time to transfer control. The best the kernel can do is guess.
QNX is a good system technically. The main problem with QNX, from the small user perspective, is the company's marketing operation, which is dominated by inside sales people who want to cut big
Re:The microkernels that work - VM and QNX (Score:3, Interesting)
Re:The continued splintering of OSS (Score:4, Interesting)
This seems to be a popular theory here on Slashdot: the idea that the reason Linux hasn't broken through is that it lacks polish.
History shows us that a more polished desktop does not always win. The Mac is a perfect example of this. During the '80s the Mac had a slick, polished interface while the PC had an ugly DOS command line. The PC won handily. Why is that?
I suspect because it was more "open". You could stick cards in it, you could expand it, you could hack it, and most importantly it had Lotus 1-2-3 running on it, so the business community embraced it.
For the exact same reasons I predict Linux will continue to advance onto the desktop. The tipping point will come when businesses see competitive advantage in it. Once that happens I expect explosive growth in Linux adoption and, yes, world domination.
Re:GNU (Score:3, Interesting)
Moll.
Re:The microkernels that work - VM and QNX (Score:4, Interesting)
Re:GNU (Score:4, Interesting)
Now, you may not be a fan of emacs, but first stating that you shouldn't use GNU code to get things done, and then holding up Emacs as an example, is ignorant at best. I am not personally a fan of Emacs, having discovered vi first, but I still can appreciate the power, flexibility, and usefulness of it.
I can't appreciate it and yes I've used it. An editor that requires that you spend your time bouncing on the meta key fails one of the more important requirements of a text editor - that the editing keys need to be handily placed.
A friend of mine is a big emacs fan. He uses it for text editing, e-mail, news reading, and coding.
With all due respect to your genius friend there are better programs for editing, reading news and coding. Ever heard of the unix principle of doing one thing and doing it well? Well GNU code usually does more than one thing... and it usually does them all badly. Emacs being a case in point.

For example, comparing the Bourne Shell to bash is like comparing the old, original vi, to vim.
I was comparing the system shell of BSD and GNU/Linux. The advantage of sh over bash is not marginal when the system is spawning lots of shells, i.e. on boot-up. When I ran Linux the system took 3 times as long to come up compared to FreeBSD running the same software, and 2.4 was abysmal performance-wise.

But they both [vim/bash] also offer an immensely great level of functionality.
As far as user shells go, bash is so far behind ksh in function it's not funny. I've already mentioned that bash is shot through with longstanding and seemingly intractable bugs. Which reminds me, the GNU readline library stinks too, which is one reason why most developers choose to implement their own.

If you really want to stay stuck 10+ years in the past, with ancient (but possibly less buggy) software, that's cool. You have that choice.
Me, though... I'm looking over that hill there, wondering what we're going to see next.
I looked over the hill some 4-5 years ago, and saw that to become a GNU developer you have to have the right religious attitude, irrespective of whether you can code.
That was when I dumped GNU/Linux as even then it had become a ghetto for those with some sort of OS religious axe to grind. Now it's beyond a joke (21 kernel vulnerabilities in 3 months!) and the fact that you and others continue to argue that it is good is frankly bizarre.
Yes, I'll be modded down again for making valid criticisms of GNU/Linux and the rotten semi-commercial/semi-free Frankenstein's monster it has become.
It's a shame, I was largely introduced to unix through Linux and have a soft spot for it. There are times when you have to call a spade a spade even though the penguinista apologistas will inevitably come crawling out of the woodwork and inform me that black is actually white and what a troll I am. Sigh.
Re:And how long have they been working on this? (Score:4, Interesting)
Hurd is also attempting to solve very real practical problems. Consider a typical UNIX network daemon:
(1) Must be started as root to listen on a privileged port
(2) Upon an incoming connection, must accept and then "drop privileges"
This causes many, MANY very practical problems. If you took all the remote root security exploits ever in UNIX, and subtracted those that involved a network daemon that needed to run as root to listen on a privileged port, you'd be left with a rather secure system.
I just can't imagine why nobody recognizes a problem like that. You don't inherently need to run Apache/BIND/Sendmail with the privilege to overwrite the boot sector, but people ignore it as if "Oh well, it's a network daemon, of course it needs to be able to rewrite the boot sector. We'll just hope there are no bugs.".
Not only is this a security nightmare (which is only mitigated by the fact that UNIX is compared to windows rather than an ideal), but it's also got many performance implications. If you're measuring raw performance of already-written applications, a monolithic kernel will never be worse than a microkernel architecture. However, on a linux system, a lot of resources are traded in order to jump through hoops that don't need to be there, particularly for security. Maybe you don't really need to start that new process, and you can do everything you need in the current process. Sounds like a serious performance win to me, and not "my kernel avoids that 0.1% performance penalty you have to take on operation X".
For example, what about Apache, CGI and suexec? You really don't need all that when you could just be getting/releasing authentication tokens and using an apache module rather than starting a new process just so you can change privileges.
You can't separate the details of performance characteristics from capabilities. Capabilities may cost overhead, but may reduce algorithmic requirements.
Re:GNU (Score:4, Interesting)
That's a very subjective requirement. Personally, for my own use, I agree with it. That's why I use vi. But the fact that it fails one of your requirements doesn't mean that everyone else will view it the same.
With all due respect to your genius friend there are better programs for editing, reading news and coding. Ever heard of the unix principle of doing one thing and doing it well? Well GNU code usually does more than one thing...and it usually does them all badly. Emacs being a case in point.
There are better programs for you to do those things. He has taken advantage of the strengths of Emacs to customize and enhance it until it is the best program available for him to do those activities.
Regarding the Unix principle and Emacs... you really have to blame the Emacs users for that. Emacs started out as a fairly small and lightweight editor. At the beginning, it was basically just a very basic editor with a built-in Lisp interpreter for extensibility. The majority of Emacs functionality has been implemented at a higher level, often by users, via elisp.
I was comparing the system shell of BSD and GNU/Linux. The advantage of sh over bash is not marginal when the system is spawning lots of shells, i.e. on boot-up. When I ran Linux the system took 3 times as long to come up compared to FreeBSD running the same software, and 2.4 was abysmal performance-wise.
Yes, and it still isn't a valid comparison. You're comparing three BSD distributions, which are, despite their kernel-level differences (and a few others), very homogeneous, with all of the Linux distributions out there, which number in the hundreds and are often very specialized.
To give one quick example of where it breaks down, I'll point to Debian, my Linux distribution of choice. Debian includes bash, true, but it also includes dash, formerly ash, which is the sh from NetBSD. It is trivial to set
As far as user shells go, bash is so far behind ksh in function it's not funny. I've already mentioned that bash is shot through with longstanding and seemingly intractable bugs. Which reminds me, the GNU readline library stinks too, which is one reason why most developers choose to implement their own.
To each their own. I've tried ksh before, and been forced to use it on multiple occasions, and I didn't particularly care for it. Don't get me wrong, it's better than tcsh, and much better than csh or the Bourne shell, but it is lacking a number of features that bash supports, and I found it inconvenient and annoying.
I've actually used the GNU readline library before, too. It wasn't my favorite library to develop with, but I've used worse. And it generally gets the job done.
I looked over the hill some 4-5 years ago, and saw that to become a GNU developer you have to have the right religious attitude, irrespective of whether you can code.
That's a steaming pile of horse crap. You can do whatever the hell you want. If you want to sit at home on a Linux machine and develop proprietary software, you're free to do that. No one is going to break down your door and beat you with a penguin stick for it.
That was when I dumped GNU/Linux as even then it had become a ghetto for those with some sort of OS religious axe to grind. Now it's beyond a joke (21 kernel vulnerabilities in 3 months!) and the fact that you and others continue to argue that it is good is frankly bizarre.
Ah, so you choose what operating system you use by how much you like other people using it?