Hurd/L4 Developer Marcus Brinkmann Interviewed
wikinerd writes "A few years ago, when the GNU OS was almost complete, the kernel was the last missing piece, and most distributors combined GNU with the Linux kernel. But the GNU developers continued their efforts and unveiled the Hurd in the 1990s; it is currently a functioning prototype. After the Mach microkernel was deemed insufficient, some developers decided to start a new project porting the Hurd to the more advanced L4 microkernel using cutting-edge operating system design, thus creating Hurd/L4. Last February one of the main developers, Marcus Brinkmann, completed the process initialization code and showed a screenshot of the first program executed on Hurd/L4, saying 'The dinner is prepared!' Now he has granted an interview about Hurd/L4, explaining the advantages of microkernels, the Hurd/L4 architecture, the project's goals, and how he started the Debian port of the Hurd."
DNF (Score:3, Funny)
Re:DNF (Score:3, Funny)
I can't wait!
Re:DNF (Score:3, Insightful)
Re:DNF (Score:2)
The continued splintering of OSS (Score:4, Insightful)
That's the difference between OSS and proprietary companies. They can focus like a laser on what they want to develop and leave a lot of the infrastructural heavy lifting to those hippy anarchists in the open source scene.
It's win-win for them, because they get the benefit of a lot of what these groups produce, and often can improve upon it (BSD --> OSX). It's like having an unpaid R&D dept. working for you 24/7.
Re:The continued splintering of OSS (Score:2, Interesting)
Re:The continued splintering of OSS (Score:4, Insightful)
You just don't see this in the proprietary companies. Sure, they compete with each other, but within the companies themselves there's much tighter integration.
I think this has the tendency to make OSS sort of the breeding ground for the real innovations in tech, but largely unable to provide the sort of polish that proprietary companies can. I also think it's a large part of what keeps projects like Linux, Unix, etc. from really breaking through in areas like the desktop.
It's not necessarily a bad thing, but I think to dismiss it is a mistake.
Re:The continued splintering of OSS (Score:4, Insightful)
No infighting in companies? (Score:3, Insightful)
Who do you work for and where can I send my resume?
Re:The continued splintering of OSS (Score:4, Interesting)
This seems to be a popular theory here on Slashdot: the idea that the reason Linux hasn't broken through is that it lacks polish.
History shows us that a more polished desktop does not always win. The Mac is a perfect example of this. During the 80s the Mac had a slick, polished interface while the PC had an ugly DOS command line. The PC won handily. Why is that?
I suspect because it was more "open". You could stick cards in it, you could expand it, you could hack it, and most importantly it had Lotus 1-2-3 running on it, so the business community embraced it.
For the exact same reasons I predict Linux will continue to advance onto the desktop. The tipping point will come when businesses see competitive advantage in it. Once that happens I expect explosive growth in Linux adoption and, yes, world domination.
Price! (Score:4, Insightful)
This is a classic logical fallacy: you pick and choose the factors that support your argument. The PC didn't win because it was more "open." It won because it was cheaper than the $2000+ Macintosh, fueled by commodity PC clones (remember the phrase "PC-compatible"?) that competed with each other and brought prices down each year.
Re:The continued splintering of OSS (Score:5, Informative)
Re:The continued splintering of OSS (Score:2)
I don't know FrontPage or Dreamweaver well enough to say whether they couldn't be used on the same website, but using the Hurd necessarily precludes using Linux on the same machine (and vice versa)... such mutual exclusivity would by itself indicate an overlap of functionality, and it stands to reason anyway, since they're both operating systems.
Re:Focus (Score:3, Interesting)
Actually, the free developers are the ones who focus like a laser on what they want to develop. At a Big Dumb Company(tm) the developers may not focus as sharply as the "decision makers". Here, the decisions are made by the developers, and hence there is better focus on the goals. If it were universally agreed what the goal should be, everyone would focus. Since it's not a given, people will latch onto things others may think are unnecessary. My own e
Time to read about software freedom. (Score:2)
Except that GNU was started for software freedom years before the open source movement existed. GNU was not about pushing anything "open source". Making a fork of GNU into a proprietary OS would definitely not be considered any kind of advantage, because such a program would deny software freedom to its users. Nor is the focus of the free software movement an issue of perfecting a development model aimed at rallying unpaid labor to work on one's program.
I suggest learning more about the difference between free software and open source [gnu.org].
Re:MirrorDot Mirror of Interview (Score:5, Informative)
Damn, wrong button!
Question for Marcus (Score:2, Funny)
How many people do you think will submit questions thinking that this is a Slashdot interview?
GNU (Score:5, Interesting)
Is it just loss of interest after Linux became popular?
Re:GNU (Score:3, Insightful)
That, and I also seem to recall that they faced a choice: write a quick and dirty monolithic kernel, or write a much more complex (but theoretically more advanced, secure, robust, etc.) set of servers running on top of a microkernel. Linux came onto the scene and provided the former, so the GNU kernel folks decided to work on the latter. The latter, of course, being HURD.
That's my simplistic take on things from what I've read in the past, anyway.
Re:GNU (Score:2)
Re:GNU (Score:2, Interesting)
Re:GNU (Score:2)
Suppose someone gets in a raft and starts heading to their friend's house. The friend happens to be downstream. They get there, and the friend says "I see you decided to come downstream." But he didn't even think about that; he just wanted to get something done. He was successful because he happened to go the way that was possible with the resources he had.
Linus might have just wanted to make a down and dirty kernel to hack on. But it was successful b
Re:GNU (Score:3, Insightful)
There is no shortage of people who would disagree with you about microkernels. You also neglect to mention that they are incredibly slow.
Re:GNU (Score:4, Informative)
While it is true that microkernels are slower than monolithic kernels, they have many advantages: they can be more stable and more secure, addressing two problems that plague current operating systems, including Linux.
Regarding performance, everyone likes to take Mach as an example of how slow microkernels are. But many microkernel bashers seem to forget QNX, which has never been accused of being terribly slow. It is one of the best (if not the best) hard real-time operating systems out there. OK, it is proprietary, but it is proof that microkernel-based operating systems can be done right.
Re:GNU (Score:5, Informative)
Oh boy, is this comment going down to -1 or what.
Re:GNU (Score:4, Insightful)
The Linux kernel works because it fills a need, and because it fills a need, lots of people will want to work on it.
The Hurd is more researchy and hence doesn't fulfill anyone's needs exactly, and yes, to the extent that Linux does fit them, there are fewer people working on the Hurd.
In my experience, a program that fills 90% of the need for 10% of the effort will nearly always win out, even if the extra 10% costs another 90%. The Linux kernel was a quick hack (and I don't mean that in a bad way), whereas Hurd was trying for perfection...
Re:GNU (Score:2, Redundant)
GNU made most of the core programs that Linux normally uses, and they are universally considered excellent. So why is it so hard for them to make a kernel?
Most of the team left in the mid-90s to work on Duke Nukem Forever.
Re:GNU (Score:2, Interesting)
In my universe [bell-labs.com] they are considered junk; that, BTW, is the same "universe" as the inventors of Unix and C (and many other things).
RMS and GNU never understood Unix and the Unix philosophy, and it shows; they can't code in C either. Take a look at the source of gnu-core-utils some day... I did, and I'm still recovering from the trauma. And gcc and the other GNU "tools" are no better.
The only origin
Re:GNU (Score:3, Insightful)
take a look at the source of gnu-core-utils some day... I did it, and I'm still recovering from the trauma. And gcc and other gnu "tools" are not better.
Uhm... you're saying that the GNU tools and projects should be assessed by their coding style? Who cares what the "unix philosophy" of coding style is? The tools should obviously be assessed by how they work and how well they mimic the original tools - and as far as I know, they are "up there" with the commercial unices. Gcc is far better than any compiler from the o
Re:GNU (Score:2)
Anything the gnu group can do to ease the learning curve can only help them in the long term.
Re:GNU (Score:3, Interesting)
Moll.
Re:GNU (Score:3, Insightful)
The one bit of GNU code that really bugs me: Emacs
I have to admit, I found this to be an odd way of putting things.
Now, you may not be a fan of emacs, but first stating that you shouldn't use GNU code to get things done, and then holding up Emacs as an example, is ignorant at best. I am not personally a fan of Emacs, having discovered vi first, but I still can appreciate the power, flexibility, and usefulness of it.
Re:GNU (Score:4, Interesting)
Now, you may not be a fan of emacs, but first stating that you shouldn't use GNU code to get things done, and then holding up Emacs as an example, is ignorant at best. I am not personally a fan of Emacs, having discovered vi first, but I still can appreciate the power, flexibility, and usefulness of it.
I can't appreciate it and yes I've used it. An editor that requires that you spend your time bouncing on the meta key fails one of the more important requirements of a text editor - that the editing keys need to be handily placed.
A friend of mine is a big emacs fan. He uses it for text editing, e-mail, news reading, and coding.
With all due respect to your genius friend, there are better programs for editing, reading news, and coding. Ever heard of the unix principle of doing one thing and doing it well? Well, GNU code usually does more than one thing... and it usually does them all badly. Emacs being a case in point.
For example, comparing the Bourne Shell to bash is like comparing the old, original vi, to vim.
I was comparing the system shell of BSD and GNU/Linux. The advantage of sh over bash is not marginal when the system is spawning lots of shells, i.e. on boot-up. When I ran Linux the system took 3 times as long to come up compared to FreeBSD running the same softs, and 2.4 was abysmal performance-wise.
But they both [vim/bash] also offer an immensely great level of functionality.
As far as user shells go, bash is so far behind ksh in function it's not funny. I've already mentioned that bash is shot through with longstanding and seemingly intractable bugs. Which reminds me, the GNU readline library stinks too, which is one reason why most developers choose to implement their own.
If you really want to stay stuck 10+ years in the past, with ancient (but possibly less buggy) software, that's cool. You have that choice.
Me, though... I'm looking over that hill there, wondering what we're going to see next.
I looked over the hill some 4-5 years ago, saw that to become a GNU developer you have to have the right religious attitude irrespective of whether you can code or not.
That was when I dumped GNU/Linux as even then it had become a ghetto for those with some sort of OS religious axe to grind. Now it's beyond a joke (21 kernel vulnerabilities in 3 months!) and the fact that you and others continue to argue that it is good is frankly bizarre.
Yes, I'll be modded down again for making valid criticisms of GNU/Linux and the rotten semi-commercial/semi-free Frankenstein's monster it has become.
It's a shame, I was largely introduced to unix through Linux and have a soft spot for it. There are times when you have to call a spade a spade even though the penguinista apologistas will inevitably come crawling out of the woodwork and inform me that black is actually white and what a troll I am. Sigh.
Re:GNU (Score:4, Interesting)
That's a very subjective requirement. Personally, for my own use, I agree with it. That's why I use vi. But the fact that it fails one of your requirements doesn't mean that everyone else will view it the same.
With all due respect to your genius friend there are better programs for editing, reading news and coding. Ever heard of the unix principle of doing one thing and doing it well? Well GNU code usually does more than one thing...and it usually does them all badly. Emacs being a case in point.
There are better programs for you to do those things. He has taken advantage of the strengths of Emacs to customize and enhance it until it is the best program available for him to do those activities.
Regarding the Unix principle and Emacs... you really have to blame the Emacs users for that. Emacs started out as a fairly small and lightweight editor. At the beginning, it was basically just a very basic editor with a built-in lisp interpreter for extensibility. The majority of Emacs functionality has been implemented at a higher level, often by users, via elisp.
I was comparing the system shell of BSD and GNU/Linux. The advantage of sh over bash is not marginal when the system is spawning lots of shells, i.e. on boot-up. When I ran Linux the system took 3 times as long to come up compared to FreeBSD running the same softs, and 2.4 was abysmal performance-wise.
Yes, and it still isn't a valid comparison. You're comparing three BSD distributions, which are, despite their kernel-level differences (and a few others), very homogeneous, with all of the Linux distributions out there, which number in the hundreds and are often very specialized.
To give one quick example of where it breaks down, I'll point to Debian, my Linux distribution of choice. Debian includes bash, true, but it also includes dash, formerly ash, which is the sh from NetBSD. It is trivial to set dash as /bin/sh.
As far as user shells go, bash is so far behind ksh in function it's not funny. I've already mentioned that bash is shot through with longstanding and seemingly intractable bugs. Which reminds me, the GNU readline library stinks too, which is one reason why most developers choose to implement their own.
To each their own. I've tried ksh before, and been forced to use it on multiple occasions, and I didn't particularly care for it. Don't get me wrong, it's better than tcsh, and much better than csh or the Bourne shell, but it is lacking a number of features that bash supports, and I found it inconvenient and annoying.
I've actually used the GNU readline library before, too. It wasn't my favorite library to develop with, but I've used worse. And it generally gets the job done.
I looked over the hill some 4-5 years ago, saw that to become a GNU developer you have to have the right religious attitude irrespective of whether you can code or not.
That's a steaming pile of horse crap. You can do whatever the hell you want. If you want to sit at home on a Linux machine and develop proprietary software, you're free to do that. No one is going to break down your door and beat you with a penguin stick for it.
That was when I dumped GNU/Linux as even then it had become a ghetto for those with some sort of OS religious axe to grind. Now it's beyond a joke (21 kernel vulnerabilities in 3 months!) and the fact that you and others continue to argue that it is good is frankly bizarre.
Ah, so you choose what operating system you use by how much you like other people using it?
Re:You are missing the point. (Score:3, Insightful)
GNU's not unix, it's an inferior work-alike that serves no purpose.
Serves no purpose? That's funny. Currently, it serves a lot of my needs. I believe that the purpose of the tools is to serve users' needs, but I might be mistaken. Enlighten me.
Re:GNU (Score:3, Interesting)
to be fair, GNU and Linux have made some very significant, very positive contributions
Re:GNU (Score:2, Insightful)
Amen to that.
Happily the science of operating systems isn't very interesting anymore so this is no great loss. Political and economic reasons are of much greater importance, and this is where Linux shines.
uh... wrong (Score:5, Insightful)
Re:GNU (Score:3, Insightful)
If that is not enough, complain to Lucent [mailto] with a clear explanation of why the LPL is not good enough for you.
Re:GNU (Score:2)
Plan 9 is not advertised in thousands of magazines, websites, movies, tv shows, etc.
Plan 9 is not UNIX and it's not very UNIX-like either. If you're familiar with UNIX concepts, you'll probably have to ditch most of them to figure out how Plan 9 works.
The Web is not Plan 9 compatible.
Plan 9 isn't made for everyday desktop use, ``surfing the net'' or really most things that you
Peopl
Re:GNU (Score:2)
Re:GNU (Score:2)
Re:What GNU tools do you use? (Score:2)
2) GNU utils are broken? WTF? Whatever you say, they are certainly not very buggy.
Linus Torvalds (Score:4, Funny)
This was in 1991...
Re:Linus Torvalds (Score:3, Informative)
AIX, IBM's Unix, does the same thing with its filesystem. All files are memory mapped and I/O is done by paging in or out.
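The same idea is visible to applications on any modern Unix through mmap(2). Here is a minimal sketch of reading a file through the pager instead of read() (not AIX-specific, error handling omitted):

    /* Sketch: file I/O through the VM system -- touching the mapping
     * page-faults the file in; no read() calls. Error handling omitted. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    long sum_file(const char *path)
    {
        int fd = open(path, O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];          /* each touch may fault a page in */

        munmap(p, st.st_size);
        close(fd);
        return sum;
    }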
This is one of the reasons the AIX JFS filesystem was tied tightly to the Power MMU hardware. When AIX was ported to IA64 this required the adoption of the new JFS2 (OS/2 heritage) which is the preferred filesystem on AIX now. This OS/2 derived JFS is the one available in Linux.
Markus
Missing piece? (Score:5, Funny)
Re:Missing piece? (Score:5, Funny)
The kernel is the last missing piece? What's the first piece, an integrated browser?
Yes, they named it emacs
Re:Missing piece? (Score:5, Funny)
Mirror (Score:4, Informative)
http://dilse.net/hurd.html [dilse.net]
My opinion on this whole thing... (Score:2)
Re:My opinion on this whole thing... (Score:2)
Re:My opinion on this whole thing... (Score:2)
This is not true, of course. BSD was in use when Linux was in its infancy, and thus had more drivers. In general, *BSD is not far behind Linux when it comes to supporting new hardware with open source drivers, and in some cases is ahead. Lagging behind is often due to hardware manufacturers refusing [undeadly.org] to give documentation.
That said, several hardware manufacturers offer binary-only Linux drivers for their hardware. A well known example
Re:My opinion on this whole thing... (Score:2)
Linux became more popular than the HURD because it was ready and it worked. Linux became more popular than BSD because of a combination of factors, including the distaste of some people for the BSD licence (the commercial-forks allowance in particular) and uncertainty about its copyright status while the AT&T suit was still pending.
Re:My opinion on this whole thing... (Score:2)
but its current popularity has pretty much killed any chance HURD had
I really doubt it. If Linux hadn't been such a success in '91, my guess is that the GNU project as a whole would have been dead. HURD might have been in a more "finished" state, but it would still have fewer users than it will get when it eventually is released.
I think F/OSS would be marginal without linux. Now it's mainstream. That benefits all F/OSS projects.
Re:My opinion on this whole thing... (Score:2)
The license. If I'm going to do work for Apple and Microsoft, they can damn well pay me or at least give me a copy of the end product.
As to HURD: who ever cared?
TWW
Re:My opinion on this whole thing... (Score:3, Informative)
I installed Linux on my machine in late 1991 (I think it was version 0.11). At the time, 386BSD wasn't officially out yet; I think it came out in early-to-mid 1992. Linux development was pretty rapid, especially during the first year, and 386BSD development was pretty slow.
I think this is the real reason why Linux became so popular. By the time NetBSD started getting off the ground, there was already a pretty large Linux user base. If the timing and development pace of 386BSD had been different, things might have turned out differently.
Re:My opinion on this whole thing... (Score:3, Informative)
BSD wasn't ready for 386 at the time, and had the AT&T lawsuit hanging over it. And with Hurd not ready to go now, what makes you think it was ready to go in the early 90s?
Re:My opinion on this whole thing... (Score:2)
Re:My opinion on this whole thing... (Score:2)
Good thing (Score:3, Interesting)
Consider the alternative if the GNU project hadn't been started by RMS... even Linux wouldn't have been around...
Re:Good thing-Linus, I'm not your father. (Score:2)
Mirokernel Linux? (Score:3, Interesting)
Re:Mirokernel Linux? (Score:3, Informative)
It is explicitly not a microkernel and they don't plan to make it one, but it has some microkernel-like properties. For example, programs do not invoke system calls directly; they pass through a translation layer in userspace. This allows a bunch of very cool things that I will not enumerate here because they're on the website.
It's not done yet but they have a working release.
Re:Mi[c]rokernel Linux? (Score:3, Interesting)
Special Request to the Tech Press (Score:3, Funny)
the killer feature of HURD (Score:3, Interesting)
>>it should become possible to replace a running kernel
in other words NEVER REBOOT AGAIN!
In practice this is still hard to accomplish, but at least people are working on it. And yes, implementing this takes time.
slashdot is missing the point... (Score:5, Insightful)
My own personal experience: I worked on an 8 month student project that in many ways failed in the end. But I would never consider that a waste of time. I learned so much and had a blast doing it.
-bdb
Re:slashdot is missing the point... (Score:5, Interesting)
I think this hits the nail on the head. That's why I have enjoyed tinkering with the Hurd over the years. I currently have a bootable Debian/Hurd partition, and I have recently built the L4/Hurd system up to its current state. I haven't been able to get banner running like Marcus did, but it's not for lack of trying.
Many Slashdotters will say "Why waste your time with Hurd when BSD/Linux/Windows/OSX/etc. already works great and needs more contributors?" Well, it's my time, and if I want to play around with experimental source code then that's what I am going to do.
I already have a nicely working Gentoo Linux system that I use most of the time, and I'm happy with it. However, I am one of those types that wants to always learn, and by following the progress of Mach/Hurd and now L4/Hurd I get to grow up with the operating system and there is a small chance that I will be able to make a useful contribution here or there occasionally.
Hurd isn't trying to sell itself to become a replacement for your current favorite operating system. It is simply a project to create an OS based on advanced and sometimes theoretical computer science ideas.
People like Marcus put a lot of effort into realizing these abstractions in code. Sometimes it doesn't work out and they have to backstep, but progress continues. I have been on the developer's mailing list for years, and honestly I don't understand 90% of what they are talking about but it is pretty interesting nonetheless.
Hurd makes pretty heavy use of the GNU autotools, i.e. "./configure", and a lot of the real benefit of the old Mach-based Debian/Hurd is that upstream sources have been patched so that you can hopefully build them on Hurd just by running configure and make. L4-Hurd is still Hurd, so all the work done is still relevant. When they whip the sheet off of the shiny new engine, the rest of the parts waiting on the shelf are already there.
And they are making good progress. They have it now to the point where a lot of people are working on getting libc to build, and once the kernel and libc are working, that is the keystone that lets all the other pieces of the puzzle come together.
It's totally a research project. There is no agenda other than some people like Marcus thought it was a cool project and decided to fool around with it. I'm the same way, sometimes I get bored with Linux so I put on my Hurd cap and play around with it for a while.
Check this quote (Score:3, Interesting)
Subject: Re: LINUX is obsolete
"I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design"
To Linus Torvalds.
Coral cache, more links (Score:4, Informative)
You can try Hurd/L4 right now by burning a bootable CD (ISO) using Gnuppix [gnuppix.org].
You can read the interview through its Coral cache [nyud.net] or its MirrorDot cache [mirrordot.org] . There is also a Google cache [google.com].
There is also a MirrorDot-cached PDF version of the interview that can be downloaded by clicking here [mirrordot.org].
Thanks
The microkernels that work - VM and QNX (Score:5, Interesting)
VM is really a hypervisor, or "virtual machine monitor". The abstraction it offers to the application looks like the bare machine. So you have to run another OS under VM to get useful services.
The PC analog of VM is VMware. VMware is much bigger and uglier than VM because x86 hardware doesn't virtualize properly, and horrible hacks, including code scanning and patching, have to be employed to make VMware work at all. IBM mainframe hardware has "channels", which support protected-mode I/O. So drivers in user space work quite well.
QNX [qnx.com] is a widely used embedded operating system. It's most commonly run on small machines like the ARM, but it works quite nicely on x86. You can run QNX on a desktop; it runs Firefox and all the GNU command-line tools.
QNX got interprocess communication right. Most academic microkernels, including Mach and L4, get it wrong. The key concept can be summed up in one phrase - "What you want is a subroutine call. What the OS usually gives you is an I/O operation". QNX has an interprocess communication primitive, "MsgSend", which works like a subroutine call - you pass a structure in, wait for the other process to respond, and you get another structure back. This makes interprocess communication not only convenient, but fast.
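To make that concrete, here is a rough sketch of the QNX send/receive/reply pattern in C, based on the Neutrino API as I remember it (the ChannelCreate/ConnectAttach plumbing is assumed), so treat the details as illustrative rather than authoritative:

    /* Sketch of QNX-style synchronous message passing (Neutrino API,
     * from memory -- illustrative only). No error handling. */
    #include <sys/neutrino.h>

    struct request  { int op; int arg; };
    struct response { int result; };

    /* Server: block in MsgReceive, do the work, MsgReply. */
    void serve(int chid)                  /* chid from ChannelCreate(0) */
    {
        struct request req;
        struct response rsp;
        for (;;) {
            /* Blocks until some client sends; rcvid names that client. */
            int rcvid = MsgReceive(chid, &req, sizeof req, NULL);
            rsp.result = req.arg + 1;     /* the "subroutine" body */
            /* Unblocks the client; control can pass straight back. */
            MsgReply(rcvid, 0, &rsp, sizeof rsp);
        }
    }

    /* Client: one MsgSend = send + block + receive reply. */
    int call(int coid)                    /* coid from ConnectAttach() */
    {
        struct request  req = { 0, 41 };
        struct response rsp;
        /* We do not run again until the server has replied --
         * exactly the semantics of a subroutine call. */
        MsgSend(coid, &req, sizeof req, &rsp, sizeof rsp);
        return rsp.result;
    }

The point is that the client's whole round trip is a single kernel call.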
The performance advantage comes because a single call does the necessary send, block, and transfer-control operations. But the issue isn't overhead. It's CPU scheduling. If the receiving process is waiting (blocked at a "MsgReceive"), control transfers immediately to the receiving process. There's no ambiguity over whether the message is complete, as there is with pipe/socket type IPC. There's no trip through the scheduler looking for a process to run. And, most important, there's no loss of scheduling quantum.
This last is subtle, but crucial. It's why interprocess communication on UNIX, Linux, etc. loses responsiveness on heavily loaded systems. The basic trouble with one-way IPC mechanisms, which include those of System V and Mach, is that sending creates an ambiguity about who runs next.
When you send a System V type IPC message (which is what Linux offers), the sender keeps on running and the receiving process is unblocked. Shortly thereafter, the sending process usually does an IPC receive to get a reply back, and finally blocks. This seems reasonable enough. But it kills performance, because it leads to bad scheduling decisions.
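For comparison, the System V version of the same round trip takes two separate kernel calls, sketched below (queue setup and error checking omitted):

    /* Sketch of the same round trip over System V IPC: two separate
     * traps, and the sender keeps its time slice after the first one. */
    #include <sys/msg.h>

    struct rrmsg { long mtype; int payload; };

    int call_over_sysv(int msqid)         /* msqid from msgget() */
    {
        struct rrmsg req = { 1 /* request type */, 41 };
        struct rrmsg rsp;

        /* Trap #1: enqueue the request. The server becomes runnable,
         * but *we* keep running until we block somewhere... */
        msgsnd(msqid, &req, sizeof req.payload, 0);

        /* Trap #2: ...which is here. Who runs next is now a trip
         * through the scheduler, not a direct handoff. */
        msgrcv(msqid, &rsp, sizeof rsp.payload, 2 /* reply type */, 0);
        return rsp.payload;
    }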
The problem is that the sending process keeps going after the send, and runs until it blocks. At this point, the kernel has to take a trip through the scheduler to find which thread to run next. If you're in a compute-bound situation, and there are several ready threads, one of them starts up. Probably not the one that just received a message, because it's just become ready to run and isn't at the head of the queue yet. So each IPC causes the process to lose its turn and go to the back of the queue. So each interprocess operation carries a big latency penalty.
This is the source of the usual observation that "IPC under Linux is slow". It's not that the implementation is slow, it's that the ill-chosen primitives have terrible scheduling properties. This is an absolutely crucial decision in the design of a microkernel. If it's botched, the design never recovers. Mach botched it, and was never able to fix it. L4 isn't that great in this department, either.
Sockets and pipes are even worse, because you not only have the scheduling problem, you have the problem of determining when a message is complete and it's time to transfer control. The best the kernel can do is guess.
QNX is a good system technically. The main problem with QNX, from the small user's perspective, is the company's marketing operation, which is dominated by inside sales people who want to cut big deals.
Re:The microkernels that work - VM and QNX (Score:4, Insightful)
For programs that want IPC to work like a subroutine, blocking atomically, then yes, implementing it as async I/O in the kernel is a mismatch. But what about the reverse case? Are there not any examples of programs which would be better suited to queue and receive IPC responses asynchronously? If you make IPC atomic, this case is simply not possible. Whereas if you start with an asynchronous implementation, you can always optimize a fast path with a blocking function like MsgSendAndReceive(). Am I wrong?
Re:The microkernels that work - VM and QNX (Score:2)
Re:The microkernels that work - VM and QNX (Score:3, Interesting)
Re:The microkernels that work - VM and QNX (Score:4, Interesting)
Re:The microkernels that work - VM and QNX (Score:3, Informative)
You have just described _exactly_ how L4 IPC works. I think you are saying important things, but I cannot make anything out of your claim that L4 "gets it wrong".
GPL and BSD Licence (Score:2)
Oh, that's right! (Score:2)
My take (Score:5, Informative)
Open source projects are, on the whole, better at software engineering problems than computer science problems, or at least achieve success on them more quickly. Writing a word processor is a software engineering problem. It's not as if the concepts aren't well understood and well documented all over the place; the trick is just building a solid and reliable instance of it. Because it's such a simple and well-known problem, you can bring in dozens to hundreds of programmers to work on it and there are few debates about how to do things; it's usually more an issue of prioritizing feature lists and bug fixes.
Linux is a software engineering project in the classic sense. Linus and others have been rebuilding Unix, which is extremely well understood. Everyone more or less understood and agreed on how Unix systems work. It's not a question of how to build a memory management algorithm with acceptable performance, but rather which existing algorithm has the best performance. In general, Linux tends to spend more time debating between existing solutions than trying to find a solution to a new problem. The reason that Linux has come so far so fast is that it's treading on extremely familiar ground and isn't really trying to do anything new at a computer science level.
The Hurd is more in the area of computer science. They don't have thirty years of precedent going in their favor. While there has been plenty of work on microkernels, there's far less of it than there is for Unix. The Hurd people are trying to make something new, rather than reinvent something familiar, which is by far the easier task. So the fact that the Hurd people are moving more slowly is more an indication of the difficulty of the task.
Now the question is, why work on the Hurd at all? Well, the answer to that is the answer to the question of whether there are things you can do with a microkernel that you cannot do with a regular kernel, and whether those things are worth doing. It is entirely possible, on a security level, for Linux to hit a dead end, running into the limits of a monolithic kernel architecture, such that if there is to be any progress past a certain point, a rearchitecture is needed to switch to a microkernel. I'm not saying this is the case, but I am saying that it is not an impossibility.
If that is the case, then Linus and others will need to do a major rearchitecture in a new release, or they need to switch over to an existing microkernel project that they feel is acceptable to them. Even if the Linux people decide to do their own microkernel architecture from scratch in that case, they will almost certainly be going over the entire history and the results of the GNU/Hurd project with a fine tooth comb for data on how to build a viable microkernel operating system.
To say that microkernels are slower than monolithic kernels is on some level unimportant. The growth in CPU speeds has slowed somewhat, but systems are still getting faster. The question becomes: are you willing to trade a performance hit for security? Would you rather have a fast system that is more vulnerable to nasty software, or a slower but more secure system? So the Hurd people are focusing on security, since that is potentially the greatest strength of microkernels over monolithic kernels.
So look at the Hurd project, like a bunch of other projects, as a research project. And yes, it's taking them a heck of a long time to get results, but they're not in any particular hurry. It's like Linux versus Windows: Linux doesn't need to "win" next year. It just keeps on chugging and eventually grinds away at the opposition. The Hurd just keeps getting better every year, and maybe someday it will clearly surpass Linux in a few areas. Probably not anytime soon, but this isn't a race.
So no, Hurd isn't a waste of time. It's a research project and one that may be of significant importance to Linux down the road.
Re:And how long have they been working on this? (Score:5, Funny)
Be fair: let us all know what you do in your spare time so we can sneer at you too.
Re:And how long have they been working on this? (Score:5, Insightful)
Re:And how long have they been working on this? (Score:5, Insightful)
This is why so many projects fail. They try too hard to create a mythical overdesigned piece of software that "works in theory" rather than create something that works and then improve it from there.
Re:And how long have they been working on this? (Score:4, Insightful)
This is why so many projects fail. They try too hard to create a mythical overdesigned piece of software that "works in theory" rather than create something that works and then improve it from there.
Ah, the grand old dispute between academics and business people. Linux wouldn't have existed without some quite theoretical early approaches (Babbage, Turing, von Neumann). After all, who cared about the early computers like Babbage's analytical engine? An abacus would be much faster for all relevant computations!
Without theoretical deep-dives like the HURD, we would never know whether microkernels are a possibility or not, because it hasn't been proved to work yet. The HURD theorists suggest that microkernels are better than monolithic kernels. Let them explore it and make prototypes, and then someone will make a working kernel to play Duke Nukem Forever on.
Remember: a lot of research just discovers uselessness. It is important to know what's useless, so no one has to take that path.
Re:And how long have they been working on this? (Score:2)
No, not really. Unless they have very, very patient funding sources, academic projects can't survive indefinitely this way, either...
Re:And how long have they been working on this? (Score:3, Interesting)
Re:And how long have they been working on this? (Score:2)
Re:And how long have they been working on this? (Score:2)
You are stuck in a very narrow-minded definition of academic. Studies into "fire", for instance, didn't need much funding, and weren't very successful for a long time. Yet they seemed to cope. The same goes for a lot of other academic areas of interest. They don't need any funding; they are driven by pure interest.
Re:And how long have they been working on this? (Score:2, Informative)
Well, unless we looked at the Amiga OS, QNX, or any of a large number of (actually working) microkernels in academia...
Re:And how long have they been working on this? (Score:3, Insightful)
I strongly agree with the OP. Even if HURD never competes with Linux, it's still important as a research project.
Re:And how long have they been working on this? (Score:2)
Like Netscape?
Re:And how long have they been working on this? (Score:4, Interesting)
Hurd is also attempting to solve very real practical problems. Consider a typical UNIX network daemon:
(1) Must be started as root to listen on a privileged port
(2) Upon an incoming connection, must accept and then "drop privileges"
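The classic sequence looks roughly like this in C (a sketch; error handling omitted, and the uid/gid values are placeholders for something like "nobody"):

    /* Sketch of "start as root, bind to a privileged port, then drop
     * privileges". No error handling; uid/gid values are placeholders. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int listen_on_80(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(80);     /* ports < 1024 need root */

        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 128);

        /* Shed root -- group first, then user. Every line of daemon
         * code before this point runs with full privileges. */
        setgid(65534);                        /* e.g. "nogroup" */
        setuid(65534);                        /* e.g. "nobody"  */

        return fd;
    }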
This causes many, MANY problems, and they are very practical. If you took all the remote root security exploits ever in UNIX, and subtracted those that involved a network daemon that needed to run as root to listen on a privileged port, you'd be left with a rather secure system.
I just can't imagine why nobody recognizes a problem like that. You don't inherently need to run Apache/BIND/Sendmail with the privilege to overwrite the boot sector, but people shrug it off, as if to say "Oh well, it's a network daemon, of course it needs to be able to rewrite the boot sector. We'll just hope there are no bugs."
Not only is this a security nightmare (mitigated only by the fact that UNIX is compared to Windows rather than to an ideal), but it also has many performance implications. If you're measuring raw performance of already-written applications, a monolithic kernel will never be worse than a microkernel architecture. However, on a Linux system, a lot of resources are spent jumping through hoops that don't need to be there, particularly for security. Maybe you don't really need to start that new process, and you can do everything you need in the current process. That sounds like a serious performance win to me, not "my kernel avoids that 0.1% performance penalty you have to take on operation X".
For example, what about Apache, CGI, and suexec? You really don't need all that when you could just be getting/releasing authentication tokens and using an Apache module, rather than starting a new process just so you can change privileges.
You can't separate the details of performance characteristics from capabilities. Capabilities may cost overhead, but may reduce algorithmic requirements.
Re:And how long have they been working on this? (Score:4, Insightful)
Almost everyone who has started with a "pure" microkernel design has eventually moved away from it for performance and other reasons.
Many of the problems HURD was trying to address have either become irrelevant, been determined to be non-issues, or been solved. (The same can be said of things like IA64 or SPARC.)
There are still interesting ideas in HURD that deserve research. However if they intend to challenge the enormous momentum behind *BSD and Linux, HURD is going to have to offer some truly astounding functionality / killer app or few people are going to use it.
Re:And how long have they been working on this? (Score:5, Informative)
On the other hand, I guess I'm not the only one of this mind, as it obviously wouldn't have taken 20 years to get to the point where a program can finally run on it if everybody else with development skills didn't also believe it a total waste of their time.
From the interview:
Security and stability are tightly related issues, and they are major motivations for any microkernel based system. However, we feel that security does not need to translate to loss of freedom. With a bit of extra trouble, you can be secure and even increase the freedom of the user. This is what we want to do.
In the Hurd, the operating system is implemented as a set of servers, and each runs in its own address space. Of course there are some essential system services which had better not crash, or the system will reboot immediately as a last attempt to salvage the situation. But for many other services, a crash is not fatal. If a filesystem server crashes (except for the root filesystem), you can just restart it (or it is restarted automatically by the system). Deadlocks require manual interaction, and you will have to kill the hanging server to remove it from the system and release its associated resources.
The Hurd achieves its stability and security through protocols between components that require no mutual trust. So although a user can add their own filesystem to the filesystem hierarchy, and the parent filesystem will redirect accesses through such a mount point to the user's filesystem, there is nothing the user's filesystem can do that can affect the rest of the system in a bad way. The Hurd servers are written to assume the worst of a communication partner, namely that it is malicious; as an implication, you get fault tolerance for free.
Re:And how long have they been working on this? (Score:2)
In what way is Unix stuck in the past? What defect, exactly speaking, threatens to kill it?
Why?
What flaws in Unix does a microkernel fix?
Re:Does anyone else think it's funny... (Score:2, Informative)
But Unix is 30+ years old. (Score:2, Insightful)
Operating systems develop slowly in their core design and philosophy, and that's no bad thing.
Linux literally exploded into existence, but its design is 80% identical to that of the original Unixes, so it enjoyed the immense benefits of inherent architectural stability. Linux knows where it's going, but the horizons surrounding the Hurd are very fuzzy indeed. It will take time.
Re:But Unix is 30+ years old. (Score:3, Insightful)
Unless it remains vaporware the entire time.
Linux knows where it's going, but the horizons surrounding the Hurd are very fuzzy indeed. It will take time.
Until they actually release an OS, all this discussion is pretty much theoretical. The horizons are fuzzy because we can't actually run the damned thing to refine it.
Please, good people, RTFA! (Score:2, Insightful)
Btw, the Hurd will be compatible with a lot of Linux device drivers (I heard all Linux 2.0 device drivers were ported, but I could be mistaken), and IMHO device drivers aren't the main job of kernel developers, but of hardware makers. That is a bit unrealistic at this time, but it should be the case.
Re:Please, good people, RTFA! (Score:3, Funny)
And it always will be ;).
Re:Microkernels DESERVE another look (Score:2)