Debian GNU/Hurd 2013 Released
jrepin writes "The GNU Hurd is the GNU project's replacement for the Unix kernel. It is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux). The Debian GNU/Hurd team announces the release of Debian GNU/Hurd 2013. This is a snapshot of Debian 'sid' at the time of the Debian 'wheezy' release (May 2013), so it is mostly based on the same sources. Debian GNU/Hurd is currently available for the i386 architecture with more than 10,000 software packages available (more than 75% of the Debian archive)."
Need Clarity (Score:3, Interesting)
Re:Need Clarity (Score:5, Informative)
What are the benefits of using GNU/Hurd 2013?
There aren't any.
Re:Need Clarity (Score:5, Funny)
To pretend you are still a hippy in the 70's?
Re: (Score:3)
If courts decide GPL2 doesn't cover some loophole and people are abusing Linux in some way that Linus does not like, Linus is screwed. With Hurd, there is a license upgrade path.
Re: (Score:3)
What are the benefits of using GNU/Hurd 2013?
There aren't any.
There are. You can keep a dying dream alive and foster hope that the Linux poseur can be dethroned and GNU WILL LIVE!
Re:Need Clarity (Score:5, Informative)
Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.
Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).
Hurd has been the official kernel of the GNU project, at least conceptually, for years (but then Linux came along and put Hurd on the back burner). Thanks to renewed interest, its development has picked up, and so we now have an actual distribution running on it.
The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids use of such things without disclosure to the user, essentially killing any chance of a binary Nvidia driver ever being supported. Still, most open source stuff can be ported to run on it.
Re:Need Clarity (Score:5, Informative)
Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.
Wheezy is also available for other CPU architectures, e.g. ARM and MIPS. And, as a preview, you can use it with a FreeBSD kernel on i386 and amd64 instead of the normal Linux kernel.
Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).
You're right, in fact it's only i386, not i386 and amd64.
Re: (Score:2)
I was going to say the same thing. Think of Debian GNU/Hurd 2013 as a snapshot of a subset of the whole Debian collection.
Re: (Score:2)
i386 != x86_64. Hurd is 32-bit only, according to the FAQ [gnu.org].
Re: (Score:2)
"Hurd is 32-bit only, according to the FAQ [gnu.org]."
What in the actual fuck. Do they think it's still the '90s or something?
Hurdles for HURD (Score:5, Interesting)
The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids use of such things without disclosure to the user, essentially killing any chance of a binary Nvidia driver ever being supported. Still, most open source stuff can be ported to run on it.
Yeah, that is what would make it pretty much a non-starter on the desktop, since it's probably GPLv3 - or else its rationale for existing separately from Linux is about as strong as the rationale for East Germany or North Korea existing. Since binary blobs would be banned here, users would be limited to Intel and AMD GPUs, however bad, and then, on top of that, running X and GNOME 3.whatever in fallback mode, or in normal mode if the drivers are ever liberated. In short, the best use of HURD, where it would be almost guaranteed to work right, is in CLI mode, if one is like RMS and lives in an emacs world. In that case, the login script could just as well launch emacs directly, and one would be off doing everything one does there.
Just wondering if the "Libre-" Linux crowd will celebrate this, or release a list of 50 reasons why Debian doesn't pass the purity test and therefore Debian Hurd can't be endorsed. I sure wish gNewSense would come up with a Hurd distro based on this one.
Re:Need Clarity (Score:5, Insightful)
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.
Sorry, but this war has been fought, and your side lost. I'm not using GNU/Linux/x.org/XFCE any more than others are using Windows/CrystalReports/Office/PhotoShop.
Listing every single component of the system is stupid. Linux is the kernel, Linux is what gets recognized as the OS. There are a lot of programs that go into making the system usable - each one need not be referenced in the name.
Re:Need Clarity (Score:5, Interesting)
Excellent point! I'll remember that one. I just cut to the chase and call all my systems debian.
Re:Need Clarity (Score:4, Insightful)
No, to do that would be to do the same silliness as the GNU/Linux crowd. The Android system is a separate entity. I don't harp on ideals. It has become standard to refer to that system as "Android". Insisting on putting "Linux" in the name (or making it the name) is just as silly and foolish as insisting that GNU be in the name of what's become commonly called the "Linux" desktop OS.
Re: (Score:3)
I think you parsed my intent incorrectly. I wasn't stating that Linux is the kernel and THUS the OS, I was stating that Linux is the kernel AND the OS.
Re: (Score:2)
"I was stating that Linux is the kernel AND the OS."
But it isn't.
Linux is only the kernel + drivers.
There is no userland, which is where GNU comes in.
GNU is the OS, Linux is the kernel.
Re: (Score:2)
"I was stating that Linux is the kernel AND the OS."
But it isn't.
Linux is only the kernel + drivers.
There is no userland, which is where GNU comes in.
GNU is the OS, Linux is the kernel.
GNU or BusyBox or Plan 9 or BSD, or some mix thereof
Re: Need Clarity (Score:2, Informative)
You just described how language works. Things are called something because people call them that, regardless of whether or not that is fair or technically correct.
Re: (Score:2)
And some people think cucumbers taste better pickled.
Re: (Score:3)
Re:Need Clarity (Score:5, Insightful)
Listing every single component of the system is stupid. Linux is the kernel, Linux is what gets recognized as the OS. There are a lot of programs that go into making the system usable - each one need not be referenced in the name.
Mmm, but why do you choose the kernel as the piece so important that you name your whole system after it?
I'm forever seeing posts that say "Windows sucks and Linux rules, because in Linux I can do stuff like {insert neat adhoc bash script}". But you could run that script in a MacOS terminal, with Darwin replacing the Linux kernel. You could run it in Cygwin, with the combination of the Windows Kernel and the Cygwin compatibility libraries replacing the Linux kernel.
Linux is great, but it's a thin layer compared to the collection of GNU (mostly) tools that *actually provide the interface people love*.
Re:Need Clarity (Score:4, Insightful)
Because people like an easy, pronounceable, memorable label for things.
It usually goes like:
GNU: Do you spell it out, "gee en you"? Or is it "new" like the wildebeest? And what's with the recursive acronym (GNU's Not Unix)? Why do you geeks pick such awkward names?
Linux: Only two possible pronunciations, both easy.
Given a choice between technically correct and easy, most people will pick easy.
Re: (Score:3)
In 1994, at university, I was in much the same situation (except it was only Sun -- we didn't have SGI boxes, and I couldn't make head nor tail of the solitary NeXT box).
I cut my teeth on SunOS. Then I got a 486 and ran Slackware on it in my dorm room. I found that bash was better than csh (which our admins had made the default shell on SunOS). I found that GNU date was better than SunOS date.
Then I found that our admins had a /usr/gnu/bin NFS mount for the Sun boxes, which we just had to put in our paths t
Re:Need Clarity (Score:4, Insightful)
But today most users would be interfacing with GNOME, or KDE, or Unity, etc., and are unlikely to touch the GNU tools - i.e., they do useful stuff in the background, just like the kernel does...
Re:Need Clarity (Score:5, Insightful)
One could argue that the OS is Debian (or Fedora or Ubuntu). All of which use the Linux kernel, and the GNU tools.
Re: (Score:3)
Y'all are posting in a troll thread.
Hint: Google parent's post.
Re: (Score:2)
If you were really following your logic, you'd call the operating system Debian or GNU. Just as Mac OS X isn't called "Mach" or MS Windows isn't "NT kernel".
Re:Need Clarity (Score:4, Insightful)
Sorry, but this war has been fought, and your side lost. I'm not using GNU/Linux/x.org/XFCE any more than others are using Windows/CrystalReports/Office/PhotoShop.
Actually, people *do* typically refer to their computer software stack at a level appropriate for the task being described. If someone asks, "what did you photoshop that picture with," do you say "Mach microkernel"? --- No, you describe what you're using at a level appropriate to the activity: you might say "Gimp, on Ubuntu." Thus, if your work consists of using GNU utilities and applications, or writing programs linking against GNU libraries (and compiling them with a GNU compiler) --- it's perfectly reasonable to say you're using GNU on Linux (just like someone might say "Office on Windows" to describe their computer work environment, instead of saying "I write company newsletters using a Core i5-3350p").
Re: (Score:2)
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.
Sorry, but this war has been fought, and your side lost. I'm not using GNU/Linux/x.org/XFCE ...
Listing every single component of the system is stupid. Linux is the kernel, Linux is what gets recognized as the OS. There are a lot of programs that go into making the system usable - each one need not be referenced in the name.
Although, there is a LOT more GNU in a "Linux" system than Linux - which was also built using GNU utilities...
Re: (Score:2)
Linux is the kernel, Linux is what gets recognized as the OS.
False. Everybody knows that it's called GNU/Linux; "Linux" is just an everyday abbreviation to make life easier, nothing more.
Credit where it's due. Or, try to run your kernel without the GNU utilities.
Re: (Score:2)
More to the point, everyone calls it Linux, and while there is actually something called Linux in there, they're not going to call it GNU/Linux. It's annoying enough to write the additional characters in text, but even more cumbersome to say. And all for the effect of either someone thinking you're pedantic, or an uninformed audience wondering what the hell you are talking about.
Yeah, all the tools are GNU tools, but honestly, no one has forgotten that. We all know who we need to contribute to in or
Re:Need Clarity (Score:4, Insightful)
The war is still ongoing, and it will be as long as we still have to use closed software. You are too old to fight, that's all. Calling it GNU/Linux is simply a way to give credit to the people who started the whole Free Software movement. Without GNU, there would be no Linux.
Most people recognize that a distribution is the sum of its parts (many of which have nothing to do with GNU or the FSF) and therefore don't elevate any particular group above the others and are quite content to refer to the whole lot as Linux. I suspect that the whole GNU/Linux thing is just some underlying resentment that Linux succeeded precisely for the reasons Hurd failed so miserably - because the FSF is big on ideas, not so big on actually bringing them to fruition in a timely and practical fashion.
Re: (Score:3)
By your definition, Microsoft Windows must not be an OS. After all, it can't compile itself, because it doesn't come with a compiler.
Nor is Android; I doubt that it even has a compiler. Even app development for Android is done with a cross compiler on a different system.
Of course, all of those dilemmas are false, because in reality, the definition of "OS" simply does not contain a requirement for self-compilation.
Re:Need Clarity (Score:4, Interesting)
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Both GNU components and the Linux kernel come with a written [gnu.org] permission [gnu.org] to call it whatever you want. I can take all these components, distribute them even verbatim and call them "yomama", and I would be fully compliant with the licenses. The difference is that no one would have a clue what I am talking about.
For this reason it is correct to call it Linux, or Android or Ubuntu or any other name (subject to trademark laws of course). Just use the name most people are familiar with so they know what you are talking about. Calling these systems GNU is merely a courtesy, a form of respect you pay to the GNU project, not a requirement in any way, and not "the right way" but merely your preference.
Re:Need Clarity (Score:5, Insightful)
The GNU project was a project to develop a free OS and tools.
All works developed for the GNU project were released under the GNU license. Numerous other projects were released under the same license as well.
Linux was a project to develop a free drop-in (and superior) replacement for Minix, and although it was released under the GNU license and distributed with GNU tools, it was never actually part of the GNU project, any more than AIX or HP-UX would have become part of the GNU project by replacing their standard tools with GNU equivalents (I personally used an HP-UX system at university which had all of the standard tools replaced with GNU ones, but that wouldn't suddenly change the name of that system to GNU/HP-UX).
The notion that a Linux distribution would not be usable without the GNU tools, and that the GNU prefix should therefore be applied to Linux, ought also to apply to Minix itself. Like Linux, Minix was never part of the GNU project (and was released under a different license), but it was practically unusable out of the box, and most of its users took the freely and readily available source code of the GNU tools and compiled them to run under Minix to create a usable system. Minix, from approximately version 3 onwards, actually started being distributed with the GNU tools to make it more fully functional out of the box, but nobody ever tries to call Minix GNU/Minix.
Linux is Linux. GNU/Linux is just a name that people who were tired of waiting forever for Hurd wanted to call it so they feel like they had some closure.
Re:Need Clarity (Score:4, Insightful)
I agree completely with this, which is why I think that trying to prepend "GNU" onto Linux is a rather foolish idea.
Re: (Score:3)
Linux has a philosophy, though maybe it's more subtle. When it was new it was "this is cool, try it out". Later on it was "this is practical, use it".
You can give to a charity without joining a church or movement; similarly you should be able to write, contribute to, and use free software without adopting a structured philosophy.
Round and Round (Score:5, Funny)
Re: (Score:2)
I run Busybox/Linux, and when Toybox gets done, I'll think about switching to Toybox/Linux. ^_^
Is It Still Just for Nerds? (Score:2)
Most of us want to popularize the OS enough so that it will continue to be developed, and so that a larger selection of end user software will be developed for it. Getting bogged down in a disagreement about whether it should be called "Linux", "GNU/Linux", or by a distribution name, is counter-productive to that objective. It gives the impression that all of this is only for nerds (and that is a reason enough for most people to want nothing to do with it).
When I refer to the OS as "Linux", I am not discoun
Re: (Score:3)
What about those of us using the Linux kernel with a BSD derived or Android userland?
Re: (Score:2)
The Debian devs took a huge number of the packages available and re-built them to work in the Hurd environment, which uses a different kernel.
There are probably no inherent benefits to using Hurd over Linux - and there are certainly many reasons for picking Linux over Hurd, support being just one of them.
If you have a spare VM though then it may be worth installing Hurd just as a learning process.
Re:Need Clarity (Score:5, Informative)
At this stage of Hurd's development, parent is correct. For daily desktop use, Linux is clearly mature enough and Hurd is very probably not.
From the perspective of design, Hurd has some good ideas, as the GNU Web site explains [gnu.org]. My favorite is:
So there are design features of the Hurd that make it attractive to developers. I can foresee the Hurd maturing to the point where embedded device makers would seriously consider it, for example.
Re: (Score:2)
To tell you the truth, I'm using Ubuntu daily and I don't remember ever having the kernel itself crash, which I assume would produce the Linux equivalent of a BSOD. Usually it's just user-space programs that crash.
So if the kernel not crashing is the main selling point of Hurd, I don't see much reason why I personally would use it, because it fixes a nonexistent problem (at least for me).
Re:Need Clarity (Score:5, Insightful)
That's entirely pragmatic of you, and that's fine.
But say you wanted to try out an experimental device driver. In Linux it would be a kernel module. If it went wrong, it could potentially cause a kernel panic and halt your entire system. Or, since it has kernel privileges, it could just quietly spy on some element of your system and phone home with your confidential data without you knowing.
On a microkernel, your experimental device driver would run in separate memory space to other components. If the experimental driver crashes out, the rest of the system keeps going. It can't spy on your other components, because its access is restricted.
It may not address a need *you* have, but it may well be useful to others.
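To make the "kernel module" half of that concrete, here is a minimal sketch of a deliberately broken Linux module, using the standard module macros (don't load anything like this on a machine you care about; the NULL dereference is there purely to illustrate the point). Because the code runs with full kernel privileges, one bad pointer write gives you an oops or panic rather than just a dead process:

/* Hypothetical "experimental driver": loading it immediately
 * dereferences a NULL pointer.  Because the module runs in kernel
 * space, this triggers a kernel oops (and can take the whole
 * system down), not just a crashed process. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init buggy_init(void)
{
    int *p = NULL;
    pr_info("buggy driver loading\n");
    *p = 42;              /* kernel oops right here */
    return 0;
}

static void __exit buggy_exit(void)
{
    pr_info("buggy driver unloading\n");
}

module_init(buggy_init);
module_exit(buggy_exit);
MODULE_LICENSE("GPL");

In a microkernel design the equivalent driver would live in an ordinary user process, and the same bug would stop at that process.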
Re: (Score:2)
On the flip side, having all that stuff in userspace surely means a massive performance hit, does it not?
I seem to recall for instance with various *nixes it is possible to have a single box handling tens of thousands of IPSec requests, because IPSec is handled within the kernel (IIRC), whereas that same box might handle a few hundred OpenVPN sessions simply because it is all userland and context switching absolutely kills performance.
If that is an accurate example of the type of performance hit incurred on
Re: (Score:2)
If you're developing kernel components, having a kernel that crashes cleanly can make development much easier. Being able to shut down your buggy kernel level program and then try again sure beats rebooting after a panic. Even though this isn't directly helpful to users of the system, making the test side of development easier can lead to the program evolving more quickly over time. The Hurd design has been filled with taking the side of various trade-offs that take longer, but are believed to be more po
Re: (Score:2)
That makes plenty of sense until you realize that device drivers that interact with the hardware are far more likely to crash than things like TCP. Hardware often has things like Direct Memory Access (DMA) to and from the device to make access more efficient, and when a hardware driver crashes, a misplaced DMA setting on the hardware can scribble over any memory it wants.
Re: (Score:3)
From the perspective of design, Hurd has some interesting ideas. Unfortunately, they have, for the most part, not turned out to be good ones. Microkernels have failed.
[ citation needed ]
Re: (Score:2)
Hurd has been in "development" for thirty years without ever coming close to moving to production. Aside from that and Minix (which was never intended to be a production system), name me a microkernel that can number its user base in five figures.
Re: (Score:2)
Hurd has been in "development" for thirty years without ever coming close to moving to production. Aside from that and Minix (which was never intended to be a production system), name me a microkernel that can number its user base in five figures.
Good points all, but also all irrelevant to the question. Success/failure are not measured simply by the size of the user base; user-base size tells you about adoption, acceptance and usage - sure, but only those.
I don't know to what extent micro-kernels are used for production in the real world, but Tandem Non-Stop systems used a message-based OS and Cray had UNICOS variants based on Mach and Chorus for use on their YMP and other systems. According to Wikipedia, Symbian (used on Nokia and Vertu smart phones) is a real-time micro-kern
Re:Need Clarity (Score:4, Informative)
http://en.wikipedia.org/wiki/GNU_Hurd
Let's start with that. Basically a different kernel (BSD and Linux are 'monolithic' kernels; Mac OS X is a hybrid).
The idea is that as much as possible runs in its own process and funnels through IPC. The actual kernel does not do much other than scheduling, memory management, and IPC.
The idea is you can upgrade your network stack without rebooting the computer. This was very appealing 20 years ago, when rebooting took 15+ minutes on some bigger hardware. When you can reboot a computer in under 30 seconds it is not as interesting. It is becoming a bit more interesting today with companies wanting '0 downtime' in their SLAs. It was also possible to run stuff on another machine and have it be considered part of the OS. Cool stuff, but in practice it ended up being slower than direct calls in many key instances.
Now the downside: they have been working on this since 1987. So work is slow, updates few, and resets of the project seem to happen every 3-5 years. At this point you have 4 major OSes to choose from that are all very good (2 of them basically being Unix and one being a very good clone).
From the end user's POV, comparing, say, Mac OS X vs Linux vs BSD vs Hurd? Not much difference. At this point Hurd is basically a research project. Oh, I am sure there are a few out there who use it for 'production use'. But not many.
Re:Need Clarity (Score:5, Informative)
Wheezy is a GNU/Linux operating system based on the Linux kernel. GNU/Hurd is the GNU operating system based on the Hurd kernel. Commonly people simply call GNU/Linux "Linux", but Linux is the kernel that GNU runs on top of.
If you do a bit of research on Hurd, the benefits are quite intriguing. One interesting bit is that since Hurd is a microkernel, the traditional split between kernel space and user space largely disappears, as nearly everything runs in userspace. The kernel only worries about memory management, process/thread scheduling and message passing. Services are provided by "servers" running in userspace that talk to each other via messages. Users no longer need root access for simple tasks like installing software, mounting disks or accessing hardware - tasks which today require root or sudo because the relevant code lives in kernel space. Instead, users talk to the servers directly. The idea is that by moving services to user space, the need to grant users any type of root access (setuid, su or sudo) is removed. There is still a security hierarchy with a "root" user who has full control of the system, but users never need access to that user or group. No need for root access means less chance that the root account can be compromised. Imagine the problem of "privileged ports" disappearing because those services (ftp, http, etc.) no longer need any sort of root access. They are simply allowed to read/write certain files/directories and access the network. If such a service is compromised, it can't gain root access.
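To make the "servers talking via messages" idea concrete, here is a toy sketch in C. The message layout, opcodes and roles are invented purely for illustration - real Hurd servers speak Mach IPC through interfaces generated by MIG - but the shape is the same: the "filesystem server" is just an ordinary unprivileged process sitting in a receive loop, and the client's read() becomes a message to it.

/* Toy model of a microkernel-style file server: the "server" is an
 * ordinary unprivileged process in a receive loop, and the "client"
 * sends it a read request as a message over a socketpair.  Message
 * layout and opcodes are invented for illustration; real Hurd
 * servers use Mach IPC with MIG-generated interfaces. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct msg {
    int  op;            /* 1 = read request, 2 = reply, 0 = shut down */
    char data[64];
};

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                    /* the "filesystem server" */
        struct msg m;
        while (read(sv[1], &m, sizeof m) == sizeof m) {
            if (m.op != 1)
                break;                    /* shutdown request */
            m.op = 2;                     /* build the reply */
            strcpy(m.data, "contents of the requested file");
            write(sv[1], &m, sizeof m);
        }
        _exit(0);
    }

    struct msg m = { .op = 1 };           /* "client": read() becomes a message */
    write(sv[0], &m, sizeof m);
    read(sv[0], &m, sizeof m);
    printf("client got reply: %s\n", m.data);

    m.op = 0;                             /* tell the server to exit */
    write(sv[0], &m, sizeof m);
    wait(NULL);
    return 0;
}

If the server process dies, only that service is gone; the client and the rest of the system keep running, which is the whole point of pushing services out of the kernel.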
Some have said microkernels are not necessary or that they impose a larger overhead in the form of message passing. That argument was valid ten-plus years ago, but today we have quad-core 1.5GHz cell phones and PC hardware that is so fast it has stagnated the market. Linus Torvalds famously argued against microkernels with Andrew Tanenbaum (the Minix creator who inspired Torvalds), who is in favor of them. Eric S. Raymond once said about Plan 9 (the planned successor to Unix, which failed to catch on), "There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough." Hurd may never see production use and, like Plan 9, may be relegated to a research or pet project of a handful of developers interested in operating system design. I hope it succeeds.
Re: (Score:3)
Imagine the problem of "privileged ports" disappearing because those services (ftp, http, etc.) no longer need any sort of root access.
The "privileged ports" restriction is a historical artefact that should be retired, in my opinion. It supposedly reassures the client that the service they're talking to is blessed by root on the server. That meant something in the days when UNIX only ran on big expensive boxes with admins holding the reins tight; when people generally trusted the routes between hosts. It means almost nothing when everyone and their dog can be root on a system of their own, and you've no idea what NAT routers and MITM explo
Re: (Score:2)
It is funny how the GNU/world newspeak has become so reflexive that it has lost all meaning.
That is a worship word. Yang worship. You will not speak it.
Does anyone have any non-silly comments? (Score:3)
Does anyone here know much about the Hurd?
I know it got stuck in "which microkernel shall we use" hell for the longest time. They seem to have settled, but it's not clear if the new one is a modern high performance one (under the Mach name), or if they just settled on the older one and suffered a performance hit.
Also, why is a microkernel OS so apparently difficult to construct?
As far as I can see, the basic bits of Hurd are all in place: the things that make it an operating system that actually works. But what took it so long? Microkernel-based things sound like they ought to be easier to develop (segfaults instead of a lockup, for instance), but apparently they are not.
Anyone got any experience?
Re: (Score:3)
According to RMS in "Revolution OS", the difficulties with their attempt at the microkernel lay in the timing of messages back and forth to all the little sub-processes/daemons/whatevers.
Re:Does anyone have any non-silly comments? (Score:5, Interesting)
Microkernel operating systems aren't inherently difficult to construct but there's a very noticeable tradeoff between the performance of a hybrid/monolithic kernel and the security/stability of a microkernel.
The performance hit comes from the hardware-isolated process model used by modern microprocessors. Whenever an application needs to do something outside of its own scope, such as request additional memory, access shared resources, or interface with a device driver, it makes a system call. In a monolithic system this requires the processor to switch from the running task to the kernel task, perform the requested action, and then switch back to the running task. If the kernel needs to access the task's memory, it can access it through segmentation or shared memory with ease, because the kernel in a monolithic system has no access constraints.
In a microkernel system the processor switches from the running task to an interprocess messaging task (part of the microkernel), which then copies the message to the requested server's buffer, switches to the server task, processes the message, switches back to the messaging task, copies the response back to the original client's buffer, and then switches back to the client task.
Task switches are very expensive in terms of CPU cycles, so minimizing them is key to obtaining performance. Hybrid and monolithic kernels have a massive performance edge on modern processors because they perform a fraction as many task switches and memory operations whenever a system call is performed.
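A crude way to see the difference is to count user/kernel boundary crossings for a single read() under each design. The toy tally below just walks the simplified flows described above - the steps are illustrative assumptions, not measurements of any real kernel:

/* Toy tally of user<->kernel transitions for a single read(),
 * following the simplified flows described above.  This is a
 * back-of-the-envelope illustration, not a benchmark. */
#include <stdio.h>

static int transitions;

static void enter_kernel(const char *why) { transitions++; printf("  -> kernel: %s\n", why); }
static void leave_kernel(const char *why) { transitions++; printf("  <- user:   %s\n", why); }

static void monolithic_read(void)
{
    enter_kernel("read() system call");
    /* VFS -> filesystem -> partition -> disk driver are all plain
     * in-kernel function calls, so no further transitions. */
    leave_kernel("data copied back to the caller");
}

static void microkernel_read(void)
{
    enter_kernel("read() system call (IPC send to FS server)");
    leave_kernel("deliver message to filesystem server");
    enter_kernel("FS server sends request to disk driver server");
    leave_kernel("deliver message to disk driver server");
    enter_kernel("disk driver replies to FS server");
    leave_kernel("deliver reply to filesystem server");
    enter_kernel("FS server replies to client");
    leave_kernel("deliver reply (and data) to client");
}

int main(void)
{
    transitions = 0;
    printf("monolithic read():\n");
    monolithic_read();
    printf("  %d transitions\n\n", transitions);

    transitions = 0;
    printf("microkernel read():\n");
    microkernel_read();
    printf("  %d transitions\n", transitions);
    return 0;
}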
Re:Does anyone have any non-silly comments? (Score:4, Informative)
Let's be clear: the performance hit comes from the expensive x86/x64 trap handling. On RISC processors a trap is on the order of 30 cycles; on x86/x64 it's on the order of 2,000-3,000 cycles. The braindead x86 architecture is the only reason microkernels haven't already "taken over the world".
The L4 and EROS/CapROS microkernels did a lot of small hacks to reduce the above overhead, and they got some pretty decent performance even out of x86. But contrary to your previous claim, x86 makes good microkernels very difficult to construct.
Re: (Score:2)
Performance is a problem but it isn't the problem. The distributed enforcement of policy is potentially a harder problem than even performance.
For example, on a monolithic kernel, ensuring that no process (except a specified list) is both setuid and talks to the network is (relatively) easy because different parts of the kernel can trust and rely on each others behavior. In a microkernel setting, these sorts of policies have to be encoded into how the different services interact with each other. That sor
Re:Does anyone have any non-silly comments? (Score:4, Interesting)
I looked at it a while back with an eye towards doing some work on it, but I'm interested in file systems and large storage and Hurd was limited to a max of 4GB per file because all files were memory mapped all the time and Hurd only runs on 32-bit architectures. So, for me, the amount of work before I could do something interesting was pretty steep.
I think the main reason that microkernels don't have great performance is because not much work has been put into them. I worked on Apple's Copland OS back in the mid-90's (the "failed" OS before OS X). Copland was a true microkernel and there were a number of performance optimizations that we'd put in. Had it shipped, we probably would have started making some modifications to the CPUs to support the microkernel better as well.
A big issue for performance is switching between processes. If you have to make multiple process switches for each kernel call that can get slow due to things like reloading the MMU tables, etc. There are a lot of different paths that could be taken. I could imagine a micro kernel, for example, written in Java or similar language running in a VM that enforced fine-grained memory controls, e.g. at the object level. If you used this for memory protection between trusted (e.g. OS level) servers you could avoid the hit of reloading the CPU's page maps. User space separations could be enforced by the CPU for better security.
Re:Does anyone have any non-silly comments? (Score:4, Informative)
>I could imagine a micro kernel, for example, written in Java or similar language running in a VM that enforced fine-grained memory controls, e.g. at the object level. If you used this for memory protection between trusted (e.g. OS level) servers you could avoid the hit of reloading the CPU's page maps. User space separations could be enforced by the CPU for better security.
Microsoft Research has done a lot of work on this exact idea. They even produced a usable operating system:
http://en.wikipedia.org/wiki/Singularity_(operating_system) [wikipedia.org]
Re:Does anyone have any non-silly comments? (Score:5, Funny)
I suppose there's a first time for everything.
Re:Does anyone have any non-silly comments? (Score:5, Interesting)
Managing the trust graph is why it's hard. Security is always hard. On a monolithic kernel we just say: Uhm, yeah, I trust all these drivers and whatever, even though I probably shouldn't because... well... That's how it works. GNU/HURD/HIRD has a more modular approach that pushes the drivers out of kernel space, but it has some design flaws (letting a directory node provide its own ".." -- yikes!), and the number of developers is next to non-existent.
Furthermore, modern processors are designed for monolithic kernels. Just as x86 has a bunch of cruft from when ASM coders wanted more complex instructions (for less / easier coding), features like multiple execution ring levels are lacking. ARM gives me two rings. AMD x86 gives me two rings. Intel x86 gives me four rings! A ring level is essentially a hardware-supported security level; each ring allows another "mode" of security. So, with only two rings, I can create an OS that has userspace and kernel mode. With three rings I can have Kernel, Trusted Driver/Module/Interface, and Userspace. The barriers required to easily create a secure microkernel don't exist. With only two rings we have to decide whether a module belongs in userspace or in kernel mode -- it doesn't belong in either! We need the One Ring to be an intermediary between Ring Zero (which rules them all) and give Ring 2 to the userland, and in the darkness bind them.
Everyone's using monoliths, so hardware makers give us two rings to make that happen. Hell, the hardware even prevents adoption of new (more secure) programming paradigms. Even the virtual memory addressing system in modern chipsets is designed to work best with C. I'm working on a more secure language with separate call and data stacks, and code-pointer overwrite protections for heap data, but the x86 / x64 / ARM platforms I'm working on are built for single stacks, and thus stack smashing or buffer overflow is an inherent design flaw. Segmented memory would be great for securing functions on a per-call basis -- swapping stacks at will, super-easy co-routines... but those bits were sacrificed to the More Memory God, and the registers became a part of the virtual addressing system. In 16-bit code I can do some neat things that I can't do in 32-bit mode code without a huge headache, because the hardware doesn't support me doing it.
So, that's why it takes so long: because we're trying to do stuff in software that the hardware doesn't support. These things are more secure and are great for modularity, but the hardware's designed to do it faster the monolith / C way. Note that to a program it won't matter whether the filesystem is uber-modular, or whether the device drivers are not in ring 0. Hell, eventually I'll port a C compiler to the multi-stack code.
Note: I don't work on GNU/HURD/HIRD, I just develop my own OSes. Yeah, I could work on Linux or other POSIX OSes, but why? That's not going to advance the state of the art in operating systems at all. A reliable design is grand for production systems, but to make the leap from the '80s, we're going to need some new hardware to help us out. Got viruses? Blame the chip maker, language implementer (not designer), and operating system. Seriously, they're all doing it WRONG if security is the goal. With separate call and data stacks on chip, and one ring more, you could actually have the damn security you want.
Re: (Score:2)
Not sure I follow the comment about more than two rings.
Wouldn't a 2 ring system with an IOMMU be sufficient? That way drivers could sit in ring 1, but still have access to the piece of hardware required.
This may not be a sane question: I have read a fair bit, but I've never tried to write a kernel.
Re: (Score:2)
Even without an IOMMU, you could have drivers in ring >0; you'd just run the risk of them snarfing precious bodily fluids via PCI DMA.
I thought that was one of the things that IOMMUs were supposed to fix (e.g. like the firewire security hole).
But really, microkernels aren't supposed to be the solution to the Wonky Unreliable Driver Issue. Virtualization is. Personally I'm glad we'll have widespread IOMMUs in about ten years' time.
Well, do the two not work well together? The microkernel alleviates the seg
Re: (Score:3)
In the final analysis, a modular message-passing architecture posed performance problems they were never able to adequately solve, pretty much as the naysayers predicted when microkernels were first proposed.
Re:Does anyone have any non-silly comments? (Score:5, Informative)
They aren't. There are many microkernel OSes out there that are successful, like QNX (which has made plenty of noise about how it runs nuclear reactors and such). Hell, even Windows was completely microkernel at one point.
The main problem is performance. This comes from two problems - repeated kernel requests, and IPC.
Kernel requests happen because device drivers are run at application level (which provides great isolation). However, device drivers tend to require a lot of stuff at the kernel level (which is why they're typically in the kernel...) - things like interrupts, physical memory access, DMA, memory allocations (both physical and virtual), and such. Each one of those things it can't do alone (because, well, it's an application - if applications can do those things, your microkernel is no better than DOS; the goal is to isolate things from each other). So it becomes a kernel API call to request an interrupt, to register an event object (the interrupt handler runs in the driver server as an interrupt thread), to get memory mappings installed, etc. Each API call is a system call in the end, and those are generally expensive because they require context saving and switching (some microkernel OSes use "thread migration" to mitigate this) and so forth.
The second problem is IPC. All the servers are isolated from each other and can only communicate through IPC mechanisms. So a microkernel has to end up being a message routing and forwarding service as well. Let's say an application wants to read a file it has open. It calls read(), which traps into the kernel (system call, after all); the kernel then sends a message to the server which can handle the call (the filesystem), so it passes the message to the filesystem server and then switches back to user mode so the filesystem server can handle it. The filesystem server then translates it to a block and issues a read to the partition driver (which, if it's a separate server, is yet another user-kernel-user transition), which then goes to the disk driver (u-k-u). From there, it goes to the bus handler (because said disk can be on SATA, IDE, USB, FireWire) where the transfer actually happens, and then the message winds its way back to the disk driver, the partition driver, the filesystem driver, then the application.
Switching from user to kernel is expensive - generally requires generating a software interrupt (system call) which triggers into the kernel's exception handler which then has to decode the request. Switching back is generally cheaper (usually just a return instruction which sets the proper mode bits), but you're still taking several mode switches per API call.
No big surprise, these things add up into a ton of cycles.
Microkernel OSes have developed means to alleviate the issue - thread migration being a big one (typically a server is implemented as a thread waiting on a mailbox; it gets the message, then handles it). With thread migration, the application's thread context isn't saved but migrated to the kernel, then passed on to the servers as necessary, so instead of having to wake up threads and run the server loop, the whole thing becomes more like expensive function calls, almost like RPC, except it executes on the thread that called it.
In a monolithic OS like Linux, all those messages and IPC are reduced down to function calls (usually through function pointers) - so the application making the system call becomes the only transition - the virtual filesystem handles the call, calls the filesystem driver, which calls the partition driver, which calls the disk driver, which calls the bus driver, ... and then they return up the stack like a typical subroutine call.
Oh, and Windows NT 3.51 did this as well. Guess what? Graphics performance sucked, which is why in NT 4, Microsoft moved the graphics driver into ring 0 (kernel mode), thus creating the ability for poorly written graphics drivers to crash the entire OS. But graphics are faster because you're not shuffling so many messages around. I think Windows has steadily put more and more of the graphics stack in the kernel since then, as well.
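The "function calls through function pointers" bit above is roughly the pattern Linux's VFS uses with its file_operations tables. Here is a stripped-down sketch of the idea - the struct and names are simplified stand-ins, not the kernel's real interfaces - showing that in a monolithic design a read() is just nested function calls within one address space:

/* Minimal sketch of monolithic-style dispatch: the "VFS" calls the
 * filesystem through a table of function pointers, all within one
 * address space, so a read() is just nested function calls.  The
 * struct and names are simplified stand-ins, not Linux's real
 * file_operations. */
#include <stdio.h>
#include <string.h>

struct file_ops {
    long (*read)(char *buf, long len);
};

/* A trivial "filesystem driver" that fills the buffer. */
static long myfs_read(char *buf, long len)
{
    const char *data = "hello from myfs";
    long n = (long)strlen(data);
    if (n > len)
        n = len;
    memcpy(buf, data, (size_t)n);
    return n;
}

static const struct file_ops myfs_ops = { .read = myfs_read };

/* The "VFS" layer: one indirect call, no context switch. */
static long vfs_read(const struct file_ops *ops, char *buf, long len)
{
    return ops->read(buf, len);
}

int main(void)
{
    char buf[64];
    long n = vfs_read(&myfs_ops, buf, sizeof buf - 1);
    buf[n] = '\0';
    printf("read %ld bytes: %s\n", n, buf);
    return 0;
}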
Academic Use (Score:5, Informative)
If you're interested in understanding microkernel OS architectures, then Hurd might be useful to experiment with. Other than that, it's pretty close to unusable, as there isn't even basic SATA and USB support (i.e., you're going to have to install on OLD hardware, or much more likely in a VM where you can supply virtualized IDE).
Honestly, while I certainly don't want to rain on anyone's pet project, Hurd has mostly become pointless. Its user space really offers nothing beyond what Linux or other POSIX *nix user spaces offer, and while microkernels are interesting concepts, they've never proven to be terribly practical in most applications. Even in terms of microkernel design Hurd is dated. I'd think it would be much more interesting to work on future-looking OSes, say something with a Plan 9-like user space and some more modern experimental kernel with features designed around high core counts and heterogeneous compute resources. Not sure what that is, but I'm sure there are people out there working on stuff like that.
Re: (Score:2)
Even in terms of microkernel design Hurd is dated
Well, that would be the place to focus. If Hurd focused on being the best microkernel project (vs. Minix or whatever) then they would attract lots of help from academia. Is there something preventing this?
I wonder, too, since a (the?) major issue with microkernels is the cost of message passing, if some of the newer/alternative distributed architectures (which have an inherent message passing delay anyway) wouldn't be a better fit for Hurd than x86 hypervis
Re: (Score:2)
Yeah, I don't know. I can't even pretend to be any more than superficially informed about modern OS design. There was a day when I worked on bare metal and RTOSes, ported FORTH to new processors and such things, but it's been 25 years now. I understand that people have developed some more useful communications techniques and that a lot of the issue is CPU designs that assume a monolithic kernel architecture and aren't kind to things like microkernels. I know from casual skimming there are various areas of ac
Re: (Score:3)
I agree. However, I think "mostly pointless" is the most moronic phrase. You like viruses? Keep using a single stack for code and data and having no fine grained memory access barriers... Think it's "mostly pointless" to try and solve the malware issue? Well, fuck you then. Say a solution is found, it won't be in a monolithic kernel design. We need at least one more layer between Users and Master of the Universe. Hell, we could even have another level under userspace for "plugins", wouldn't it be g
Re: (Score:2)
My my, and we must suppose you kiss your mother with that mouth too!
I know of no general principle which would make me conclude that message passing is safer than making system calls. In fact they offer pretty much exactly the same sorts of dangers. Much the same argument was touted by virtualization technology providers, and it hasn't proven particularly hard for exploit developers to worm their way from application to guest OS to hypervisor. I'm not at all convinced that microkernels are inherently any sa
Re: (Score:2)
Not even. Mach is a horrible microkernel. I have no idea why GNU/Hurd hasn't switched to something more serious, like an L4 variant.
Comment removed (Score:5, Informative)
Re:Loss of face if they dumped it (Score:5, Informative)
If they dumped Hurd now it would be a complete loss of face
Yay it's the daily make shit up about the FSF/RMS thread!
http://blog.reddit.com/2010/07/rms-ama.html [reddit.com]
TL;DR
http://lists.gnu.org/archive/html/bug-hurd/2010-08/msg00000.html [gnu.org]
Seriously, is it hard to google RMS Hurd before posting crap?
Re: (Score:2)
Seriously, is it hard to google RMS Hurd before posting crap?
It takes a while when you have to fetch web pages from other sites by sending mail to a program that fetches them, much like wget, and then mails them back so you can then look at them using a local web browser. (Seriously [stallman.org]!)
Re: (Score:2)
Well played!
Stallman is entertainingly single-minded, but I hope anyone can respect someone who actually manages to live day to day by the principles he claims to have[*].
[*]Well, I say "anyone", of course, but if your principles include mass murder etc., then respect is perhaps not due.
Re: (Score:2)
Yeah, he's a bit out there, but it would be incredibly hypocritical not to give credit where due. The guy is clearly the prime mover behind a lot of free software. I'm sitting here typing this on my nice FC17 system in Firefox and using all sorts of FSF software practically every hour of the day to run my business etc. The world needs guys like RMS, even if it doesn't seem to know it or appreciate them a whole lot. In my world one RMS is worth 12 Bill Gateses.
Re: (Score:2)
Yeah, he's a bit out there, but it would be incredibly hypocritical not to give credit where due.
Not only that, but he has the annoying habit of actually being right years in advance and banging on about topics that no one cares about because he's so far ahead.
Like GNU/Linux. No one cared. But now Android has come along, and it makes much more sense as people make funny, nonsensical comments about Linux, confusing the OS and the kernel.
The world does indeed need guys like RMS and they will always seem odd and stubb
Re: (Score:3)
Like GNU/Linux. No one cared.
Hate to break it to you, but we still don't care. Seriously. 99% of the people who use Android have no idea that it has anything to do with Linux. They just call it Android. And of the small minority of people who run "Linux" on the desktop, about 99% (from my observation), just call it "Linux." Richard Stallman and a handful of his groupies are still the only people who still care about putting "GNU" in front of Linux.
Microkernels are yesterday's tech (Score:2)
As a theoretical design they're very clean and simple to understand. In reality, however, due to all the message passing and context switching they're dog slow, and when every bit of performance matters that's just unacceptable.
Re: (Score:2)
Surely in the past "every bit of performance mattered" more than it does today? You can compensate for slow software by throwing faster hardware at the problem. Today we have faster hardware.
That said, I'm not volunteering to use a slower kernel full-time.
Windows 98 vs Windows 2000 (Score:2)
Remember all the complaints about performance from gamers when they lost all that speed going from a shared-memory OS (Windows 9x) to a protected-memory OS?
Microkernels are a similar problem, but without a big corporation to force users kicking and screaming into the modern age. I would like to have Multics-like features; my CPUs are mostly idle today. Being able to replace RAM, storage, or CPUs without shutdown, or even turning off half the computer for most of the day...
It's not like we don't have more CPU powe
Re: (Score:2)
Re: (Score:2, Interesting)
This is ridiculous. Modern computers are faster than most people need. We have the cycles to go microkernel everywhere. There are phone OSes that run on a microkernel! If a phone can do it, so can your PC.
Consider if android upgraded to a micro kernel. Sure, they couldn't sell A9 chips anymore for tablets, but beyond that it would be awesome. The sound server could restart when something crashed, etc. Tablets and phones are a great example of something that should be always up. No one wants to reboot
question (Score:2)
What, if any, are the advantages that a user would notice of GNU/Hurd over GNU/Linux?
Technology Ecosystem (Score:3, Insightful)
Re:Oh come on. (Score:5, Funny)
Oh come on. April 1st is over. Everyone knows Hurd is a running gag. It's an ancient meme.
Ha, indeed! Someone once tried to convince me that Duke Nukem Forever had been released too. I'm not so stupid that I'd fall for that!
Re: (Score:2)
Re:Oh come on. (Score:5, Funny)
The Mayans were pointing to the dawn of a new era, the age of Linux on the desktop, which supposedly will last for the next B'ak'tun until Hurd is up and running on the L4 microkernel.
Re: (Score:3)
Turns out, he only got up to pee and then went back to sleep.
Re:Oh come on. (Score:4, Funny)
Oh come on. April 1st is over. Everyone knows Hurd is a running gag. It's an ancient meme.
You mean this is "Debian does Dallas"?
Re: (Score:2)
HURD was announced in 1990, "What is love" came out in 1993, "Hurt" came out in 1994...
The More You Know!
Re: (Score:2)
The GNU/ in this case is actually unneeded, as it is a GNU project, unlike GNU/Linux, where the Linux kernel was not a GNU project.
Re: (Score:2)
The FAQ claims they have a working AHCI driver.
Re: (Score:2)
My understanding is that Hurd has been held back because Stallman was a bit of a control freak and made it very difficult for the community to help him develop the kernel, even after his wrists went and he couldn't personally code anymore. Linus had a much much better attitude towards community development which allowed the Linux kernel to completely disp
Re: (Score:2)
In case you were actually offended, place blame where it belongs [wikipedia.org].
Then help fix the problem by submitting a bug report to your favorite distro instead of just bitching about it.
Re:So how many GNU/whatevers are there (Score:4, Informative)
There is Debian, with its GNU/kFreeBSD version:
http://www.debian.org/ports/kfreebsd-gnu/ [debian.org]
And Gentoo has their variant:
http://www.gentoo.org/proj/en/gentoo-alt/bsd/fbsd/ [gentoo.org]
Those are the only two I know about :)