
Debian GNU/Hurd 2013 Released 264

jrepin writes "The GNU Hurd is the GNU project's replacement for the Unix kernel. It is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux). The Debian GNU/Hurd team announces the release of Debian GNU/Hurd 2013. This is a snapshot of Debian 'sid' at the time of the Debian 'wheezy' release (May 2013), so it is mostly based on the same sources. Debian GNU/Hurd is currently available for the i386 architecture with more than 10,000 software packages available (more than 75% of the Debian archive)."
  • Need Clarity (Score:3, Interesting)

    by CrimsonKnight13 ( 1388125 ) on Wednesday May 22, 2013 @08:54AM (#43792809) Homepage
    Would anyone mind explaining to me the key differences between Debian Wheezy & Debian GNU/Hurd 2013? What are the benefits of using GNU/Hurd 2013?
    • Re:Need Clarity (Score:5, Informative)

      by Anonymous Coward on Wednesday May 22, 2013 @08:59AM (#43792859)

      What are the benefits of using GNU/Hurd 2013?

      There aren't any.

    • Re:Need Clarity (Score:5, Informative)

      by Pecisk ( 688001 ) on Wednesday May 22, 2013 @09:00AM (#43792877)

      Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.
      Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).

      Hurd has been the GNU project's official kernel, at least conceptually, for years (but then Linux came along and put Hurd on the back burner). Thanks to renewed interest, its development has picked up, and so we now have an actual distribution running on it.

      The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids use of such things without disclosure to the user, killing any chance of a supported binary Nvidia driver. Still, most open source stuff can be ported to run on it.

      • Re:Need Clarity (Score:5, Informative)

        by Anonymous Coward on Wednesday May 22, 2013 @09:25AM (#43793101)

        Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.

        Wheezy is also available for other CPU architectures, e.g. ARM and MIPS. And, as a preview, you can use it with a FreeBSD kernel on i386 and amd64 instead of the normal Linux kernel.

        Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).

        You're right, in fact it's only i386, not i386 and amd64.

        • by armanox ( 826486 )

          I was going to say the same thing. Think of Debian GNU/Hurd 2013 as a snapshot of a subset of the whole Debian collection.

      • by dpiven ( 518007 )

        Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.
        Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).

        i386 != x86_64. Hurd is 32-bit only, according to the FAQ [gnu.org].

        • by Ultra64 ( 318705 )

          "Hurd is 32-bit only, according to the FAQ [gnu.org]."

          What in the actual fuck. Do they think it's still the '90s or something?

      • Hurdles for HURD (Score:5, Interesting)

        by unixisc ( 2429386 ) on Wednesday May 22, 2013 @05:11PM (#43797623)

        The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids use of such things without disclosure to the user, killing any chance of a supported binary Nvidia driver. Still, most open source stuff can be ported to run on it.

        Yeah, that is what would make it pretty much a non-starter on the desktop, since it's probably GPLv3 - or else its rationale for existing separately from Linux is about as strong as the rationale for East Germany or North Korea existing. Since binary blobs would be banned here, they'd be limited to Intel & AMD GPUs, however bad, and then, on top of that, run X and GNOME 3.whatever in fallback mode, or in real mode if the drivers are liberated. In short, the best use of HURD, where it would be almost guaranteed to work right, is in CLI mode, if one is like RMS and lives in an emacs world. In that case, the login script could just as well drop one straight into emacs, and one is off doing everything one does there.

        Just wondering if the "Libre" Linux crowd will celebrate this, or release a list of 50 reasons why Debian doesn't pass the purity test and therefore Debian Hurd can't be endorsed? I sure wish gNewSense would come up with a HURD distro based on this one.

    • Debian Wheezy is the Linux kernel based O/S we all know and love - Debian 7.0.
      The Debian devs took a huge number of the packages available and re-built them to work in the Hurd environment, which uses a different kernel.

      There are probably no inherent benefits to using Hurd over Linux - and there are certainly many reasons for picking Linux over Hurd, support being just one of them.
      If you have a spare VM though then it may be worth installing Hurd just as a learning process.
      • Re:Need Clarity (Score:5, Informative)

        by SirGarlon ( 845873 ) on Wednesday May 22, 2013 @09:21AM (#43793071)

        There are probably no inherent benefits to using Hurd over Linux - and there are certainly many reasons for picking Linux over Hurd, support being just one of them.

        At this stage of Hurd's development, parent is correct. For daily desktop use, Linux is clearly mature enough and Hurd is very probably not.

        From the perspective of design, Hurd has some good ideas, as the GNU Web site explains [gnu.org]. My favorite is:

        the Hurd goes one step further in that most of the components that constitute the whole kernel are running as separate user-space processes and are thus using different address spaces that are isolated from each other. This is a multi-server design based on a microkernel. It is not possible that a faulty memory dereference inside the TCP/IP stack can bring down the whole kernel, and thus the whole system, which is a real problem in a monolithic Unix kernel architecture.

        So there are design features of the Hurd that make it attractive to developers. I can foresee the Hurd maturing to the point where embedded device makers would seriously consider it, for example.

        • by satuon ( 1822492 )

          To tell you the truth, I'm using Ubuntu daily and I don't remember ever having the kernel itself crash, which I assume would produce the Linux equivalent of a BSOD. Usually it's just user-space programs that crash.

          So if the kernel not crashing is the main selling point of Hurd, I don't see much reason why I personally would use it, because it fixes a nonexistent problem (at least for me).

          • Re:Need Clarity (Score:5, Insightful)

            by slim ( 1652 ) <john@hartnup . n et> on Wednesday May 22, 2013 @10:34AM (#43793833) Homepage

            That's entirely pragmatic of you, and that's fine.

            But say you wanted to try out an experimental device driver. In Linux it would be a kernel module. If it went wrong, it could potentially cause a kernel panic and halt your entire system. Or, since it has kernel privileges, it could just quietly spy on some element of your system and phone home with your confidential data without you knowing.

            On a microkernel, your experimental device driver would run in separate memory space to other components. If the experimental driver crashes out, the rest of the system keeps going. It can't spy on your other components, because its access is restricted.

            It may not address a need *you* have, but it may well be useful to others.

            • On the flip side, having all that stuff in userspace surely means a massive performance hit, does it not?

              I seem to recall for instance with various *nixes it is possible to have a single box handling tens of thousands of IPSec requests, because IPSec is handled within the kernel (IIRC), whereas that same box might handle a few hundred OpenVPN sessions simply because it is all userland and context switching absolutely kills performance.

              If that is an accurate example of the type of performance hit incurred on

          • If you're developing kernel components, having a kernel that crashes cleanly can make development much easier. Being able to shut down your buggy kernel level program and then try again sure beats rebooting after a panic. Even though this isn't directly helpful to users of the system, making the test side of development easier can lead to the program evolving more quickly over time. The Hurd design has been filled with taking the side of various trade-offs that take longer, but are believed to be more po

        • by gmack ( 197796 )

          That makes plenty of sense until you realize that device drivers that interact with the hardware are far more likely to crash than things like TCP. Hardware often has features like Direct Memory Access (DMA) to and from the device to make access more efficient, and when a hardware driver crashes, a misplaced DMA setting on the hardware can scribble over any memory it wants.

    • Re:Need Clarity (Score:4, Informative)

      by Anonymous Coward on Wednesday May 22, 2013 @09:18AM (#43793041)


      Let's start with what a microkernel is. Basically, it's a different kind of kernel (BSD and Linux are 'monolithic' kernels; MacOSX is a hybrid).

      The idea is that everything, as much as possible, runs in its own process and funnels through IPC. The actual kernel does not do much other than scheduling, memory management, and IPC.

      The idea is that you can upgrade your network stack without rebooting the computer. This was very appealing 20 years ago, when rebooting took 15+ minutes on some bigger hardware. When you can reboot a computer in under 30 seconds it is not as interesting, though it is becoming a bit more relevant today with companies wanting 'zero downtime' in their SLAs. It was also possible to run stuff on another machine and have it be considered part of the OS. Cool stuff, but in practice it ended up being slower than direct calls in many key instances.

      Now the downside: they have been working on this since 1987, so work is slow, updates are few, and resets of the project seem to happen every 3-5 years. At this point you have four major OSes to choose from that are all very good (two of them basically being Unix and one a very good clone).

      From the end user's point of view, MacOSX vs Linux vs BSD vs Hurd? Not much difference. At this point Hurd is basically a research project. Oh, I am sure there are a few out there who use it for 'production use', but not many.

    • Re:Need Clarity (Score:5, Informative)

      by LoRdTAW ( 99712 ) on Wednesday May 22, 2013 @09:56AM (#43793413)

      Wheezy is a GNU/Linux operating system based on the Linux kernel. GNU/Hurd is the GNU operating system based on the Hurd kernel. People commonly call GNU/Linux simply "Linux", but Linux is just the kernel that GNU runs atop.

      If you do a bit of research on Hurd, the benefits are quite intriguing. One interesting bit: since Hurd is a microkernel, the distinction between kernel-space and user-space largely disappears, because almost everything runs in user space. The kernel only worries about memory management, process/thread scheduling, and message passing. Services are provided by "servers" running in user space that talk to each other via messages.

      Users no longer need root access for simple tasks like installing software, mounting disks, or accessing hardware, tasks which otherwise require root or sudo because the services behind them live in kernel space. Instead, users talk to the servers directly. The idea is that by moving services to user space, the need to grant users any type of root access (setuid, su, or sudo) is removed. There is still a security hierarchy with a "root" user who has full control of the system, but users never need access to that user or group. No need for root access means less chance that the root account can be compromised.

      Imagine the problem of "privileged ports" disappearing, because those services (ftp, http, etc.) no longer need any sort of root access. They are simply allowed to read/write certain files/directories and access the network. If such a service is compromised, it can't gain root access.

      Some have said microkernels are unnecessary, or that they impose a larger overhead in the form of message passing. That argument was valid ten-plus years ago, but today we have quad-core 1.5GHz cell phones and PC hardware so fast it has stagnated the market. Linus Torvalds famously argued against the microkernel with Andrew Tanenbaum (the Minix creator who inspired Torvalds), who is in favor of them. Eric S. Raymond once said of Plan 9 (the planned successor to Unix that failed): "There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough." Hurd may never see production use and may, like Plan 9, be relegated to a research or pet project of a handful of developers interested in operating system design. I hope it succeeds.

      • by slim ( 1652 )

        Imagine the problem of "privileged ports" disappearing because those services (ftp, http, etc.) no longer need any sort of root access.

        The "privileged ports" restriction is a historical artefact that should be retired, in my opinion. It supposedly reassures the client that the service they're talking to is blessed by root on the server. That meant something in the days when UNIX only ran on big expensive boxes with admins holding the reins tight; when people generally trusted the routes between hosts. It means almost nothing when everyone and their dog can be root on a system of their own, and you've no idea what NAT routers and MITM explo

  • Does anyone here know much about the Hurd?

    I know it got stuck in "which microkernel shall we use" hell for the longest time. They seem to have settled, but it's not clear if the new one is a modern high performance one (under the Mach name), or if they just settled on the older one and suffered a performance hit.

    Also, why is a microkernel OS so apparently difficult to construct?

    As far as I can see, the basic bits of Hurd are all in place: the things that make it an operating system that actually works. But what took it so long? Microkernel-based things sound like they ought to be easier to develop (segfaults instead of a lockup, for instance), but apparently they are not.

    Anyone got any experience?

    • According to RMS in "Revolution OS", the difficulty with their attempt at a microkernel was in the timing of messages back and forth among all the little sub-processes/daemons/whatevers.

    • by Pinhedd ( 1661735 ) on Wednesday May 22, 2013 @09:34AM (#43793173)

      Microkernel operating systems aren't inherently difficult to construct but there's a very noticeable tradeoff between the performance of a hybrid/monolithic kernel and the security/stability of a microkernel.

      The performance hit comes from the hardware-isolated process model used by modern microprocessors. Whenever an application needs to do something outside of its own scope, such as request additional memory, access shared resources, or interface with a device driver, it makes a system call. In a monolithic system this requires the processor to switch from the running task to the kernel task, perform the requested action, and then switch back to the running task. If the kernel needs to access the task's memory, it can do so through segmentation or shared memory with ease, because the kernel in a monolithic system has no access constraints.

      In a microkernel system the processor switches from the running task to an interprocess messaging task (part of the microkernel), which then copies the message to the requested server's buffer, switches to the server task, processes the message, switches back to the messaging task, copies the response back to the original client's buffer, and then switches back to the client task.

      Task switches are very expensive in terms of CPU cycles, so minimizing them is key to obtaining performance. Hybrid and Monolithic kernels have a massive performance edge on modern processors because they perform a fraction as many task switches and memory operations whenever a system call is performed.

      • by naasking ( 94116 ) <naasking@@@gmail...com> on Wednesday May 22, 2013 @12:31PM (#43795003) Homepage

        The performance hit comes from the hardware isolated process model used by modern microprocessors.

        Let's be clear: the performance hit comes from the expensive x86/x64 trap handling. RISC processors trap on the order of 30 cycles; x86/x64 is on the order of 2,000-3,000 cycles. The braindead x86 architecture is the only reason microkernels haven't already "taken over the world".

        The L4 and EROS/CapROS microkernels did a lot of small hacks to reduce the above overhead, and they got some pretty decent performance even out of x86. But contrary to your previous claim, x86 makes good microkernels very difficult to construct.

      • Performance is a problem but it isn't the problem. The distributed enforcement of policy is potentially a harder problem than even performance.

        For example, on a monolithic kernel, ensuring that no process (except a specified list) is both setuid and talks to the network is (relatively) easy because different parts of the kernel can trust and rely on each others behavior. In a microkernel setting, these sorts of policies have to be encoded into how the different services interact with each other. That sor

    • by putaro ( 235078 ) on Wednesday May 22, 2013 @09:42AM (#43793269) Journal

      I looked at it a while back with an eye towards doing some work on it, but I'm interested in file systems and large storage and Hurd was limited to a max of 4GB per file because all files were memory mapped all the time and Hurd only runs on 32-bit architectures. So, for me, the amount of work before I could do something interesting was pretty steep.

      I think the main reason that microkernels don't have great performance is because not much work has been put into them. I worked on Apple's Copland OS back in the mid-90's (the "failed" OS before OS X). Copland was a true microkernel and there were a number of performance optimizations that we'd put in. Had it shipped, we probably would have started making some modifications to the CPUs to support the microkernel better as well.

      A big issue for performance is switching between processes. If you have to make multiple process switches for each kernel call that can get slow due to things like reloading the MMU tables, etc. There are a lot of different paths that could be taken. I could imagine a micro kernel, for example, written in Java or similar language running in a VM that enforced fine-grained memory controls, e.g. at the object level. If you used this for memory protection between trusted (e.g. OS level) servers you could avoid the hit of reloading the CPU's page maps. User space separations could be enforced by the CPU for better security.

    • Managing the trust graph is why it's hard. Security is always hard. On a monolithic kernel we just say: Uhm, yeah, I trust all these drivers and whatever, even though I probably shouldn't because... well... That's how it works. GNU/HURD/HIRD has a more modular approach that pushes the drivers out of kernel space, but it has some design flaws ( letting a directory node provide its own ".." -- Yikes! ), and the number of developers is next to non-existent.

      Furthermore, modern processors are designed for monolithic kernels. Just as x86 carries a bunch of cruft from when ASM coders wanted more complex instructions (for less / easier coding), features like multiple execution ring levels are missing. ARM gives me two rings. AMD x86 gives me two rings. Intel x86 gives me four rings! A ring level is essentially a hardware-supported security level; each ring allows another "mode" of security. So with only two rings I can create an OS that has userspace and kernel mode. With three rings I could have kernel, trusted driver/module/interface, and userspace. The barriers required to easily create a secure microkernel don't exist. With only two rings we have to decide whether a module belongs in userspace or kernel mode, when really it belongs in neither! We need the One Ring to be an intermediary between Ring Zero (which rules them all) and give Ring 2 to the userland, and in the darkness bind them.

      Everyone's using monoliths, so hardware makers give us two rings to make that happen. Hell, the hardware even prevents adoption of new (more secure) programming paradigms. Even the virtual memory addressing system in modern chipsets is designed to work best with C. I'm working on a more secure language with separate call and data stacks, and code-pointer overwrite protections for heap data, but the x86 / x64 / ARM platforms I'm working on are built for single stacks, and thus stack smashing via buffer overflow is an inherent design flaw. Segmented memory would be great for securing functions on a per-call basis (swapping stacks at will, super easy co-routines...), but those bits were sacrificed to the More Memory God, and the segment registers became part of the virtual addressing system. In 16-bit code I can do some neat things that I can't do in 32-bit mode without a huge headache, because the hardware doesn't support me doing it.

      So, that's why it takes so long. Because we're trying to do stuff in software that the hardware doesn't support. These things are more secure and are great for modularity, but the hardware's designed to do it faster the monolith / C way. Note that to a program it won't matter about whether the filesystem is uber modular, or the device drivers are not in ring 0. Hell, eventually I'll port a C compiler to the multi-stack code.

      Note: I don't work on GNU/HURD/HIRD, just develop my own OSs. Yeah, I could work on Linux or other POSIX OSs, but why? That's not going to advance the state of the art in Operating Systems at all. A reliable design is grand for production systems, but to make the leap from the 80's, we're going to need some new hardware to help us out. Got Viruses? Blame the Chip Maker, Language Implementer (not designer), and Operating System. Seriously, they're all doing it WRONG if security is the goal. With a separate call and data stacks on chip, One Ring more, you could actually have the damn security you want.

      • Not sure I follow the comment about more than two rings.

        Wouldn't a 2 ring system with an IOMMU be sufficient? That way drivers could sit in ring 1, but still have access to the piece of hardware required.

        This may not be a sane question: I have read a fair bit, but I've never tried to write a kernel.

    • Also, why is a microkernel OS so apparently difficult to construct?

      In the final analysis, a modular message-passing architecture posed performance problems they were never able to adequately solve, pretty much as the naysayers predicted when microkernels were first proposed.

    • by tlhIngan ( 30335 ) <slashdot&worf,net> on Wednesday May 22, 2013 @11:42AM (#43794575)

      Also, why is a microkernel OS so apparently difficult to construct?

      They aren't. There are many microkernel OSes out there that are successful, like QNX (which has made plenty of noise about how it runs nuclear reactors and such). Hell, even Windows was completely microkernel at one point.

      The main problem is performance. This comes from two problems - repeated kernel requests, and IPC.

      Kernel requests happen because device drivers run at application level (which provides great isolation). However, device drivers tend to require a lot of stuff at the kernel level (which is why they're typically in the kernel...): things like interrupts, physical memory access, DMA, and memory allocations (both physical and virtual). Each of those things a driver can't do alone (because, well, it's an application; if applications could do those things, your microkernel would be no better than DOS, since the goal is to isolate things from each other). So it becomes a kernel API call to request an interrupt, to register an event object (the interrupt handler runs in the driver server as an interrupt thread), to get memory mappings installed, etc. Each API call is a system call in the end, and those are generally expensive because they require context saving and switching (some microkernel OSes use "thread migration" to mitigate this) and so forth.

      The second problem is IPC. All the servers are isolated from each other and can only communicate through IPC mechanisms, so a microkernel ends up being a message routing and forwarding service as well. Let's say an application wants to read a file it has open. It calls read(), which traps into the kernel (system call, after all); the kernel then sends a message to the server that can handle the call (the filesystem), passing the message to the filesystem server and switching back to user mode so the filesystem server can handle it. The filesystem server then translates it to a block and issues a read to the partition driver (which, if it's a separate server, is yet another user-kernel-user transition), which then goes to the disk driver (u-k-u). From there it goes to the bus handler (because said disk can be on SATA, IDE, USB, or Firewire), where the transfer actually happens, and then the message winds its way back through the disk driver, the partition driver, and the filesystem driver to the application.

      Switching from user to kernel is expensive - generally requires generating a software interrupt (system call) which triggers into the kernel's exception handler which then has to decode the request. Switching back is generally cheaper (usually just a return instruction which sets the proper mode bits), but you're still taking several mode switches per API call.

      No big surprise, these things add up into a ton of cycles.

      Microkernel OSes have developed means to alleviate the issue, thread migration being a big one (typically a server is implemented as a thread waiting on a mailbox; it gets a message, then handles it). With thread migration, the application's thread context isn't saved but migrated into the kernel and then passed on to the servers as necessary, so instead of having to wake up threads and run the server loop, the whole chain becomes something like expensive function calls, almost like RPC except that the calling thread is where the work executes.

      In a monolithic OS like Linux, all those messages and IPC are reduced down to function calls (usually through function pointers) - so the application making the system call becomes the only transition - the virtual filesystem handles the call, calls the filesystem driver, which calls the partition driver, which calls the disk driver, which calls the bus driver, ... and then they return up the stack like a typical subroutine call.

      Oh, and Windows NT 3.51 did this as well. Guess what? Graphics performance sucked, which is why in NT 4 Microsoft moved the graphics driver into ring 0 (kernel mode), thus creating the ability for poorly written graphics drivers to crash the entire OS. But graphics got faster because you're not shuffling so many messages around. I think Windows has steadily put more and more of the graphics stack into the kernel since then, as well.

  • Academic Use (Score:5, Informative)

    by Giant Electronic Bra ( 1229876 ) on Wednesday May 22, 2013 @09:17AM (#43793029)

    If you're interested in understanding microkernel OS architectures, then Hurd might be useful to experiment with. Other than that it's pretty close to unusable, as there isn't even basic SATA and USB support (i.e. you're going to have to install on OLD hardware, or much more likely in a VM where you can supply virtualized IDE).

    Honestly, while I certainly don't want to rain on anyone's pet project, Hurd has mostly become pointless. Its user space really offers nothing beyond what Linux or other POSIX *nix user spaces offer, and while microkernels are interesting concepts, they've never proven terribly practical in most applications. Even in terms of microkernel design, Hurd is dated. I'd think it would be much more interesting to work on forward-looking OSes, say something with a Plan 9-like user space and a more modern experimental kernel with features designed around high core counts and heterogeneous compute resources. Not sure what that is, but I'm sure there are people out there working on stuff like that.

    • Even in terms of microkernel design Hurd is dated

      Well, that would be the place to focus. If Hurd focused on being the best microkernel project (vs. Minix or whatever) then they would attract lots of help from academia. Is there something preventing this?

      I wonder, too, since a (the?) major issue with microkernels is the cost of message passing, if some of the newer/alternative distributed architectures (which have an inherent message passing delay anyway) wouldn't be a better fit for Hurd than x86 hypervis

      • Yeah, I don't know. I can't even pretend to be any more than superficially informed about modern OS design. There was a day when I worked on bare metal and RTOSes, ported FORTH to new processors and such things, but its been 25 years now. I understand that people have developed some more useful communications techniques and that a lot of the issue is CPU designs that assume a monolithic kernel architecture and aren't kind to things like microkernels. I know from casual skimming there are various areas of ac

    • I agree. However, I think "mostly pointless" is the most moronic phrase. You like viruses? Keep using a single stack for code and data and having no fine grained memory access barriers... Think it's "mostly pointless" to try and solve the malware issue? Well, fuck you then. Say a solution is found, it won't be in a monolithic kernel design. We need at least one more layer between Users and Master of the Universe. Hell, we could even have another level under userspace for "plugins", wouldn't it be g

      • My my, and we must suppose you kiss your mother with that mouth too!

        I know of no general principle which would make me conclude that message passing is safer than making system calls. In fact they offer pretty much exactly the same sorts of dangers. Much the same argument was touted by virtualization technology providers, and it hasn't proven particularly hard for exploit developers to worm their way from application to guest OS to hypervisor. I'm not at all convinced that microkernels are inherently any sa

    • by naasking ( 94116 )

      If you're interested in understanding microkernel OS architectures, then Hurd might be useful to experiment with.

      Not even. Mach is a horrible microkernel. I have no idea why GNU/Hurd hasn't switched to something more serious, like an L4 variant.

  • As a theoretical design they're very clean and simple to understand. In reality, however, due to all the message passing and context switching they're dog slow, and when every bit of performance matters that's just unacceptable.

    • by slim ( 1652 )

      Surely in the past "every bit of performance mattered" more than it does today? You can compensate for slow software by throwing faster hardware at the problem. Today we have faster hardware.

      That said, I'm not volunteering to use a slower kernel full-time.

    • Remember all the complaints about performance from gamers when they lost all that speed going from a shared-memory OS (Windows 9x) to a protected-memory OS?

      Microkernels are a similar problem but without a big corporation to force users kicking and screaming into the modern age. I would like to have Multics like features; my CPUs are mostly idle today. Being able to replace RAM, storage, CPUs without shutdown or even turning off half the computer for most of the day...

      It's not like we don't have more CPU powe

    • by cpghost ( 719344 )
      Not that slow today, provided you use the right microkernel. Look at L4Ka::Pistachio [l4ka.org], for example, if you're looking for very fast context switching and message passing in registers without overhead. Now, if you talk Mach, then you're right.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      This is ridiculous. Modern computers are faster than most people need. We have the cycles to go microkernel everywhere. There are phone OSes that run on a microkernel! If a phone can do it, so can your PC.

      Consider if Android upgraded to a microkernel. Sure, they couldn't sell A9 chips anymore for tablets, but beyond that it would be awesome. The sound server could restart when something crashed, etc. Tablets and phones are a great example of something that should be always up. No one wants to reboot

  • What, if any, are the advantages that a user would notice of GNU/Hurd over Gnu/Linux?

  • by ikhider ( 2837593 ) on Wednesday May 22, 2013 @01:15PM (#43795453)
    This is good news. I am glad people are working on the BSDs, Hurd, Minix, and other systems, because it ensures technological diversity. It would be a sad state if only GNU/Linux and proprietary systems were developed. If we have a thriving ecosystem of various operating systems and kernels, that bodes far better for advancement than a monoculture.
