Debian GNU/Hurd 2013 Released

jrepin writes "The GNU Hurd is the GNU project's replacement for the Unix kernel. It is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux). The Debian GNU/Hurd team announces the release of Debian GNU/Hurd 2013. This is a snapshot of Debian 'sid' at the time of the Debian 'wheezy' release (May 2013), so it is mostly based on the same sources. Debian GNU/Hurd is currently available for the i386 architecture with more than 10,000 software packages available (more than 75% of the Debian archive)."

  • Re:Need Clarity (Score:5, Informative)

    by Anonymous Coward on Wednesday May 22, 2013 @08:59AM (#43792859)

    What are the benefits of using GNU/Hurd 2013?

    There aren't any.

  • Re:Need Clarity (Score:5, Informative)

    by Pecisk ( 688001 ) on Wednesday May 22, 2013 @09:00AM (#43792877)

    Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.
    Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I assume it doesn't support other platforms, but I might be wrong).

    Hurd has been the GNU project's official kernel, at least on paper, for years (but then Linux came along and put Hurd on the back burner). Thanks to renewed interest its development has picked up, and therefore we now have an actual distribution running on it.

    The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids usage of such things without disclosure to the user, killing any chance of having the binary Nvidia driver supported. Still, most open source stuff can be ported to it.

  • Academic Use (Score:5, Informative)

    by Giant Electronic Bra ( 1229876 ) on Wednesday May 22, 2013 @09:17AM (#43793029)

    If you're interested in understanding microkernel OS architectures, then Hurd might be useful to experiment with. Other than that it's pretty close to unusable, as there isn't even basic SATA or USB support (i.e. you're going to have to install on OLD hardware, or, much more likely, in a VM where you can supply a virtualized IDE disk).

    Honestly, while I certainly don't want to rain on anyone's pet project, Hurd has mostly become pointless. Its user space really offers nothing beyond what Linux or other POSIX *nix user spaces offer, and while microkernels are interesting concepts they've never proven to be terribly practical in most applications. Even in terms of microkernel design Hurd is dated. I'd think it would be much more interesting to work on future-looking OSes, say something with a Plan 9-like user space and some more modern experimental kernel with features designed around high core counts and heterogeneous compute resources. Not sure what that is, but I'm sure there are people out there working on stuff like that.

  • Re:Need Clarity (Score:4, Informative)

    by Anonymous Coward on Wednesday May 22, 2013 @09:18AM (#43793041)

    http://en.wikipedia.org/wiki/GNU_Hurd

    Let's start with that. It's basically a different kind of kernel (BSD and Linux are 'monolithic' kernels; Mac OS X is a hybrid).

    The idea is that everything, as much as possible, runs in its own process and funnels through IPC. The actual kernel does not do much other than scheduling, memory management and IPC (there's a toy sketch of this request/reply pattern at the end of this comment).

    The idea is you can upgrade your network stack without rebooting the computer. This was very appealing 20 years ago, when rebooting took 15+ minutes on some bigger hardware; when you can reboot a computer in under 30 seconds it is not as interesting, though it is becoming a bit more interesting again with companies wanting 'zero downtime' in their SLAs. It was also possible to run stuff on another machine and have it be considered part of the OS. Cool stuff, but in practice it ended up being slower than direct calls in many key instances.

    Now the downside: they have been working on this since 1987, so work is slow, updates are few, and resets of the project seem to happen every 3-5 years. At this point you have four major OSes to choose from that are all very good (two of them basically being Unix, and one a very good clone).

    From the end-user POV, the difference between, say, Mac OS X vs Linux vs BSD vs Hurd? Not much. At this point Hurd is basically a research project. Oh, I am sure there are a few out there who use it in 'production'. But not many.
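
    A minimal sketch of that request/reply pattern, added here for illustration and not taken from the comment: a "server" runs in its own process, and the client reaches it only through fixed-size messages over a pipe, which stands in for the microkernel's IPC path. The message format and op codes are invented; real Mach/Hurd IPC uses ports and mach_msg(), not pipes.

        /* Toy sketch of microkernel-style IPC: the "server" runs in its own
         * process and the only way to reach it is a fixed-size message over a
         * pipe (standing in for the kernel's IPC path). */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/wait.h>

        struct msg { int op; char payload[64]; };   /* hypothetical wire format */

        int main(void) {
            int to_srv[2], from_srv[2];
            pipe(to_srv); pipe(from_srv);

            if (fork() == 0) {                      /* the "network stack" server */
                struct msg m;
                while (read(to_srv[0], &m, sizeof m) == sizeof m) {
                    if (m.op == 0) break;           /* op 0: shut down (e.g. to swap in a new server) */
                    snprintf(m.payload, sizeof m.payload, "handled op %d", m.op);
                    write(from_srv[1], &m, sizeof m);
                }
                _exit(0);
            }

            struct msg req = { .op = 1, .payload = "ping" }, rep;
            write(to_srv[1], &req, sizeof req);     /* the client's "system call": send a request */
            read(from_srv[0], &rep, sizeof rep);    /* block until the server replies */
            printf("client got: %s\n", rep.payload);

            req.op = 0;                             /* ask the server to exit; a new one could be started */
            write(to_srv[1], &req, sizeof req);
            wait(NULL);
            return 0;
        }

    The only point of the sketch is that the client never calls the server's code directly: everything funnels through the message channel, so the server can exit and be replaced without the client's address space ever being involved.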

  • Re:Need Clarity (Score:5, Informative)

    by SirGarlon ( 845873 ) on Wednesday May 22, 2013 @09:21AM (#43793071)

    There are probably no inherent benefits to using Hurd over Linux - and there are certainly many reasons for picking Linux over Hurd, support being just one of them.

    At this stage of Hurd's development, parent is correct. For daily desktop use, Linux is clearly mature enough and Hurd is very probably not.

    From the perspective of design, Hurd has some good ideas, as the GNU Web site explains [gnu.org]. My favorite is:

    the Hurd goes one step further in that most of the components that constitute the whole kernel are running as separate user-space processes and are thus using different address spaces that are isolated from each other. This is a multi-server design based on a microkernel. It is not possible that a faulty memory dereference inside the TCP/IP stack can bring down the whole kernel, and thus the whole system, which is a real problem in a monolithic Unix kernel architecture.

    So there are design features of the Hurd that make it attractive to developers. I can foresee the Hurd maturing to the point where embedded device makers would seriously consider it, for example.
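
    To illustrate the fault-isolation point in the quoted passage, here is a toy C sketch, added for illustration and not how the Hurd actually supervises its servers: the "server" is an ordinary child process, so a faulty memory dereference inside it kills only that process, and a supervisor can see the exit status and restart it.

        /* Toy sketch of multi-server fault isolation: a bad dereference inside
         * the "server" kills only that process; a supervisor notices and
         * restarts it instead of the whole system going down. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        static void run_server(int crash) {
            if (crash) {
                volatile int *p = NULL;
                *p = 42;                      /* faulty dereference: SIGSEGV, but only in this process */
            }
            _exit(0);                         /* a healthy run just exits cleanly */
        }

        int main(void) {
            for (int attempt = 0; attempt < 2; attempt++) {
                pid_t pid = fork();
                if (pid == 0)
                    run_server(attempt == 0); /* the first instance is made to crash */

                int status;
                waitpid(pid, &status, 0);
                if (WIFSIGNALED(status))
                    printf("server died with signal %d; restarting\n", WTERMSIG(status));
                else
                    printf("server exited normally; the rest of the system never noticed\n");
            }
            return 0;
        }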

  • Re:Need Clarity (Score:5, Informative)

    by Anonymous Coward on Wednesday May 22, 2013 @09:25AM (#43793101)

    Debian Wheezy - Linux kernel, GNU tools, 100% of software compiled for i386/64.

    Wheezy is also available for other CPU architectures, e.g. ARM and MIPS. And, as a preview, you can use it with a FreeBSD kernel on i386 and amd64 instead of the normal Linux kernel.

    Debian GNU/Hurd 2013 - Hurd kernel, GNU tools, 75% of software compiled for i386/64 (I'm ready to assume it doesn't have support for other platforms but might be wrong).

    You're right, in fact it's only i386, not i386 and amd64.

  • by Pinhedd ( 1661735 ) on Wednesday May 22, 2013 @09:48AM (#43793319)

    >I could imagine a micro kernel, for example, written in Java or similar language running in a VM that enforced fine-grained memory controls, e.g. at the object level. If you used this for memory protection between trusted (e.g. OS level) servers you could avoid the hit of reloading the CPU's page maps. User space separations could be enforced by the CPU for better security.

    Microsoft Research has done a lot of work on this exact idea. They even produced a usable operating system:

    http://en.wikipedia.org/wiki/Singularity_(operating_system) [wikipedia.org]

  • Re:Need Clarity (Score:5, Informative)

    by LoRdTAW ( 99712 ) on Wednesday May 22, 2013 @09:56AM (#43793413)

    Wheezy is a GNU/Linux operating system based on the Linux kernel. GNU/Hurd is the GNU operating system based on the Hurd kernel. People commonly call GNU/Linux simply "Linux", but Linux is just the kernel that the GNU system runs on top of.

    If you do a bit of research on the Hurd, the benefits are quite intriguing. One interesting bit is that because the Hurd is built on a microkernel, most of what a traditional kernel does moves into user space: the microkernel itself only worries about memory management, process/thread scheduling and message passing, while the rest of the services are provided by "servers" running in userspace that talk to each other via messages.

    That means users no longer need root access for simple tasks like installing software, mounting disks or accessing hardware, tasks which on a traditional system require root or sudo because the relevant code lives in kernel space; instead, users talk to the servers directly. The idea is that by moving services into user space, the need to grant users any kind of root access (setuid, su or sudo) is removed. There is still a security hierarchy with a "root" user who has full control of the system, but ordinary users never need access to that user or group, and no need for root access means less chance that the root account can be compromised. Imagine the problem of "privileged ports" disappearing because those services (ftp, http, etc.) no longer need any sort of root access: they are simply allowed to read/write certain files and directories and to access the network, so if a service is compromised, it can't gain root access.

    Some have said microkernels are not necessary, or that they impose a larger overhead in the form of message passing (a crude measurement of that overhead follows this comment). That argument carried more weight ten-plus years ago, but today we have quad-core 1.5GHz cell phones and PC hardware so fast that it has stagnated the market. Linus Torvalds famously argued against the microkernel with Andrew Tanenbaum (the Minix creator who inspired Torvalds), who is in favor of them. Eric S. Raymond once said about Plan 9 (the planned successor to Unix that failed): "There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough." Hurd may never see production use and may, like Plan 9, be relegated to a research or pet project for a handful of developers interested in operating system design. I hope it succeeds.
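
    A crude way to put a number on that message-passing overhead, added here for illustration: the sketch below compares a plain function call against a one-byte round trip through a pipe to a child process, a rough stand-in for a microkernel IPC hop. The iteration count and timing method are arbitrary, and real microkernel IPC is far more optimized than a pipe.

        /* Crude comparison of a direct call vs. a cross-process round trip
         * (one byte each way over a pair of pipes).  Numbers are only
         * illustrative of the order-of-magnitude gap. */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/wait.h>

        static volatile int sink;
        static void direct_call(int x) { sink = x; }

        static double now(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void) {
            const int iters = 100000;
            int req[2], rep[2];
            pipe(req); pipe(rep);

            if (fork() == 0) {                 /* "server": echo one byte per request */
                char c;
                close(req[1]); close(rep[0]);
                while (read(req[0], &c, 1) == 1)
                    write(rep[1], &c, 1);
                _exit(0);
            }

            double t0 = now();
            for (int i = 0; i < iters; i++)
                direct_call(i);
            double t1 = now();

            char c = 'x';
            for (int i = 0; i < iters; i++) {
                write(req[1], &c, 1);          /* request to the "server" ... */
                read(rep[0], &c, 1);           /* ... and wait for its reply */
            }
            double t2 = now();

            close(req[1]);                     /* EOF tells the server to exit */
            wait(NULL);

            printf("direct call   : %.1f ns/iter\n", (t1 - t0) / iters * 1e9);
            printf("IPC round trip: %.1f ns/iter\n", (t2 - t1) / iters * 1e9);
            return 0;
        }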

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Wednesday May 22, 2013 @10:20AM (#43793669)
    Comment removed based on user account deletion
  • by serviscope_minor ( 664417 ) on Wednesday May 22, 2013 @10:23AM (#43793695) Journal

    If they dumped Hurd now it would be a complete loss of face

    Yay, it's the daily make-shit-up-about-the-FSF/RMS thread!

    http://blog.reddit.com/2010/07/rms-ama.html [reddit.com]

    TL;DR

    http://lists.gnu.org/archive/html/bug-hurd/2010-08/msg00000.html [gnu.org]

    Seriously, is it hard to google RMS Hurd before posting crap?

  • by Ogi_UnixNut ( 916982 ) on Wednesday May 22, 2013 @10:41AM (#43793939) Homepage

    There is Debian, with its GNU/BSD version:

    http://www.debian.org/ports/kfreebsd-gnu/ [debian.org]

    And Gentoo has their variant:

    http://www.gentoo.org/proj/en/gentoo-alt/bsd/fbsd/ [gentoo.org]

    Those are the only two I know about :)

  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Wednesday May 22, 2013 @11:42AM (#43794575)

    Also, why is a microkernel OS so apparently difficult to construct?

    They aren't. There are many microkernel OSes out there that are successful, like QNX (which has made plenty of noise about how it runs nuclear reactors and such). Hell, even Windows NT was largely built around a microkernel design at one point.

    The main problem is performance. This comes from two problems - repeated kernel requests, and IPC.

    Kernel requests happen because device drivers run at application level (which provides great isolation). However, device drivers tend to require a lot of stuff at the kernel level (which is why they're typically in the kernel...): things like interrupts, physical memory access, DMA, memory allocations (both physical and virtual), and such. A driver can't do any of those things on its own (because, well, it's an application; if applications could do those things, your microkernel would be no better than DOS, and the goal is to isolate things from each other). So it becomes a kernel API call to request an interrupt, to register an event object (the interrupt handler runs in the driver server as an interrupt thread), to get memory mappings installed, etc. Each API call is ultimately a system call, and system calls are generally expensive because they require context saving and switching (some microkernel OSes use "thread migration" to mitigate this) and so forth.

    The second problem is IPC. All the servers are isolated from each other and can only communicate through IPC mechanisms, so a microkernel has to end up being a message routing and forwarding service as well. Let's say an application wants to read a file it has open. It calls read(), which traps into the kernel (it's a system call, after all); the kernel then passes the message to the server that can handle the call, the filesystem server, and switches back to user mode so that server can handle it. The filesystem server then translates it to a block and issues a read to the partition driver (which, if it's a separate server, is yet another user-kernel-user transition), which then goes to the disk driver (u-k-u). From there it goes to the bus handler (because said disk could be on SATA, IDE, USB, or FireWire), where the transfer actually happens, and then the reply winds its way back through the disk driver, the partition driver, and the filesystem driver to the application.

    Switching from user to kernel is expensive: it generally requires generating a software interrupt (the system call), which lands in the kernel's exception handler, which then has to decode the request. Switching back is generally cheaper (usually just a return instruction that sets the proper mode bits), but you're still taking several mode switches per API call.

    No big surprise, these things add up into a ton of cycles.

    Microkernel OSes have developed means to alleviate the issue, thread migration being a big one (typically a server is implemented as a thread waiting on a mailbox; it gets the message, then handles it). Thread migration means the application's thread context isn't saved but is migrated into the kernel and then passed on to the servers as necessary, so instead of having to wake up threads and run the server loop it behaves more like an expensive function call, almost like RPC except that everything executes on the thread that made the call.

    In a monolithic OS like Linux, all those messages and IPC are reduced down to function calls (usually through function pointers), so the application making the system call is the only transition: the virtual filesystem handles the call, calls the filesystem driver, which calls the partition driver, which calls the disk driver, which calls the bus driver, ... and then they return up the stack like a typical subroutine call (the contrast is sketched at the end of this comment).

    Oh, and Windows NT 3.51 did this as well. Guess what? Graphics performance sucked, which is why in NT 4 Microsoft moved the graphics driver into ring 0 (kernel mode), thus creating the ability for poorly written graphics drivers to crash the entire OS. But graphics got faster, because you're not shuffling so many messages around. I think Windows has steadily put more and more of the graphics stack into the kernel since then, as well.
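
    Here is a small added sketch of the read() path described earlier in this comment, with invented function names and no real kernel code: in the monolithic shape each layer is just a function call (often through a function pointer) in the same address space, and the comments mark where a microkernel would instead send a message to a separate server process, paying user/kernel transitions each way.

        /* Sketch of the read() path.  In a monolithic kernel each layer is an
         * ordinary (often indirect) function call in one address space; the
         * comments mark where a microkernel would send a message to a separate
         * server instead.  Names are invented for illustration. */
        #include <stdio.h>
        #include <string.h>

        static size_t bus_read(char *buf, size_t len) {        /* SATA/IDE/USB bus handler */
            const char data[] = "block data";                   /* pretend DMA filled this in */
            size_t n = len < sizeof data ? len : sizeof data;
            memcpy(buf, data, n);
            return n;
        }
        /* microkernel: message to the disk-driver server (user->kernel->user) */
        static size_t disk_read(char *buf, size_t len)      { return bus_read(buf, len); }
        /* microkernel: message to the partition server */
        static size_t partition_read(char *buf, size_t len) { return disk_read(buf, len); }
        /* microkernel: message to the filesystem server */
        static size_t fs_read(char *buf, size_t len)        { return partition_read(buf, len); }

        /* The VFS dispatches through a function pointer, much as Linux does: */
        typedef size_t (*read_op)(char *, size_t);
        static size_t vfs_read(read_op op, char *buf, size_t len) { return op(buf, len); }

        int main(void) {
            char buf[32];
            size_t n = vfs_read(fs_read, buf, sizeof buf);      /* in the monolithic case, the
                                                                   application's system call is the
                                                                   only boundary actually crossed */
            printf("read %zu bytes: %.*s\n", n, (int)n, buf);
            return 0;
        }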

  • Re: Need Clarity (Score:2, Informative)

    by Anonymous Coward on Wednesday May 22, 2013 @12:02PM (#43794741)

    You just described how language works. Things are called something because people call them that, regardless of whether or not that is fair or technically correct.

  • The performance hit comes from the hardware isolated process model used by modern microprocessors.

    Let's be clear: the performance hit comes from expensive x86/x64 trap handling. On RISC processors a trap is on the order of 30 cycles; on x86/x64 it is on the order of 2,000-3,000 cycles (a quick way to get a feel for the cost on your own machine is sketched at the end of this comment). The braindead x86 architecture is the only reason microkernels haven't already "taken over the world".

    The L4 and EROS/CapROS microkernels did a lot of small hacks to reduce the above overhead, and they got some pretty decent performance even out of x86. But contrary to your previous claim, x86 makes good microkernels very difficult to construct.
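
    For anyone who wants to see the trap cost on their own machine, here is a small Linux-specific sketch, added for illustration (the cycle figures above are the poster's estimates, and results vary a lot with CPU, kernel version and mitigations): it times a plain function call against syscall(SYS_getpid), which forces a real kernel entry on every iteration.

        /* Rough measurement of user->kernel->user transition cost: a plain
         * function call vs. syscall(SYS_getpid), which always traps into the
         * kernel.  Linux-specific and only illustrative. */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        static volatile long sink;
        static long plain_call(long x) { return x + 1; }

        static double now(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void) {
            const int iters = 200000;

            double t0 = now();
            for (int i = 0; i < iters; i++)
                sink = plain_call(i);
            double t1 = now();

            for (int i = 0; i < iters; i++)
                sink = syscall(SYS_getpid);    /* one real trap per iteration */
            double t2 = now();

            printf("function call: %.1f ns\n", (t1 - t0) / iters * 1e9);
            printf("system call  : %.1f ns\n", (t2 - t1) / iters * 1e9);
            return 0;
        }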

"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_

Working...