
Debian GNU/Hurd 2013 Released 264

jrepin writes "The GNU Hurd is the GNU project's replacement for the Unix kernel. It is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux). The Debian GNU/Hurd team announces the release of Debian GNU/Hurd 2013. This is a snapshot of Debian 'sid' at the time of the Debian 'wheezy' release (May 2013), so it is mostly based on the same sources. Debian GNU/Hurd is currently available for the i386 architecture with more than 10,000 software packages available (more than 75% of the Debian archive)."
  • Need Clarity (Score:3, Interesting)

    by CrimsonKnight13 ( 1388125 ) on Wednesday May 22, 2013 @08:54AM (#43792809) Homepage
    Would anyone mind explaining to me the key differences between Debian Wheezy & Debian GNU/Hurd 2013? What are the benefits of using GNU/Hurd 2013?
  • by Pinhedd ( 1661735 ) on Wednesday May 22, 2013 @09:34AM (#43793173)

    Microkernel operating systems aren't inherently difficult to construct but there's a very noticeable tradeoff between the performance of a hybrid/monolithic kernel and the security/stability of a microkernel.

    The performance hit comes from the hardware-isolated process model used by modern microprocessors. Whenever an application needs to do something outside its own scope, such as request additional memory, access shared resources, or interface with a device driver, it makes a system call. In a monolithic system this requires the processor to switch from the running task to the kernel task, perform the requested action, and then switch back to the running task. If the kernel needs to access the task's memory, it can do so with ease through segmentation or shared memory, because the kernel in a monolithic system has no access constraints.

    In a microkernel system the processor switches from the running task to an interprocess messaging task (part of the microkernel), which then copies the message to the requested server's buffer, switches to the server task, processes the message, switches back to the messaging task, copies the response back to the original client's buffer, and then switches back to the client task.

    Task switches are very expensive in terms of CPU cycles, so minimizing them is key to performance. Hybrid and monolithic kernels have a massive performance edge on modern processors because they perform a fraction as many task switches and memory operations per system call.

  • by putaro ( 235078 ) on Wednesday May 22, 2013 @09:42AM (#43793269) Journal

    I looked at it a while back with an eye towards doing some work on it, but I'm interested in file systems and large storage and Hurd was limited to a max of 4GB per file because all files were memory mapped all the time and Hurd only runs on 32-bit architectures. So, for me, the amount of work before I could do something interesting was pretty steep.

    I think the main reason that microkernels don't have great performance is because not much work has been put into them. I worked on Apple's Copland OS back in the mid-90's (the "failed" OS before OS X). Copland was a true microkernel and there were a number of performance optimizations that we'd put in. Had it shipped, we probably would have started making some modifications to the CPUs to support the microkernel better as well.

    A big issue for performance is switching between processes. If you have to make multiple process switches for each kernel call, that gets slow due to things like reloading the MMU tables. There are a lot of different paths that could be taken. I could imagine a microkernel, for example, written in Java or a similar language running in a VM that enforced fine-grained memory controls, e.g. at the object level. If you used this for memory protection between trusted (e.g. OS-level) servers, you could avoid the hit of reloading the CPU's page maps. Userspace separations could be enforced by the CPU for better security.

  • Managing the trust graph is why it's hard. Security is always hard. On a monolithic kernel we just say: uhm, yeah, I trust all these drivers and whatever, even though I probably shouldn't, because... well... that's how it works. GNU/HURD/HIRD has a more modular approach that pushes the drivers out of kernel space, but it has some design flaws (letting a directory node provide its own ".." -- yikes!), and the number of developers is next to non-existent.

    Furthermore, modern processors are designed for monolithic kernels. Just as x86 carries a bunch of cruft from when ASM coders wanted more complex instructions (for less / easier coding), features like multiple execution ring levels are missing. ARM gives me two rings. AMD x86 gives me two rings. Intel x86 gives me four rings! A ring is essentially a hardware-supported security level; each ring allows another "mode" of security. So, with only two rings, I can create an OS that has userspace and kernel mode. With three rings I could have kernel, trusted driver/module/interface, and userspace. The barriers required to easily create a secure microkernel don't exist: with only two rings we have to decide whether a module belongs in userspace or kernel mode -- it belongs in neither! We need the One Ring to be an intermediary between ring zero (which rules them all), give ring two to the userland, and in the darkness bind them.

    Everyone's using monoliths, so hardware makers give us two rings to make that happen. Hell, the hardware even prevents adoption of new (more secure) programming paradigms. Even the virtual memory addressing system in modern chipsets is designed to work best with C. I'm working on a more secure language with separate call and data stacks, and code-pointer overwrite protections for heap data, but the x86 / x64 / ARM platforms I'm working on are built for single stacks, and thus stack smashing via buffer overflow is an inherent design flaw. Segmented memory would be great for securing functions on a per-call basis -- swapping stacks at will, super easy co-routines -- but those bits were sacrificed to the More Memory God, and the segment registers became part of the virtual addressing system. In 16-bit code I can do some neat things that I can't do in 32-bit code without a huge headache, because the hardware doesn't support me doing it.

    So, that's why it takes so long. We're trying to do in software things the hardware doesn't support. These designs are more secure and great for modularity, but the hardware is built to run the monolith / C way faster. Note that to a program it won't matter whether the filesystem is uber-modular or the device drivers are out of ring 0. Hell, eventually I'll port a C compiler to the multi-stack design.

    Note: I don't work on GNU/HURD/HIRD, I just develop my own OSes. Yeah, I could work on Linux or other POSIX OSes, but why? That's not going to advance the state of the art in operating systems at all. A reliable design is grand for production systems, but to make the leap from the '80s, we're going to need some new hardware to help us out. Got viruses? Blame the chip maker, the language implementer (not designer), and the operating system. Seriously, they're all doing it WRONG if security is the goal. With separate call and data stacks on chip, and one ring more, you could actually have the damn security you want.

  • Re:Need Clarity (Score:5, Interesting)

    by IRWolfie- ( 1148617 ) on Wednesday May 22, 2013 @10:24AM (#43793711)
    Naturally, I assume you call Android "Linux" as well.
  • Re:Need Clarity (Score:4, Interesting)

    by paulpach ( 798828 ) on Wednesday May 22, 2013 @10:26AM (#43793739)

    I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

    Both the GNU components and the Linux kernel come with written permission to call them whatever you want. I can take all these components, distribute them even verbatim, and call the result "yomama", and I would be fully compliant with the licenses. The difference is that no one would have a clue what I was talking about.

    For this reason it is correct to call it Linux, or Android or Ubuntu or any other name (subject to trademark laws of course). Just use the name most people are familiar with so they know what you are talking about. Calling these systems GNU is merely a courtesy, a form of respect you pay to the GNU project, not a requirement in any way, and not "the right way" but merely your preference.

  • Re:Need Clarity (Score:5, Interesting)

    by umeboshi ( 196301 ) on Wednesday May 22, 2013 @10:54AM (#43794111)

    Excellent point! I'll remember that one. I just cut to the chase and call all my systems debian.

  • by Anonymous Coward on Wednesday May 22, 2013 @01:12PM (#43795433)

    This is ridiculous. Modern computers are faster than most people need. We have the cycles to go microkernel everywhere. There are phone OSes that run on a microkernel! If a phone can do it, so can your PC.

    Consider if Android moved to a microkernel. Sure, they couldn't sell A9 chips anymore for tablets, but beyond that it would be awesome. The sound server could restart whenever it crashed, and so on. Tablets and phones are a great example of something that should always be up. No one wants to reboot them.

  • Hurdles for HURD (Score:5, Interesting)

    by unixisc ( 2429386 ) on Wednesday May 22, 2013 @05:11PM (#43797623)

    The main problem for Hurd would be support for hardware that needs closed parts (firmware, binary drivers), as Hurd is probably GPLv3, which essentially forbids use of such things without disclosure to the user -- essentially killing any chance of a supported binary Nvidia driver. Still, most open source software can be ported to it.

    Yeah, that's what would make it pretty much a non-starter on the desktop, since it's probably GPLv3 -- otherwise, its rationale for existing separately from Linux is about as strong as the rationale for East Germany or North Korea existing. Since binary blobs would be banned here, users would be limited to Intel & AMD GPUs, however bad, and then, on top of that, run X and GNOME 3.whatever in fallback mode, or in normal mode if the drivers are liberated. In short, the best use of HURD, where it would be almost guaranteed to work right, is in CLI mode, if one is like RMS and lives in an emacs world. In which case, the login script could just as well drop one into emacs, and everything one does happens there.

    Just wondering if the "Libre" Linux crowd will celebrate this, or release a list of 50 reasons why Debian doesn't pass the purity test and therefore Debian Hurd can't be endorsed. I sure hope gNewSense comes up with a HURD distro based on this one.
