
First Program Executed on L4 Port of GNU/HURD

wikinerd writes "The GNU Project has been working on its own kernel, the HURD, since 1990, using the GNU Mach microkernel. However, by the time HURD-on-Mach was able to run a GUI and a browser, the developers had decided to start from scratch and port the project to the high-performance L4 microkernel. This set development back by years, but HURD developer Marcus Brinkmann has now taken a historic step: he finished the process initialization code, which enabled him to execute the first software on HURD-L4. He says: 'We can now easily explore and develop the system in any way we want. The dinner is prepared!'"
  • by TangoCharlie ( 113383 ) on Friday February 04, 2005 @03:48AM (#11570174) Homepage Journal
    What are the relative benefits of L4 vs. the Mach microkernel? Better performance? As I understand it, Mac OS X's kernel is also based on Mach... would it make any sense for Apple to look at L4?
  • Linux (Score:3, Interesting)

    by mboverload ( 657893 ) on Friday February 04, 2005 @03:56AM (#11570206) Journal
    Linus provided them a better, simpler kernel, so they basically scrapped HURD for Linux, if I remember correctly from "Revolution OS".

    BTW, Revolution OS is a great movie, even my non-nerd friends loved it. You can pick it up here: http://www.amazon.com/exec/obidos/ASIN/B0000A9GLO/revolutionos-20/103-9235316-0475036 [amazon.com]

  • Re:It hurds (Score:5, Interesting)

    by mirko ( 198274 ) on Friday February 04, 2005 @03:57AM (#11570212) Journal
    The Hurd project was started in 1983 [gnuart.net] (the link is an instrumental track featuring the speech in which Stallman explained the origin of the GNU project).
    Now, 22 years later, a definitive breakthrough has been made.
    I find this exciting:
    1. They kept working on it THAT long despite slander and scepticism such as yours
    2. The rest of the software stack (glibc, bash, etc.) is already in place
    3. With Linux, the Hurd and BSD among others, we are slowly getting back to the variety we had 20 years ago, when we exchanged BASIC listings and ported them to various platforms (Sinclair, Commodore, Amstrad, Sharp...)


    Now we will see it emerge and, why not, gain a large enough audience to become unavoidable. Twenty years from now it will simply be one option among others; it wasn't missed, it just took time to emerge, like my favourite whisky [scotchwhisky.com].
  • by Anonymous Coward on Friday February 04, 2005 @04:04AM (#11570234)
    Probably not. The Darwin kernel is really a monolithic layer on top of a microkernel, not a proper microkernel system. Historically, at least, you gave up too much speed with a proper microkernel, so monolithic kernels were de rigueur in any application outside the OS laboratory. Just because Darwin is written atop Mach, it doesn't follow that Darwin uses a microkernel; the design of Darwin is that of a monolithic kernel.

    The Hurd is an interesting design. With luck, it will demonstrate both that the performance hit is no longer of major importance, and that a true microkernel has advantages over monolithic kernels. Only time will tell, of course, if those advantages are going to be properly exploited; but I must admit to curiosity as to what might be implemented above the Hurd that would not be possible (or would be significantly harder) with Linux.

  • by js7a ( 579872 ) <`gro.kivob' `ta' `semaj'> on Friday February 04, 2005 @04:09AM (#11570251) Homepage Journal
    L4 has only seven system calls, compared to several dozen in Mach. It also fits in about 32KB, far smaller than Mach.

    But the small size doesn't make most systems faster. Running the same Unix API, L4 adds execution-time overhead relative to a standard monolithic Linux kernel, about 5% [psu.edu]. (Does anyone know the figure for Linux-on-Mach? I know it's much greater than 5%....) However, there are significant advantages having to do with debugging, maintainability, SMP, real-time guarantees, memory management, configurability, robustness, etc. Detailed discussion here [cbbrowne.com].

    From the overview: [tu-dresden.de]

    Kernels based on the L4 API are second-generation µ-kernels. They are very lean and feature fast, message-based, synchronous IPC, simple-to-use external paging mechanisms, and a security mechanism based on secure domains (tasks, clans and chiefs). The kernels try to implement only a minimal set of abstractions on which operating systems can be built flexibly.

    Other links: L4KA homepage [l4ka.org], background info [unsw.edu.au], more info with some historical L3 links [tu-dresden.de].
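
    To make the IPC model concrete, here is a rough C sketch of a client/server round trip on an L4-style kernel. This is only a simplification under my own assumptions: the types and entry points (msg_t, ipc_call, ipc_wait, ipc_reply_wait, lookup_block) are made-up stand-ins, not the real L4 bindings.

        /* Hypothetical, simplified L4-style synchronous IPC; the real L4
         * system calls and message registers look different. */
        typedef int thread_id;
        typedef struct { long label; long w1; } msg_t;

        /* Assumed kernel entry points (stand-ins, not a real API):
         * ipc_call:       send to a thread, block until it replies
         * ipc_wait:       block until any thread sends a request
         * ipc_reply_wait: reply to the caller, then block for the next request */
        extern void ipc_call(thread_id server, msg_t *m);
        extern void ipc_wait(msg_t *m, thread_id *caller);
        extern void ipc_reply_wait(thread_id caller, msg_t *m, thread_id *next);

        extern long lookup_block(long block_no);  /* server's own logic (assumed) */

        enum { OP_READ = 1 };

        /* Client: one blocking call, used like a remote procedure call. */
        long client_read(thread_id fs_server, long block_no)
        {
            msg_t m = { OP_READ, block_no };
            ipc_call(fs_server, &m);   /* request out, reply comes back in m */
            return m.w1;
        }

        /* Server: a loop that services one request per iteration. */
        void server_loop(void)
        {
            msg_t m;
            thread_id caller;
            ipc_wait(&m, &caller);                    /* first request */
            for (;;) {
                if (m.label == OP_READ)
                    m.w1 = lookup_block(m.w1);
                ipc_reply_wait(caller, &m, &caller);  /* reply + wait in one trap */
            }
        }

    The combined reply-and-wait is the point: a whole round trip takes a minimum of kernel entries, which is a large part of how L4 beats Mach's heavier, buffered messages.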

    Frankly, I think L4 is very much the right way to do things. I wish I could say the same for other parts of HURD.

  • by ghoti ( 60903 ) on Friday February 04, 2005 @04:12AM (#11570264) Homepage
    Not sure I agree. It took Linux a long time to be recognized as a viable alternative to other Unices. I don't think this can be easily done again. And I doubt that Hurd would have any noticeable advantages over Linux. It's also free, it runs the same software (99.9% or so ...), and it's a Unix (or, well, Not Unix).

    So why not have the people working on Hurd work on something new instead, or work on improving Linux? Competition can also hurt, by splitting up the resources into many small parts ...
  • Great (Score:5, Interesting)

    by Pan T. Hose ( 707794 ) on Friday February 04, 2005 @04:17AM (#11570277) Homepage Journal
    When the first programs run, it is just a matter of time before there is a functional L4 port of Debian GNU/Hurd [debian.org] (or just Debian GNU?). I really like the design of the Hurd, but what I'd like to see most are not the "POSIX capabilities" but real capabilities [cap-lore.com] as described in the 1975 paper by Jerome Saltzer and Michael Schroeder, The Protection of Information in Computer Systems [cap-lore.com]. (For those who don't know what I am talking about, I recommend starting with the excellent essay What is a Capability, Anyway? [eros-os.org] by Jonathan Shapiro, and then reading the capability theory essays [cap-lore.com] by Norman Hardy. As a side note I might add that I find it amusing that people who say there are advantages other than Digital Restrictions Management to TCPA/Palladium-like platforms usually cite security features that were already implemented in the 1970s, only better and with no strings attached. Those TCPA zealots are usually completely ignorant of the existence of such operating systems as KeyKOS [upenn.edu] or EROS [eros-os.org], with formal proofs of correctness [psu.edu] and without all of the silliness [cam.ac.uk].) Are there any plans to have a real capability-based security model available in the Hurd?
  • by tod_miller ( 792541 ) on Friday February 04, 2005 @04:21AM (#11570297) Journal
    I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows).

    Courtesy of Gooooooogle [google.com]

    The thing is, GNU/HURD will still be called... Linux. Don't shout, yes I know; the thing is, people call Linux "Linux".

    Then people say, aaahaaaa Mr Bond... it is really GNU/Linux.

    Well, I am not so sure anymore. Debian thinks so, but then why not call it GNU/Gimp/OOo/Java/Perl/Apache/*all other installed apps*/Linux?

    GNU software is great, and maybe calling Linux+GNU "Linux" is wrong, but calling any installation of the Linux kernel "Linux" is correct regardless of the other software.

    If I call my OS Linux, I do so without referring to the installed user-space apps, however necessary they might be.

    I think the person who is most keen to see HURD is Linus himself! After all, he has been waiting since 1990!

    I do hope that /. in 2006 doesn't have a new flame topic:

    HURD v Linux (and HURD will never sell with that name - IMHO)

    HURD running on top of the GNU Mach microkernel first booted in 1994 and became GNU's official kernel.

    The development of the GNU/Hurd has important emotional and practical value to GNU fans, because in the 1990s GNU had not completed any kernel and used the Linux kernel out of necessity. Thus, a number of GNU fans feel that they will have "pure GNU" only with the Hurd kernel.

    I do think that some of the GNU ramblings are a bit ungrateful to the Linux kernel. Without Linux, it would have taken another 5 years for people to wake up to *nix OSes, and we would still be where we were in 1994.

    I see one thing: given the FUD over Linux, will the same FUD establish itself over the huge sprawling software base of GNU?

    Will GNUimp get sued by Adobe? How will this GNUism evolve?

    And the final question on every /. user's lips is, in regard to anything: when will HURD run Linux? ;-)
  • Re:Linux (Score:1, Interesting)

    by Anonymous Coward on Friday February 04, 2005 @04:35AM (#11570327)
    Linux was far simpler to get started with because it was an old and proven kernel architecture. So while Linux got to run away with the lion's share of development effort, microkernels have taken a good long time getting their ins and outs fully researched and prototyped. And so now we've ended up with Linux being the far more complex project, with gobs of code for special cases and hardware and whatnot.

    It's still entirely possible that with further development L4-Hurd will become the choice for the long run, especially if a lot of the best stuff of the work on Linux - namely device drivers - can be ported over. But it definitely has hefty shoes to fill.
  • Re:Mods... (Score:5, Interesting)

    by Billly Gates ( 198444 ) on Friday February 04, 2005 @04:49AM (#11570354) Journal
    I actually started the lame Duke Nukem posts as a joke, but oddly enough I mentioned Mozilla as well. The Netscape code finally, after 2 years, had to be deleted and rewritten.

    After a slow start, Mozilla is finally ready and moving fast.

    Hopefully the same will happen with Hurd once developers come along and take it seriously. It's a self-fulfilling prophecy in free software.

    DNF? Well, it's proprietary, so who knows.
  • by Mark_MF-WN ( 678030 ) on Friday February 04, 2005 @04:54AM (#11570363)
    HURD will never ever be where Linux is...
    By that logic, no one should ever start a new software project that isn't already being met (however inadequately) by some other piece of software. Why did Linus start writing Gnu/Linux, when there were already great operating systems like Windows/Dos, and Unix/Unix?

    Frankly, I think it's a great thing that BSD and HURD will be putting some pressure on Linux to be the best. Competition makes them strong, and the cross-fertilization of ideas makes them stronger still.

    Besides, HURD may end up being superior to Linux in some domains, such as high-reliability systems (think banking servers), driver development, OS research, shared systems, and the like.

  • Re:Benchmarks? (Score:3, Interesting)

    by Chanc_Gorkon ( 94133 ) <gorkon&gmail,com> on Friday February 04, 2005 @05:00AM (#11570376)
    Neat concept, but what if your graphics driver dies? Will it respawn automagically? What if the hard drive controller's driver dies? Sometimes a neat concept ends up not being very practical. I would rather have the OS die if the hard drive controller's driver kicks off, as there's less probability of hard drive corruption. If the driver code for the hard drive dies and the kernel keeps running, would you not have lost a lot of data?
  • flame of the day (Score:3, Interesting)

    by Tumbleweed ( 3706 ) * on Friday February 04, 2005 @05:09AM (#11570401)
    Look, congrats and all, but if I'm going to run a pointless operating system, it's going to be one that's actually impressive, like MenuetOS [menuetos.org] .
  • Re:Benchmarks? (Score:2, Interesting)

    by meburke ( 736645 ) on Friday February 04, 2005 @05:10AM (#11570403)
    I may be out of date, but wasn't this Tanenbaum's contention: that a microkernel is possibly superior to a monolithic architecture because of the stability of the kernel space?

    I'm a little excited by the possibility of a solid open microkernel, but a little dismayed at the thought that I may have to re-read Tanenbaum and Wirth (Oberon Project) to figure out what's going on. Does anyone have a link to an overview/comparison of kernel architectures? If so, this old fart thanks you.
  • by dustmite ( 667870 ) on Friday February 04, 2005 @05:35AM (#11570454)

    Indeed, ouch, I find that very disappointing; I'll join the "Dilbert boycott". How patronising, too, their lame psychological manipulation strategy: "As an engineer like you..." Isn't that how you try to manipulate 6-year-olds? The BSA's tactics disgust me in general.

    I used to like Dilbert, but I cannot stand any comic strip that whores itself out to corporate interests in this way. A comic strip is not an advertising platform.

  • Re:Benchmarks? (Score:5, Interesting)

    by Malor ( 3658 ) on Friday February 04, 2005 @05:37AM (#11570457) Journal
    If the system is able to stay up without further drive access, that could potentially allow you to copy data still in RAM. If the OS simply failed instantly when the HD controller went, any data in RAM would absolutely be lost.

    Software failure is more common than hardware failure. In many cases, drivers can be restarted. Your specific example is probably the toughest one I can think of offhand... you'd have to have a copy of the HD controller driver cached somewhere to be able to restart it (since, obviously, you can't load it from the HD :)). But most drivers wouldn't be that hard to restart... video and network are two very good examples; a rough sketch of a respawning supervisor follows at the end of this comment. I have seen many 2.4 kernel crashes from what appeared to be network-driver failures. Presumably, a microkernel might have survived whatever the problem was.

    You also, of course, have the advantage of each driver/process running in its own address space, which would probably make very complex code, like the 2.6 Linux kernel, more manageable.

    Just as an offhand observation, I kind of wonder if the 2.6 Linux kernel isn't approaching the point of diminishing returns... it's gotten so complex that it's getting pretty tough to cleanly improve without blowing a lot of stuff up. A microkernel design would probably have made maintenance easier, and *probably* would have given us more stable systems by now.

    But they didn't go that way, and restarting Linux kernel development would be pretty stupid, IMO. :-)
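
    To make the respawn idea concrete, here's a minimal sketch of a driver supervisor, assuming drivers are ordinary user-space processes as in a multi-server design. This is plain POSIX, and the driver path "/servers/net-driver" is made up.

        /* Supervisor that respawns a crashed user-space driver.
         * Assumes the driver is an ordinary process, as in a
         * multi-server system; the path below is hypothetical. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            for (;;) {
                pid_t pid = fork();
                if (pid == 0) {
                    /* Child: become the driver process. */
                    execl("/servers/net-driver", "net-driver", (char *)NULL);
                    _exit(127);               /* exec failed */
                }
                if (pid < 0) {
                    perror("fork");
                    return 1;
                }
                int status;
                waitpid(pid, &status, 0);     /* block until the driver dies */
                fprintf(stderr, "driver exited (status %d), respawning\n", status);
                sleep(1);                     /* don't spin if it dies instantly */
            }
        }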
  • Re:Let's see here (Score:3, Interesting)

    by tm2b ( 42473 ) on Friday February 04, 2005 @05:57AM (#11570499) Journal
    Given the fact that some features in OS X took Apple over 12 years to get into a shipping product (development on Copland started in '89).
    Copland was abandoned, thank god.

    Mac OS X actually was shipping in 1989 - it was just called NeXTStep [wikipedia.org] back then and didn't have Classic. We actually had two NeXTstations in our house back in college in the early 90s, a cube and a slab.

    Too bad they had to gussy it up to look more like Mac OS 9 to be accepted by the faithful; it was a much more elegant design before that.
  • Re:Benchmarks? (Score:5, Interesting)

    by lokedhs ( 672255 ) on Friday February 04, 2005 @06:03AM (#11570518)
    But they didn't go that way, and restarting Linux kernel development would be pretty stupid, IMO. :-)
    In a way, you could see the new HURD as a restart of Linux kernel development, i.e. a new, better(?) kernel. And I wouldn't call it stupid, quite the contrary: new development is always good.

    The funny thing is that back when Linux was started, it could have been seen as a restart of HURD kernel development. What goes around comes around. :-)

  • Re:Let's see here (Score:5, Interesting)

    by ThousandStars ( 556222 ) on Friday February 04, 2005 @06:18AM (#11570550) Homepage
    Arguably, Apple took even longer, since it was looking at next-generation operating systems before Copland development actually started. In addition, NeXT began (IIRC) in 1986.

    Also, not only did OS X take a long time to develop, it took an even longer time to become usable. The first desktop version, 10.0, was released in Mar. 2001, and it sucked. Actually, it was worse than sucked; it was closer to a beta than a release. I consider it more of a developer's preview. The next version, 10.1, released in Sept or Oct 2001, was usable but still too slow, particularly for the hardware of that time. The first version I would call good, and good enough for the casual user, was Jaguar, 10.2.

    Most estimates of the cost of developing OS X in its present form are around $1 billion. (The cost of acquiring NeXT was $420M, plus all the development time and money. I think part of the Copland money was counted in there too.) That's a whole lot of development time, money and effort to throw out for a hypothetical, potential and probably minor speed increase. Given the further elaboration above, I agree with the parent's implied answer.

    Still, one could argue that much of the time the parent and I count as "working" on OS X didn't really count (i.e. Copland, which failed, and NeXT, much of which didn't make it into OS X), but these timelines were still important in making today's OS X what it is.

  • Re:Benchmarks? (Score:1, Interesting)

    by Anonymous Coward on Friday February 04, 2005 @06:38AM (#11570602)
    If you were in the middle of a write, it doesn't matter whether you were in kernel or user space. A BSOD or a dead driver: your disk doesn't care.
  • Re:Benchmarks? (Score:2, Interesting)

    by Anonymous Coward on Friday February 04, 2005 @06:51AM (#11570655)
    The only module that was merged into the kernel was the graphics subsystem. The NT kernel still remains a microkernel design to this day.
  • Re:Let's see here (Score:2, Interesting)

    by sp67 ( 159134 ) on Friday February 04, 2005 @06:59AM (#11570679)
    Whoa there, cowboy, hold your horses!

    1) OS X has nothing to do with Copland
    2) OS X is not based on the Classic Mac OS
    3) Classic apps and the Carbon libraries running on OS X do not directly access the Mach microkernel
    4) very few things in OS X and its apps actually know there is a Mach microkernel in there

    So given all these things, Apple would certainly NOT have to rewrite much if it were to switch microkernels. Hell, even most drivers are written at such a high level that they wouldn't be affected.

    So why wouldn't it make sense?

    I really hope you answer this.
  • by QuietRiot ( 16908 ) <cyrus&80d,org> on Friday February 04, 2005 @07:16AM (#11570725) Homepage Journal

    It took Linux a long time to be recognized as a viable alternative to other Unices.

    Your point? The world now knows there are viable alternatives, and they can be had at historically low prices.

    I don't think this can be easily done again.

    The world's got practice. It's no longer in the same state it was in '91. Back at that time, very few people had unix machines on their desk or at home. Unix ran in the computer room at work or school and you connected to the system but did little in the way of administration. Millions have been introduced to "the unix-like way of life" (TULWOF), superuser status, and have developed desires to exploit the powers of their machines in an infinite number of ways. The world is primed to be wowed again.

    I see our future selves laughing at our current fascination with Linux the way we now look back at the time we spent with DOS. We'll see someday how horribly inflexible it was compared to what's coming in the next generation of operating systems. Your post shows you know very little about the Hurd and the possibilities it will allow. One cannot currently imagine all the fun things people are going to do with it (them?) X years from now.

    And I doubt that Hurd would have any noticeable advantages over Linux.

    Exactly not the case. There are *profound* advantages [to "the Hurd"].

    If and when a usable system comes to fruition is the question. Developers. Developers. Developers. Get them excited and people will soon be doing things with their machines they never dreamt possible. There are fundamental differences that are difficult to comprehend having experienced only monolithic kernels. Granted, most of the advantages show up not so much at the user level as from a system administration perspective. Guys working "in the computer room" will probably have much more to be excited about than somebody with a user account; if you know what "having root" is like, the possibilities coming with the Hurd's architecture will be much more meaningful to you than to a typical user. That said, "typical user accounts" will be much more powerful on a box running the Hurd. Even low-level stuff like filesystems floats up into "userland", allowing you to customize your environment to a great extent without affecting other users on the same machine.

    So why not have the people working on Hurd work on something new instead, or work on improving Linux? Competition can also hurt, by splitting up the resources into many small parts ...

    Maybe more people should work on the current telephone system instead of wasting their time on VoIP. Maybe you should have worked harder at your old job instead of trying to find a new, better one? The Hurd is to Linux users what Linux is to DOS users. If Linux (as currently implemented) lives in N-space, the Hurd lives in N+1.

    Resources get split up, sure. Consider, however, how the body of developers grows every day as more and more people are introduced to TULWOF. None of us get to justify or dictate how others spend their free time. Get excited about the underdog. Linux has enough developers, don't you think? Will developments made on a new system with completely different rules positively affect Mr. Torvalds' pet project? Most certainly, I presume. I see the relationship as symbiotic. The Hurd takes on the huge body of software that has been developed during "the Linux revolution" of the last decade, and Linux takes from the Hurd (besides the jealousy that I can only predict will develop eventually) new techniques and perhaps, somehow, some type of hybrid approach to the kernel. There's no telling, really; I can only imagine good things coming to both camps. Your attitude of discouraging work on such projects, done freely by others, I see as selfish.

  • by osierra.com ( 832341 ) on Friday February 04, 2005 @07:36AM (#11570787) Homepage Journal
    This is best illustrated by the parable of the OSs and the gun:
    • With Unix you shoot yourself in the foot.
    • With DOS you keep running up against the one-bullet barrier.
    • With MS-Windows the gun blows up in your hand.
    • With MacOS it's easy to shoot yourself in the foot -- just point and shoot.
    • With SVR4 the gun isn't compatible with your foot.
    • With Linux generous programmers from around the world all join forces to help you shoot yourself in the foot for free.
    • With HURD you'll be able to shoot yourself in the foot Real Soon Now.
  • by Grab ( 126025 ) on Friday February 04, 2005 @07:58AM (#11570848) Homepage
    Too true.

    This is precisely my problem with RMS's theory of freeness. The original reason he developed his whole GNU ideology was due to not being able to get hardware interfaces to work correctly. In other words, he wanted to get some work done and was prevented from doing so by the software he needed not being available. RMS being RMS, he decided he would solve the problem himself, and found that the info he needed to hack the drivers wasn't available. Now there are two possible AND EQUALLY VALID solutions here: either the suppliers make information freely available so that RMS can hack his drivers; OR the suppliers ship decent software in the first place.

    Now granted, if all the docs and source for everything were available to everyone then the world certainly would be a better place - but ultimately what counts is having the tools you need to do your job. RMS (and hence clearly the Hurd developers) have mistaken this "freeness" for an objective in itself, when really it is just a tool to let other people achieve their objectives.

    This is as true in the physical domain as in software. If I want to do some woodwork, I buy a chisel, or borrow one off a friend. I don't give a shit if the specs for the chisel and the process by which it's made are posted on a website somewhere - I want to dig a hole in a bit of wood! If the chisel with its own website is a blunt piece of crap, I'll get one that's sharp and does the job properly.

    Frankly, the only reason RMS (and others) can sustain their GNU agenda is that they don't have (and in some cases have never had) real jobs. You know, jobs with deadlines, where you can be made redundant or fired if you're not pulling your weight on a project, and where you don't get a load of time that you can arbitrarily spend on any scheme that takes your fancy. Checking RMS's resume, his background is all Bell Labs and similar "think-tanks". In Ivory Tower Land, such principles are fine - but in the real world, we just want to do stuff, thanks all the same. If your ideologically-perfect OS doesn't work as well as Windows, or if your ideologically-perfect application doesn't work as well as the MS equivalent, I'll ditch it without a moment's hesitation.

    And this is where the Hurd people have fallen down. In their pursuit of the ultimate in gold-plating, they've utterly missed the point of delivering something that people can use. I don't think it's an exaggeration to say that Hurd will never succeed - I don't see how it can, because they've proven time after time that serving their user base is much less important to them than their ideology. And if you screw over your user base, man, you're dead.

    Grab.
  • by ehack ( 115197 ) on Friday February 04, 2005 @08:40AM (#11570971) Journal
    Actually, Multics was deployed commercially. I used it at a university site, and it was amazing how well it worked compared to the IBM timesharing systems that were the alternative in scientific computation at the time (no Vaxen back then). I think the problems were commercial rather than technical; the system was mature and deployed.
  • by Anonymous Coward on Friday February 04, 2005 @09:16AM (#11571106)
    The real power of a microkernel comes from its modularity. It is much easier to maintain several small programs than one large one.

    I recall RMS saying in Revolution OS that this turned out to be a serious disadvantage, due to the difficulty of debugging the orchestration and message flow between these many programs. It had something to do with the context surrounding a given message being subtly modified by some other event in the system, whereas a monolithic kernel like Linux can debug this kind of thing in a more straightforward manner.
  • by amightywind ( 691887 ) on Friday February 04, 2005 @09:29AM (#11571180) Journal

    It has always been a mystery to me that so many difficult problems have been solved by so many brilliant GNU teams (gas, gcc, gdb, gld, bfd, glibc, emacs, m4, grub, bash ...), yet developing a good kernel has eluded them. The people who have developed these packages are certainly skilled enough. They have also demonstrated tremendous drive. Any ideas?

  • by Bloater ( 12932 ) on Friday February 04, 2005 @10:32AM (#11571622) Homepage Journal
    > but what about the people who wanted to actually get some work done in that time frame

    Easy: they just used Linux, so where's the problem? You could use Linux and get your work done; meanwhile, they've been creating the next generation of OS technology, to which much Linux kernel code will probably be ported, so you can make an evolutionary step to a system with very, very high reliability features. Plus extreme flexibility. :) How good is that?
  • Re:Benchmarks? (Score:3, Interesting)

    by Kjella ( 173770 ) on Friday February 04, 2005 @10:46AM (#11571755) Homepage
    Contrast it with systems such as Windows and Linux, where drivers are in kernel space and it is impossible to have a stable or secure system with poor drivers; in fact most of the problems with Windows and Linux crashing are caused by buggy drivers running in kernel space.

    And that's also why drivers in Linux (at least in my experience) are far superior and "just work". Seriously, I don't want my graphics card/network card/printer/webcam/whatever driver to crap out at any time, only to be told "just restart it". Yes, they're buggy to begin with, and they take the house down, so you fix them. If I understand it correctly, the HURD kernel would have a fixed driver interface (each driver being in its own address space and all), and what would that get you? Buggy closed-source drivers. Exactly what Linus is against, and mostly for good reasons.

    Kjella
  • by master_p ( 608214 ) on Friday February 04, 2005 @11:11AM (#11572043)
    Microkernels would be helped tremendously if 80x86 CPUs had not only rings but also regions... because a faulty driver can write to anything inside its own ring and in less-privileged rings.

    The ring protection should have been inside the page table. A page descriptor has enough room for it. It could have looked like this:

    Page Descriptor with Ring Protection:

    bit 0: page present/missing
    bits 1-3: other
    bits 4-5: current ring
    bits 6-7: Read ring
    bits 8-9: Write ring
    bits 10-11: eXecute ring
    bits 12-31: page frame

    So in order for some piece of code in a certain page A to read from another page B, it should hold that A.current_ring <= B.read_ring. The same goes for write and execute access.

    This mechanism has many advantages over the current one:

    a) ring protection is per page, not per segment. Segments become totally irrelevant for protection.

    b) there is no need for ring gates. A process in ring 3 can execute ring-0 code if the kernel page allows it. There is also no need for special instructions like 'syscall'.

    c) higher-privileged code can have const data that is readable by lower-privileged code.

    d) device drivers can easily be isolated from the kernel.

    e) there is no need for a special 'virus protection' (no-execute) bit, since executable code can have its write ring set to 0 and its execute ring set to 3 (an application can run code it cannot modify, and data pages can be made non-executable).
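
    A rough C sketch of the check this layout implies (the field names are mine; this describes the proposal above, not anything existing x86 hardware does; ring 0 is the most privileged):

        #include <stdint.h>

        /* Proposed 32-bit page descriptor, laid out as above. */
        #define PAGE_PRESENT    (1u << 0)          /* bit 0: present/missing */
        /* bits 1-3: other */
        #define CUR_RING(d)     (((d) >> 4)  & 3u) /* bits 4-5: ring the page's code runs in */
        #define READ_RING(d)    (((d) >> 6)  & 3u) /* bits 6-7: least-privileged ring that may read */
        #define WRITE_RING(d)   (((d) >> 8)  & 3u) /* bits 8-9 */
        #define EXEC_RING(d)    (((d) >> 10) & 3u) /* bits 10-11 */
        #define FRAME(d)        ((d) >> 12)        /* bits 12-31: page frame */

        /* Code in page a may access page b only if a's current ring is at
         * least as privileged (numerically <=) as b's limit for that access. */
        static int may_read(uint32_t a, uint32_t b)
        {
            return (b & PAGE_PRESENT) && CUR_RING(a) <= READ_RING(b);
        }

        static int may_write(uint32_t a, uint32_t b)
        {
            return (b & PAGE_PRESENT) && CUR_RING(a) <= WRITE_RING(b);
        }

        static int may_exec(uint32_t a, uint32_t b)
        {
            return (b & PAGE_PRESENT) && CUR_RING(a) <= EXEC_RING(b);
        }

    Under this check, a code page with write ring 0 and execute ring 3 is executable by an application but writable only by ring 0, which is example (e) above.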
  • timely (Score:3, Interesting)

    by iggymanz ( 596061 ) on Friday February 04, 2005 @11:12AM (#11572057)
    These microkernels running services make much more sense on a processor with multiple cores. The main problem on a traditional single-core processor is that there is way too much OS overhead with the microkernel strategy, 25-30% compared to a monolithic kernel. So in 5 to 10 years, as the HURD moves forward glacially, like the plot of Dr. Who, this will be a good foundation for the new generation of processors.
  • Re:Let's see here (Score:3, Interesting)

    by LWATCDR ( 28044 ) on Friday February 04, 2005 @11:34AM (#11572333) Homepage Journal
    The question then becomes... why?
    After the port, would OS X be faster, more stable, more secure, or more portable?
    I would bet it could be done. The bigger question is whether it should be done.
  • Re:"mafia tactics" (Score:2, Interesting)

    by cvdwl ( 642180 ) <cvdwl someplace around yahoo> on Friday February 04, 2005 @01:45PM (#11573928)
    "Mafia tactics": you pay us or we hurt you physically.

    BSA tactics: you pay us or we hurt you financially.

    Yes, the BSA is enforcing legal licenses (albeit, IMHO, draconian, and legal only under our current business-subservient patent system), but otherwise it really is the same thing. The mafia provided a service; a group of thugs wouldn't drop by and ruin your business. The BSA prevents a group of lawyers from stopping by and ruining your business.

    The cost of litigation, even when you are in the right, is a far more dangerous weapon in this business climate than a baseball bat. A lawyer can impoverish you for years; a baseball bat is likely just to involve some transient pain and medical expenses.

  • by forlornhope ( 688722 ) on Friday February 04, 2005 @02:01PM (#11574147) Homepage
    Yes, there is a research proposal on the subject of creating a compatibility layer to run Linux drivers directly in HURD user space. They are actually deciding whether to make that the default, use OSKit, or build their own driver infrastructure. I think in the end you'll see both the Linux drivers and the OSKit drivers, with OSKit being preferred. But this is all from memory and I'm too lazy to look it up.
