First Program Executed on L4 Port of GNU/HURD

wikinerd writes "The GNU Project has been working on its own OS kernel, the HURD, since 1990, using the GNU Mach microkernel. However, when HURD-on-Mach was already able to run a GUI and a browser, the developers decided to start from scratch and port the project to the high-performance L4 microkernel. This set development back by years, but HURD developer Marcus Brinkmann has now taken a historic step: he finished the process initialization code, which enabled him to execute the first software on HURD-L4. He says: 'We can now easily explore and develop the system in any way we want. The dinner is prepared!'"
  • Mods... (Score:3, Funny)

    by GreyWolf3000 ( 468618 ) on Friday February 04, 2005 @02:46AM (#11570165) Journal
    Please mod down any posts that mention Duke Nukem: Forever.

    Except this one, of course.

  • by ggvaidya ( 747058 ) on Friday February 04, 2005 @02:46AM (#11570166) Homepage Journal
    ... if GNU/HURD comes out before Longhorn?
  • by nocomment ( 239368 ) on Friday February 04, 2005 @02:47AM (#11570169) Homepage Journal
    Maybe the second program should be a better web server.
  • by spongman ( 182339 ) on Friday February 04, 2005 @02:48AM (#11570173)
    that 1st program wasn't a web server by any chance, was it?
  • by TangoCharlie ( 113383 ) on Friday February 04, 2005 @02:48AM (#11570174) Homepage Journal
    What are the relative benefits of L4 vs the Mach Microkernel? Better performance? As I understand it, MacOS X's microkernel is also based on the Mach microkernel... would it make any sense for Apple to look at L4?
    • by Anonymous Coward
      The L4 kernel is built around the x86 (really all modern CPUs) concept of processor "rings". The kernel itself resides at ring0, drivers at ring1, and so on.

      The big thing is that in ring0, no one can interrupt you. So a kernel can run many cycles uninterrupted by applications. Mainly, this is used for scheduling and precise timing.

      What L4 seeks to do is bring all levels of processing out to ring4. By doing this, they minimize the amount of code that cannot be preempted. (I'm sure you've heard about t
      • by Anonymous Coward on Friday February 04, 2005 @04:07AM (#11570392)
        Well, this is fairly wrong but some of the truth is there.

        The x86 uses rings, but everything else just uses supervisor vs. user state (which is effectively all anyone uses the x86's rings for anyway: ring 0 (supervisor) and ring 3 (user)).

        You can be interrupted in ring 0 (on x86) or other architectures' kernel privilege level. They usually have an interrupt state flag that needs to be set but, as far as I know, this never has to do with privilege level (except that most interrupts turn it off so that you can clear the interrupt).

        There is no "ring 4". On x86 it is "ring 3" (there are 2 bits for the ring level) and other chips just have "user mode" (hence, this is the generic term for this state).

        Resource starvation and priority inversion have nothing to do with the notion of CPU privilege levels and can both occur on L4.

        The real power to a microkernel comes from the modularity. It is much easier to maintain several small programs than one large one. Plus, it means that any problem in one of them harms nobody else (and that process can later be restarted instead of bringing the whole system down like Linux would with a bug or faulty driver). Additionally, a lean microkernel can stay resident in CPU cache so all kernel code can be run without memory latency overhead (only memory access and device access causes a problem).

        The disadvantage is that the additional level of indirection in the message-passing between processes takes longer than just jumping to the kernel to execute a function and then returning (it isn't quite that simple but you get the idea).
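        As a quick illustration of the "2 bits for the ring level" point above: the current ring is readable from the low two bits of the %cs segment selector, so a program can print its own privilege level (a user program will always say 3). A minimal sketch, assuming GCC/Clang inline assembly on x86:

        /* cpl.c - print the current x86 privilege ring.
         * The Current Privilege Level lives in bits 0-1 of the CS
         * selector; user-mode programs therefore always print 3.
         * Build: cc cpl.c -o cpl (32- or 64-bit x86)
         */
        #include <stdio.h>

        int main(void)
        {
            unsigned short cs;
            __asm__("mov %%cs, %0" : "=r"(cs)); /* read the code-segment selector */
            printf("current ring (CPL) = %u\n", cs & 3);
            return 0;
        }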
        • Well, this is fairly wrong but some of the truth is there.

          A 'ring' of truth, perhaps?
        • Microkernels would be helped tremendously if 80x86 CPUs had not only rings but also regions... because a faulty driver can write to anything inside its own ring and in lower-privileged rings.

          The ring protection should have been inside the page table. A page descriptor has enough room for that. It should have been like this:

          Page Descriptor with Ring Protection:

          bit 0: page present/missing
          bits 1-3: other
          bits 4-5: current ring
          bits 6-7: Read ring
          bits 8-9: Write ring
          bits 10-11: eXecute ring
          bits 12-31: page frame
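          In C, the proposed descriptor would decode something like this (purely hypothetical, matching the layout above; no real x86 page-table entry has per-ring read/write/execute fields):

          #include <stdint.h>
          #include <stdio.h>

          /* Hypothetical "page descriptor with ring protection". */
          #define PD_PRESENT     (1u << 0)           /* bit 0: present/missing   */
          #define PD_CUR_RING(d) (((d) >> 4) & 3u)   /* bits 4-5: current ring   */
          #define PD_R_RING(d)   (((d) >> 6) & 3u)   /* bits 6-7: read ring      */
          #define PD_W_RING(d)   (((d) >> 8) & 3u)   /* bits 8-9: write ring     */
          #define PD_X_RING(d)   (((d) >> 10) & 3u)  /* bits 10-11: execute ring */
          #define PD_FRAME(d)    ((d) & ~0xfffu)     /* bits 12-31: page frame   */

          /* A ring may write a page only if it is at least as privileged
           * (numerically lower or equal) as the page's write ring. */
          static int may_write(uint32_t d, unsigned ring)
          {
              return (d & PD_PRESENT) && ring <= PD_W_RING(d);
          }

          int main(void)
          {
              /* present page at frame 0x5000, writable by rings 0-1 only */
              uint32_t d = PD_PRESENT | (1u << 8) | 0x5000u;
              printf("frame=%#x ring2 write=%d ring1 write=%d\n",
                     (unsigned)PD_FRAME(d), may_write(d, 2), may_write(d, 1));
              return 0;
          }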
    • by Anonymous Coward on Friday February 04, 2005 @03:04AM (#11570234)
      Probably not. The Darwin kernel is really a monolithic layer over the top of a microkernel, not a proper microkernel system. Historically, at least, you gave up too much speed to do a proper microkernel, so monolithic kernels were de rigueur in any application outside the OS laboratory. Just because Darwin is written atop Mach, it doesn't necessarily follow that Darwin uses a microkernel; and the design of Darwin is that of a monolithic kernel, not a microkernel.

      The Hurd is an interesting design. With luck, it will demonstrate both that the performance hit is no longer of major importance, and that a true microkernel has advantages over monolithic kernels. Only time will tell, of course, if those advantages are going to be properly exploited; but I must admit to curiosity as to what might be implemented above the Hurd that would not be possible (or would be significantly harder) with Linux.

    • by js7a ( 579872 ) <james@nOspAm.bovik.org> on Friday February 04, 2005 @03:09AM (#11570251) Homepage Journal
      L4 has only seven system calls, compared to several dozen in Mach. It fits in about 32KB, too, which is very much smaller than Mach.

      But the small size doesn't make most systems faster. Running the same Unix API, L4 adds execution-time overhead relative to the default monolithic Linux kernel, about 5% [psu.edu]. (Does anyone know the figure for Linux-on-Mach? I know it's much greater than 5%....) However, there are some significant advantages having to do with debugging, maintainability, SMP, real-time guarantees, memory management, configurability, robustness, etc. Detailed discussion here [cbbrowne.com].

      From the overview: [tu-dresden.de]

      Kernels based on the L4 API are second-generation µ-kernels. They are very lean and feature fast, message-based, synchronous IPC, simple-to-use external paging mechanisms, and a security mechanism based on secure domains (tasks, clans and chiefs). The kernels try to implement only a minimal set of abstractions on which operating systems can be built flexibly.

      Other links: L4KA homepage [l4ka.org], background info [unsw.edu.au], more info with some historical L3 links [tu-dresden.de].

      Frankly, I think L4 is very much the right way to do things. I wish I could say the same for other parts of HURD.
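      To give a feel for how small that system-call surface is, here is roughly what a request/reply to a server looks like with the L4Ka::Pistachio C convenience bindings (a sketch only; header and function names vary between L4 variants, so treat the exact API as an assumption):

      #include <l4/types.h>
      #include <l4/ipc.h>
      #include <l4/message.h>

      /* Ask a server thread one question and wait for a one-word answer. */
      L4_Word_t ask_server(L4_ThreadId_t server, L4_Word_t request)
      {
          L4_Msg_t msg;
          L4_MsgClear(&msg);               /* start with an empty message    */
          L4_MsgAppendWord(&msg, request); /* payload: one untyped word      */
          L4_MsgLoad(&msg);                /* load it into message registers */

          /* Send and block for the reply in one primitive; the kernel can
           * switch straight to the server without a scheduling round trip. */
          L4_MsgTag_t tag = L4_Call(server);
          if (L4_IpcFailed(tag))
              return 0;                    /* real code would decode the error */

          L4_MsgStore(tag, &msg);          /* copy the reply back out        */
          return L4_MsgWord(&msg, 0);      /* first word of the reply        */
      }

      Nearly everything an OS personality does on L4 - paging, drivers, filesystems - is built out of calls shaped like this, which is why seven system calls suffice.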

      • Mach is 3 MB... I think
    • Let's see here (Score:3, Insightful)

      by ravenspear ( 756059 )
      would it make any sense for Apple to look at L4?

      Given the fact that some features in OS X took Apple over 12 years to get into a shipping product (development on Copland started in 89), and given the fact that for years Apple had suffered with a horribly buggy, non standards compliant, limited system that was the Classic Mac OS, and given the fact that Darwin with the Mach kernel is an excellent open source unix system, and given the fact that huge amounts of time and money were spent getting OS 9 and Ca
      • Re:Let's see here (Score:3, Interesting)

        by tm2b ( 42473 )

        Given the fact that some features in OS X took Apple over 12 years to get into a shipping product (development on Copland started in 89).

        Copland was abandoned, thank god.

        Mac OS X actually was shipping in 1989 - it was just called NeXTStep [wikipedia.org] back then and didn't have Classic. We actually had two NeXTstations in our house back in college in the early 90s, a cube and a slab.

        Too bad they had to gussy it up to make it look more like Mac OS 9- to be accepted by the faithful, it was a much more elegant de

      • Re:Let's see here (Score:5, Interesting)

        by ThousandStars ( 556222 ) on Friday February 04, 2005 @05:18AM (#11570550) Homepage
        Arguably, Apple took even longer, since it was looking at next-generation operating systems before Copland development actually started. In addition, NeXT began (IIRC) in 1986.

        Also, not only did OS X take a long time to develop, it took an even longer time to become usable. The first desktop version, 10.0, was released in Mar. 2001, and it sucked. Actually, it was worse than sucked; it was closer to a beta than a release. I consider it more of a developer's preview. The next version, 10.1, released in Sept or Oct 2001, was usable but still too slow, particularly for the hardware at that time. The first version I would call good, and good enough for the casual user, was Jaguar, 10.2.

        Most estimates of the cost of developing OS X in its present form are around $1 billion. (The cost of acquiring NeXT was $420M, plus all the development time and money. I think part of the Copland money was counted in there too.) That's a whole lot of development time, money and effort to throw out for a hypothetical, potential and probably minor speed increase. Given the further elaboration above, I agree with the parent's implied answer.

        Still, one could argue that much of the time the parent and I count as "working" on OS X didn't really count (i.e. Copland, which failed, and NeXT, much of which didn't make it into OS X), but these timelines were still important in making today's OS X what it is.

      • by Anonymous Coward on Friday February 04, 2005 @05:53AM (#11570662)

        guess nobody bothered to g**gle it: New kernel for Darwin: [uq.edu.au]

        Apple's Darwin operating system is the open source base for Mac OS X. The underlying kernel is based on Mach. This project requires implementing a replacement for Mach based on the L4ka Pistachio kernel. Since ports of both exist on similar platforms (IA32 and PPC), most of this project will consist of building an emulation layer for Pistachio which can provide system call interfaces to match those provided by the existing kernel. In addition to implementation and testing, performance evaluation will be an important aspect of this project. Since part of the project is already done and the whole thing is quite large, an important aspect will be defining a doable subset, in conjunction with anyone doing part of it for BE. Starting early is advised on this project so no late applications will be considered.
      • Re:Let's see here (Score:5, Insightful)

        by Anonymous Coward on Friday February 04, 2005 @06:00AM (#11570683)
        given the fact that OS X represents the most compelling reason to switch to Apple computers in years, and given the fact that in just a few years the OS has amassed a comparatively huge following of developers and applications...

        Would it make sense for Apple to now completely rewrite it DOWN TO THE KERNEL LEVEL!!!

        Palm OS is on its 4th kernel. Did anyone notice? I didn't. I've been a full-time Palm developer for two years, and I couldn't even tell you which version has which kernel (except that I'm pretty sure they switched kernels when they ditched 68k processors for ARM). Did they have to "completely rewrite it down to the kernel level"? Nope, that's just the point: they did the opposite. They left it the same all the way down to the kernel level; it's just the stuff below the kernel level (and a few minor pieces above it) that they changed.

        The point is, switching out kernels is not necessarily that tough a thing. Sure, it can't be done overnight, but it doesn't force you to rewrite your entire OS.

        Much more to the point, if you research it a little, you'll find that Linux has already been ported to L4Ka [l4ka.org]. And the version of Linux that was ported still runs exactly the same software as regular Linux. If some small team of researchers can port Linux to L4Ka just to give themselves a convenient development platform, then I guess Apple could do the same thing to OS X if they had any interest in doing so.

        • Re:Let's see here (Score:3, Interesting)

          by LWATCDR ( 28044 )
          The question then becomes... why?
          After the port, would OS/X be faster, more stable, more secure, or more portable?
          I would bet it could be done. The bigger question is whether it should be done.
    • by joib ( 70841 ) on Friday February 04, 2005 @03:18AM (#11570278)
      This article [l4ka.org] explains the philosophy behind L4, and how it's different from Mach.
    • As with NextStep in its days, the MacOS X "microkernel" was mainly based on Mach because it allowed fast development. The concept is called a "monolithic microkernel": a microkernel with just one "server" doing just about everything.

      Consider how long it has taken the (handful of) HURD developers to port to L4. I don't think Apple has any ambitions in that direction. It just wants to sell a working system.
    • Yes, MacOS X is Mach-based. Mach, however, is not really a microkernel in the true sense of the word. Compared to L4's size, Mach is a huge monster. Somebody else already provided a link to an introductory (if old, from 1996) article [l4ka.org] by the L4 creator Jochen Liedtke.

      would it make any sense for Apple to look at L4?

      As a matter of fact, the L4KA group is looking into this. See, for example, this thesis [ira.uka.de] currently in progress.

      The main benefit is that L4 is actually a true microkernel. It has someth

  • Dilbert (Score:4, Funny)

    by john-gal ( 823997 ) <damnitall@operamail.com> on Friday February 04, 2005 @02:49AM (#11570178) Homepage
    Reminds me of the Dilbert comic strip where an old man waves a piece of paper around and says "At last, I have formed a strategy that is acceptable to all departments. Now if only there were a way to reproduce text from one piece of paper to many."
    • Dilbert == BSA whore (Score:4, Informative)

      by Anonymous Coward on Friday February 04, 2005 @03:38AM (#11570332)
      Reminds me of the Dilbert comic strip ...

      I've been boycotting Dilbert since its authors became BSA propaganda whores [bsaengineers.com].
      • by HuguesT ( 84078 ) on Friday February 04, 2005 @04:46AM (#11570475)
        Please think it through; Dilbert is right. How can you not support the BSA's actions?

        The BSA is making sure copyrights are respected (i.e. the law). Now the only way we are going to get reasonable copyright laws is when people realize that current terms are unacceptable. If people think that they can get away with copyright infringement, they won't put as much effort into voicing their opinion on how much they think current laws suck.

        In other words, people are now saying: "yes, copyright sucks, but it doesn't affect me; I can get all the software/music/videos I want (not need) through [P2P du jour], and I can get away scot-free".

        Moreover the BSA supports Linux. Yes it does.

        It is when companies and individuals realize how much money they have to give to BSA members like Microsoft, Adobe, Apple and others, and what little return they get, that they'll take a long hard look at Linux and all the excellent Free applications out there.

        There is no need for a vast majority of people to give their money to run Windows or Photoshop. They can get all the software they need and more and stay on the right side of the law.

        The GPL, BSD license and the like all use the underlying copyright laws. If copyright laws are not enforced then those licenses are worthless as well.

        Dilbert is supporting the BSA and so should you. The worse the BSA treats the consumer, the more strongarmed its tactics are, the more audits it conducts, the better for Free software.

        Unless you think you have a right to freely access all the copyrighted works in the world?
        • Remember Novell's legal actions against the BSA?

          BSA protects the rights of their members, they just protect the rights of some members more than those of other members.
        • by ProfitElijah ( 144514 ) <elijah@atheist.com> on Friday February 04, 2005 @09:12AM (#11571429) Homepage

          > Please think it through, Dilbert is right. How can you not support the BSA's actions?

          Easily. I support its basic principles - protecting its members' copyrights - but its actions are indefensible. Take some of the following examples. In 2003 they sent a letter to a German university demanding they take down infringing software from their site. The software? OpenOffice. Also in 2003 it attacked Massachusetts, the only state holding out against the DoJ's settlement, for adopting an open source policy when no such policy existed. In 2000 when I was working for a small company in London, we received a letter threatening to make us "the focus of a BSA investigation" if we didn't get licenses for all the pirated software in use at our offices. We had licenses for all our proprietary software - namely Informix and Solaris. In 2002 they attempted to raid kickme.to's offices in order to find information about their customers, when kickme.to is just a redirection service with no hosted content of its own. Only last month they published a whitepaper calling for the enforced cooperation of 3rd parties (i.e. ISPs) with rights holders. In other words they want the existing, much abused, DMCA subpoena and takedown notice fortified. In 2001 they said the cost of piracy was $3 billion. In 2003 they said it was $29 billion. I guess $3 billion is not enough money to make the headlines, so they had to re-engineer their spurious mechanisms to produce a better figure.

          In short the BSA is a bully, a liar and its actions are, as the grandparent poster argued, indefensible.

        • by AbbyNormal ( 216235 ) on Friday February 04, 2005 @09:51AM (#11571799) Homepage
          Have you ever received a letter from the BSA? Coincidentally, we only received one after we let one of our MS Action Pack subscriptions lapse (but purchased another). In not so many words, they threatened to force us to prove we'd destroyed our remaining unlicensed copies of software. Ironically, on the same day, we received our renewal receipt for the Action Pack. You can imagine all the "warm and fuzzies" the BSA letter gave us.

          Basically, they are a roving band of pirate lawyers looking for plunder, using Mafia tactics that border on harassment; not the "do-gooders of copyright law" you proclaim.
        • motivation matters (Score:3, Insightful)

          by jeif1k ( 809151 )
          Yes, the BSA is making sure copyrights are respected, and that indirectly helps the open source licenses. But the BSA has been hostile to open source, and the more open source catches on, the more power the BSA loses.

          The Dilbert cartoon does make one wonder about Scott Adams's attitudes towards issues of copyright and freedom, and that is a justifiable reason to criticize him if it is true. That he indirectly and accidentally may or may not have a short-term positive effect on open source licenses doesn'
  • Mach was still an active CMU project when the Hurd glacier began its very slow creep from the peaks of lofty idealism towards the throng of onlookers waiting patiently for the free unix kernel they always craved to reach them. I understand there are actually a few brave souls still standing there waiting.....
  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Friday February 04, 2005 @02:55AM (#11570203) Homepage
    The HURD kernel is often joked about, but I for one do hope that it will eventually become a viable alternative to the Linux kernel. Competition is seldom a bad thing, especially not among free software projects.
    • Not sure I agree. It took Linux a long time to be recognized as a viable alternative to other Unices. I don't think this can be easily done again. And I doubt that Hurd would have any noticeable advantages over Linux. It's also free, it runs the same software (99.9% or so ...), and it's a Unix (or, well, Not Unix).

      So why not have the people working on Hurd work on something new instead, or work on improving Linux? Competition can also hurt, by splitting up the resources into many small parts ...
      • False Dichotomy (Score:3, Insightful)

        by warrax_666 ( 144623 )
        There's no indication that the people working on HURD would work on Linux if they couldn't work on HURD. Although you might speculate that they would, they might equally well work on one of the BSDs.

        And I doubt that Hurd would have any noticeable advantages over Linux.

        Oh, it has lots of advantages, particularly for "kernel" developers and system administrators. For developers, implementing e.g. new file systems is much, much easier than in a monolithic kernel (although FUSE has helped here, it still feels

        • I look forward to the day when I can dual boot either a Hurd or Linux kernel and run *all* my *nix software, just like choosing between a 2.4/2.6 kernel today. I'd like to toy with the hurd, later, when it has something to offer, but not at the expense of running two setups. Any idea if this is possible?
      • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Friday February 04, 2005 @05:00AM (#11570507) Homepage
        So why not have the people working on Hurd work on something new instead, or work on improving Linux? Competition can also hurt, by splitting up the resources into many small parts ...

        It's true that combining all the resources and working for The Right Thing is a good idea in theory, but one that fails in practice. The problem is that people can't seem to agree on what The Right Thing is. If they did, there would be no need for politics. For now, I see a need for both competition and politics.

        (And the places that have eliminated both are usually called dictator states.)
      • by Anonymous Coward on Friday February 04, 2005 @05:07AM (#11570526)
        So why not have the people working on Hurd work on something new instead, or work on improving Linux?

        Yes sir, I'll reassign the coding monkeys to fit your wishes... wait, what was that? they are volenteering to do this and that's what they want to do? they don't have a boss? well, that's news to me bud, cause I gots a guy right here who wants me to stop the project, yup, stop it right away, cause he wants to tell the coders what they are to do on their own free time. what's that you say? bite your shiny metal ass? well, I never!
      • So why not have the people working on Hurd work on something new

        They are working on something new: a true microkernel. They are making it backwards compatible so that people can easily use it.

        Competition can also hurt, by splitting up the resources into many small parts ...

        But since nobody knows ahead of time which part is the right one, we have to bear that cost. Microsoft and the Soviet Union believed that they were smart enough to predict everything. The Soviet Union also blossomed initially be
      • by QuietRiot ( 16908 ) <cyrus.80d@org> on Friday February 04, 2005 @06:16AM (#11570725) Homepage Journal

        It took Linux a long time to be recognized as a viable alternative to other Unices.

        Your point? The world now knows there are viable alternatives, and they can be had at historically low prices.

        I don't think this can be easily done again.

        The world's got practice. It's no longer in the same state it was in '91. Back at that time, very few people had unix machines on their desk or at home. Unix ran in the computer room at work or school and you connected to the system but did little in the way of administration. Millions have been introduced to "the unix-like way of life" (TULWOF), superuser status, and have developed desires to exploit the powers of their machines in an infinite number of ways. The world is primed to be wowed again.

        I see our future selves laughing at our current fascination with Linux the way we now look back at the time we spent with DOS. We'll see someday how horribly inflexible it was compared to what's coming in this next generation of operating systems. Your post shows you know very little about the Hurd and the possibilities it will allow. One cannot currently imagine all the fun things people are going to do with it (them?) X years from now.

        And I doubt that Hurd would have any noticeable advantages over Linux.

        Exactly not the case. There are *profound* advantages [to "the Hurd"].

        If and when a usable system comes to fruition is the question. Developers. Developers. Developers. Get them excited and you'll soon be doing things with your machine you'll never even have considered possible. Maybe not yourself, but people will be doing things they never dreamt possible. There are fundamental differences that are difficult to comprehend having experienced only monolithics. Granted, most of the advantages are not so much at the user level, but from a system administration perspective. Guys working "in the computer room" will probably have much more to be excited about than somebody with a user account. If you know what "having root" is like, the possibilities coming with the Hurd's architecture will be much more meaningful than they would to a typical user. However "typical user accounts" will be much more powerful on a box running the Hurd. Even low level stuff like filesystems floats up into "userland" allowing you the ability to customize your environment to great extents without affecting other users on the same machine.

        So why not have the people working on Hurd work on something new instead, or work on improving Linux? Competition can also hurt, by splitting up the resources into many small parts ...

        Maybe more people should work on the current telephone system instead of wasting their time with VoIP. Maybe you should have worked harder at your old job instead of trying to find a new, better job? The Hurd is to Linux users like Linux is to DOS users. If Linux (as currently implemented) lives in N-space, the Hurd lives in N+1.

        Resources get split up; sure. Consider however how the body of developers grows every day as more and more are introduced to TULWOF. None of us get to justify or dictate how others spend their free time. Get excited about the underdog. Linux has enough developers, don't you think? Will developments made on a new system with completely different rules positively affect Mr. Torvalds' pet project? Most certainly I presume. I see the relationship as symbiotic. The Hurd takes on the huge body of software that has been developed due to "the Linux revolution" of the last decade and Linux takes from the Hurd (besides the jealousy that I can only predict will develop eventually) new techniques and perhaps, somehow, some type of hybrid approach to the kernel. There's no telling really; I can only imagine good things coming to both camps. Your attitude of discouraging work on such projects, done freely by others, I see as sel

  • Linux (Score:3, Interesting)

    by mboverload ( 657893 ) on Friday February 04, 2005 @02:56AM (#11570206) Journal
    Linus provided them a better, simpler kernel, so they basically scrapped HURD for Linux, if I remember correctly from "Revolution OS"

    BTW, Revolution OS is a great movie, even my non-nerd friends loved it. You can pick it up here: http://www.amazon.com/exec/obidos/ASIN/B0000A9GLO/revolutionos-20/103-9235316-0475036 [amazon.com]

    • Re:Linux (Score:2, Informative)

      by angelfly ( 746018 )
      yeah, I saw Rev OS, it wasn't scrapped. They just designed their kernel in a way which makes debugging hard, and thus development takes a really long time. Lately it's looking good though, from what I've seen of Debian Hurd
  • Wikipedia link (Score:5, Informative)

    by ceeam ( 39911 ) on Friday February 04, 2005 @03:04AM (#11570230)
  • by Leffe ( 686621 ) on Friday February 04, 2005 @03:09AM (#11570247)
    Let me quote from the l4-hurd mailing list (posted 02 feb):

    At Wed, 02 Feb 2005 01:12:44 -0500,
    "B. Douglas Hilton" wrote:
    > So, how much longer before Python will build on L4-Hurd? :-)

    If you mean "building" as in "compiling it", that should be possible as soon as we ported the dynamic linker, or at least made sure the dynamic linker "builds" (ie, "compiles"), if python can be cross-build.

    If you mean "building" as in "compiles _and runs_", then we are talking about a much longer time-frame :)

    With my glibc port, I can already build simple applications, but most won't run because they need a filesystem or other gimmicks (like, uhm, fork and exec), and I only have stubs (dummy functions which always return an error) for that now.

    So, for the time being, a measure of progress is what functionality is implemented: drivers, filesystem, signal processing, process management, etc. Luckily, we have so much existing knowledge to draw from (the Hurd on Mach source code, for example), that I am carefully optimistic that progress can kick in very quickly once we have sorted out some fundamental (low-level) design issues and got a sufficient understanding of the details of the system.

    Thanks,
    Marcus


    I might as well quote this too, which I think is what this story refers to (posted around 27 Jan):


    Hi,

    with the changes of today, the glibc patch set in CVS supports startup and initialization up to the invocation of the main() function - this means important things like malloc() work.

    Of course, there is a lot of cheating going on, and the implementation is full of gaps and stubs. But this step forward means that we can do easy testing by just writing a program, linking it to glibc, and running it as the "bootstrap filesystem" server.

    TLS/TSD seems to work without any problems - important things like the default locale are set up correctly, and thus strerror() works. __thread variables are supported, glibc uses them itself.

    There were a couple of fixes and extensions needed in wortel and the startup code, but it wasn't so much. My understanding of the glibc code has reached an all-time high (not that this required much ;)

    If you want to reproduce all this, you need to configure, make and install the software as usual. It is important that your compiler can find the installed header files afterwards! Only then can you reconfigure your source with "--enable-libc" and try to build the C library according to the README.

    Static linking against this new libc should be possible after (manual) installation, I guess, but I always use a very hackish and long gcc command line to cheat myself into a binary that I can then use as "filesystem server" (the last one in the list) in the GRUB configuration. See the README for details.

    I think that this basically concludes the first step of the initial bootstrap phase. By being able to link a program against glibc, and by booting all the way up to that program's main() function, we can now easily explore and develop the system in any way we want.

    The dinner is prepared! :)

    Thanks,
    Marcus


    This uses a lot of advanced terms I don't understand, though, but I don't mind as long as someone does and writes an article :)

    Still a long way to go. Not much one can do except wait... or send in patches if you have kernel hacking experience!
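    For the curious, the "first software" being celebrated is essentially a freestanding C program along these lines - a hypothetical sketch, linked statically against the ported glibc and booted by GRUB in the "bootstrap filesystem server" slot, exercising only what the mails above report as working (startup to main(), malloc(), strerror(), __thread TLS). fork(), exec() and real files are still stubs, and where the output actually lands depends on the debug-console setup:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>

    static __thread int tls_counter; /* "__thread variables are supported" */

    int main(void)
    {
        char *buf = malloc(64);      /* malloc() works once startup completes */
        if (buf == NULL)
            return 1;
        tls_counter++;
        snprintf(buf, 64, "hello from HURD-L4, tls=%d", tls_counter);
        printf("%s\n", buf);
        /* the default locale is set up, so strerror() works */
        printf("strerror(ENOSYS) = %s\n", strerror(ENOSYS));
        free(buf);
        for (;;)                     /* nothing to exit to yet, so just spin */
            ;
    }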
    • by The_Dougster ( 308194 ) on Friday February 04, 2005 @03:56AM (#11570366) Homepage
      Wow, something I wrote actually trickled back into /. Amazing. I was just joking about Python, of course.

      L4-Hurd is pretty nifty, I think. Of course I run Gentoo and whatnot personally for the usability aspects, but I've been following the L4-Hurd port for a while now and this is an amazing little bit of news.

      I can't wait to start experimenting with the new features. This is really cool.

      Here's a Coral Cache link to the HurdOnL4 [nyud.net] wiki page which I set up last summer. It's slightly out of date, but provides a lot of background on what's going on and some basic information about the build and boot process.

      When you retrieve the CVS sources, read the README and all the docs because they contain the most up-to-date information available about building the system.

  • When I see X, KDE, & Postgres ported to the Hurd, then I'll believe it.
  • Great (Score:5, Interesting)

    by Pan T. Hose ( 707794 ) on Friday February 04, 2005 @03:17AM (#11570277) Homepage Journal
    When the first programs run, it is just a matter of time before there is a functional L4 port of Debian GNU/Hurd [debian.org] (or just Debian GNU?). I really like the design of the Hurd, but what I'd like to see the most are not the "POSIX capabilities" but the real capabilities [cap-lore.com] as described in the 1975 paper by Jerome Saltzer and Michael Schroeder, The Protection of Information in Computer Systems [cap-lore.com]. (For those who don't know what I am talking about, I recommend starting from the excellent essay What is a Capability, Anyway? [eros-os.org] by Jonathan Shapiro, and then reading the capability theory essays [cap-lore.com] by Norman Hardy. As a side note I might add that I find it amusing that people who claim there are advantages to TCPA/Palladium-like platforms beyond Digital Restrictions Management usually quote security features that were already implemented in the 1970s, only better and with no strings attached. Those TCPA zealots are usually completely ignorant of the existence of such operating systems as KeyKOS [upenn.edu] or EROS [eros-os.org] with formal proofs of correctness [psu.edu] without all of the silliness [cam.ac.uk].) Are there any plans to have a real capability-based security model available in the Hurd?
  • by anti-NAT ( 709310 ) on Friday February 04, 2005 @03:19AM (#11570286) Homepage

    How much time would it take to port it over?

  • by jeif1k ( 809151 ) on Friday February 04, 2005 @03:31AM (#11570323)
    A commercial company might take old code, give it a new name, and ship it as a brand-new thing. But GNU starts a brand-new, hot project based on a better microkernel architecture, and they use for it a name that people already associate with failure.

    The L4Ka-based kernel is a new project that sounds like it has a lot of promise and may address problems that both Linux and commercial kernels have with modularity and extensibility. This new kernel should get a snazzy new name to get that message across.
  • flame of the day (Score:3, Interesting)

    by Tumbleweed ( 3706 ) * on Friday February 04, 2005 @04:09AM (#11570401)
    Look, congrats and all, but if I'm going to run a pointless operating system, it's going to be one that's actually impressive, like MenuetOS [menuetos.org] .
  • timely (Score:3, Interesting)

    by iggymanz ( 596061 ) on Friday February 04, 2005 @10:12AM (#11572057)
    These microkernels running services make much more sense on a processor with multiple cores - the main problem on a traditional "single-threaded" processor is that there is way too much OS overhead (25-30%) with the microkernel strategy, compared to a monolithic kernel. So in 5 to 10 years, as the HURD moves forward glacially like the plot of Dr. Who, this will be a good foundation for the new generation of processors.
  • by Animats ( 122034 ) on Friday February 04, 2005 @12:40PM (#11573863) Homepage
    These guys started with L4, which has been used to run a modified Linux for years. [tu-dresden.de] About a half dozen other operating systems have been ported to run on top of L4. So it's not that big a deal.

    The Hurd website, wiki, etc. haven't been updated in years.

    At a more fundamental level, there's a design disaster in the making here. L4 seems to make the same mistake Mach made with interprocess communication - unidirectional IPC. This design error is called "what you want is a subroutine call, but what the OS gives you is an I/O operation". This is a crucial design decision. Botch this and your microkernel performance will suck.

    QNX gets it right - the basic message-passing primitive is MsgSend, which sends a message and blocks until a reply is received (or a timeout occurs). The implementation immediately transfers control to the destination process (assuming it's waiting for a message), without a trip through the scheduler. That's crucial to getting good performance on real work from a microkernel.

    Mach botched this. Mach IPC is pipe-like, with one-way transmission. And that's a major reason Mach was a flop. (Note that the version of Mach used for the MacOS isn't the final "pure Mach", it's a Berkeley BSD UNIX kernel with Mach extensions.)

    Why does this matter so much? Because if send doesn't block, when you send, control continues in the sending process. Later, presumably, the sending process blocks waiting for a reply. But who runs next? Whoever was ready to run next. If you're CPU-bound and there are processes ready to run, every time you do a message pass, you lose your turn and your quantum, and have to wait. So programs with extensive IPC activity grind to a crawl on a loaded system.

    But if message passing is tightly integrated with scheduling, a message pass doesn't hurt your thread's CPU access. Control continues in the new process with the same quantum (and in QNX, the same priority by default, which avoids priority inversions in real time work). Now message passing is only slightly more expensive than a subroutine call, and can be used for everything.

    There is a big literature about Mach, Minix and related underperforming academic microkernels, while the key architectural details of the commercial microkernels that work (basically QNX and IBM's VM) aren't well publicized. But you can dig the information out if you work at it.
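    To make the contrast concrete, here is the QNX Neutrino pattern being praised, sketched with the <sys/neutrino.h> primitives (connection setup and most error handling elided):

    #include <sys/neutrino.h>
    #include <string.h>

    /* Client: MsgSend() transmits the request AND blocks for the reply in
     * one kernel call, letting the kernel hand the CPU directly to the
     * server - the property the post above says Mach-style one-way IPC lacks. */
    int query_server(int coid)           /* coid from ConnectAttach() */
    {
        char req[] = "ping";
        char rep[16];
        if (MsgSend(coid, req, sizeof req, rep, sizeof rep) == -1)
            return -1;
        return strcmp(rep, "pong") == 0;
    }

    /* Server: receive, handle, reply; MsgReply() is what unblocks the client. */
    void serve_once(int chid)            /* chid from ChannelCreate() */
    {
        char req[16];
        int rcvid = MsgReceive(chid, req, sizeof req, NULL);
        if (rcvid > 0)
            MsgReply(rcvid, 0, "pong", 5);
    }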

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...