First Program Executed on L4 Port of GNU/HURD 596
wikinerd writes "The GNU Project has been working on a new OS kernel called HURD since 1990, using the GNU Mach microkernel. However, when HURD-Mach was able to run a GUI and a browser, the developers decided to start from scratch and port the project to the high-performance L4 microkernel. As a result, development was set back by years, but now HURD developer Marcus Brinkmann has made a historic step and finished the process initialization code, which enabled him to execute the first software on HURD-L4. He says: 'We can now easily explore and develop the system in any way we want. The dinner is prepared!'"
Mach Microkernel vs L4 (Score:4, Interesting)
Linux (Score:3, Interesting)
BTW, Revolution OS is a great movie, even my non-nerd friends loved it. You can pick it up here: http://www.amazon.com/exec/obidos/ASIN/B0000A9GLO/revolutionos-20/ [amazon.com]
Re:It hurds (Score:5, Interesting)
Now, 22 years later, a definitive breakthrough has been performed.
I find this exciting.
Now we will see it emerge and, why not, gain a sufficient audience to become unavoidable. Twenty years from now, it'll look like an opportunity as good as any other, not one that was missed; it just took time to emerge, like my favourite whisky [scotchwhisky.com].
Re:Mach Microkernel vs L4 (Score:4, Interesting)
The Hurd is an interesting design. With luck, it will demonstrate both that the performance hit is no longer of major importance, and that a true microkernel has advantages over monolithic kernels. Only time will tell, of course, if those advantages are going to be properly exploited; but I must admit to curiosity as to what might be implemented above the Hurd that would not be possible (or would be significantly harder) with Linux.
Re:Mach Microkernel vs L4 (Score:5, Interesting)
But the small size doesn't make most systems faster. Running the same Unix API, L4 adds execution-time overhead of about 5% [psu.edu] compared with the standard monolithic Linux kernel. (Does anyone know the figure for Linux-on-Mach? I know it's much greater than 5%....) However, there are some significant advantages having to do with debugging, maintainability, SMP, real-time guarantees, memory management, configurability, robustness, etc. Detailed discussion here [cbbrowne.com].
From the overview: [tu-dresden.de]
Other links: L4KA homepage [l4ka.org], background info [unsw.edu.au], more info with some historical L3 links [tu-dresden.de].
Frankly, I think L4 is very much the right way to do things. I wish I could say the same for other parts of HURD.
Re:Competition is a Good Thing (Score:3, Interesting)
So why not have the people working on Hurd work on something new instead, or on improving Linux? Competition can also hurt, by splitting the resources into many small parts.
Great (Score:5, Interesting)
In the words of Linus... (Score:2, Interesting)
Courtesy of Gooooooogle [google.com]
The thing is, GNU/HURD will still be... Linux. Don't shout, yes I know; the thing is, people call Linux "Linux".
Then people say, aaahaaaa Mr Bond... it is really GNU/Linux.
Well, I am not so sure anymore. Debian thinks so, but why not call it GNU/Gimp/OOo/Java/Perl/Apache/*all other installed apps*/Linux?
Yes, GNU software is great, and calling Linux+GNU 'Linux' may be wrong, but calling any installation of the Linux kernel 'Linux' is correct regardless of the other software.
If I call my OS Linux, I do so without referring to the installed user-space apps, however necessary they might be.
I think the person who is most keen to see HURD is Linus himself! After all, he has been waiting since 1990!
I do hope that
HURD v Linux (and HURD will never sell with that name - IMHO)
HURD running on top of the GNU Mach microkernel first booted in 1994 and became GNU's official kernel.
The development of the GNU/Hurd has important emotional and practical value to GNU fans, because in the 1990s GNU had not completed any kernel and used the Linux kernel out of necessity. Thus, a number of GNU fans feel that they will have "pure GNU" only with the Hurd kernel.
I do think that some of the GNU ramblings are a bit ungrateful to the Linux kernel. Without Linux, it would have taken another five years for people to wake up to *nix-type OSes, and five years from now we would be where we actually were in 1994.
I see one thing: with all the FUD over Linux, will the same FUD establish itself over the huge sprawling software base of GNU?
Will GNUimp get sued by Adobe? How will this GNUism evolve?
And the final question on every
Re:Linux (Score:1, Interesting)
It's still entirely possible that with further development L4-Hurd will become the choice for the long run, especially if a lot of the best stuff of the work on Linux - namely device drivers - can be ported over. But it definitely has hefty shoes to fill.
Re:Mods... (Score:5, Interesting)
After a slow start Mozilla is finally ready and moving fast.
Hopefully the same will happen with Hurd as soon as developers come and take it seriously. It's a self-fulfilling prophecy in free software.
DNF? Well, it's proprietary, so who knows.
Re:Well worth the wait ... (Score:5, Interesting)
Frankly, I think it's a great thing that BSD and HURD will be putting some pressure on Linux to be the best. Competition makes them strong, and the cross-fertilization of ideas makes them stronger still.
Besides, HURD may end up being superior to Linux in some domains, such as high-reliability systems (think banking servers), driver development, OS research, shared systems, and the like.
Re:Benchmarks? (Score:3, Interesting)
flame of the day (Score:3, Interesting)
Re:Benchmarks? (Score:2, Interesting)
I'm a little excited by the possibility of a solid Open MK, but a little dismayed at the thought that I may have to re-read Tanenbaum and Wirth (Oberon Project) to figure out what's going on. Does anyone have a link to an overview/comparison of kernel architectures? If so, this old fart thanks you.
Re:Dilbert == BSA whore (Score:5, Interesting)
Indeed, ouch, I find that very disappointing; I'll join the "Dilbert boycott". How patronising, too, their lame psychological manipulation strategy: "As an engineer like you ..." .. isn't that how you try to manipulate 6-year-olds? The BSA's tactics disgust me in general.
I used to like Dilbert, but I cannot stand any comic strip that whores itself out to corporate interests in this way. A comic strip is not an advertising platform.
Re:Benchmarks? (Score:5, Interesting)
Software failure is more common than hardware failure. In many cases, drivers can be restarted. Your specific example is probably the toughest one I can think of offhand... you'd have to have a copy of the HD controller driver cached somewhere to be able to restart it (since, obviously, you can't load it from the HD).
You also, of course, have the advantage of each driver/process running in its own address space, which would probably make very complex code, like the 2.6 Linux kernel, more manageable.
Just as an offhand observation, I kind of wonder if the 2.6 Linux kernel isn't approaching the point of diminishing returns... it's gotten so complex that it's getting pretty tough to cleanly improve without blowing a lot of stuff up. A microkernel design would probably have made maintenance easier, and *probably* would have given us more stable systems now.
But they didn't go that way, and restarting Linux kernel development would be pretty stupid, IMO.
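The restart-instead-of-panic idea in the posts above can be sketched as a tiny supervisor loop. This is a toy model in Python, not Hurd code; every name in it is made up for illustration:

```python
# Toy model of a microkernel-style driver supervisor: each "driver" is
# an isolated task, and when one faults the supervisor restarts it
# instead of taking the whole system down.

class DriverCrash(Exception):
    """Stands in for a driver fault that a microkernel could contain."""

def flaky_network_driver(state):
    # Simulated driver: crashes on its first run, works after a restart.
    if state["boots"] == 0:
        state["boots"] += 1
        raise DriverCrash("oops")
    return "driver running"

def supervise(driver, state, max_restarts=3):
    """Run a driver, restarting it up to max_restarts times on failure."""
    for _attempt in range(max_restarts + 1):
        try:
            return driver(state)
        except DriverCrash:
            continue  # contained failure: restart rather than panic
    return "driver gave up"

print(supervise(flaky_network_driver, {"boots": 0}))  # driver running
```

In a monolithic kernel the equivalent of `DriverCrash` is usually a kernel oops; the point of pushing drivers into user space is that the `except` branch becomes possible at all.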
Re:Let's see here (Score:3, Interesting)
Mac OS X actually was shipping in 1989 - it was just called NeXTStep [wikipedia.org] back then and didn't have Classic. We actually had two NeXTstations in our house back in college in the early 90s, a cube and a slab.
Too bad they had to gussy it up to make it look more like Mac OS 9 to be accepted by the faithful; it was a much more elegant design before that.
Re:Benchmarks? (Score:5, Interesting)
The funny thing is that back when Linux was started, it could have been seen as a restart of the HURD kernel development. What goes around comes around. :-)
Re:Let's see here (Score:5, Interesting)
Also, not only did OS X take a long time to develop, it took an even longer time to become usable. The first desktop version, 10.0, was released in Mar. 2001, and it sucked. Actually, it was worse than sucked; it was closer to a beta than a release. I consider it more of a developer's preview. The next version, 10.1, released in Sept or Oct 2001, was usable but still too slow, particularly for the hardware at that time. The first version I would call good, and good enough for the casual user, was Jaguar, 10.2.
Most estimates of the cost of developing OS X in its present form are around $1 billion. (Cost of acquiring NeXT was $420M, plus all the development time and money. I think part of the Copland money was counted in there too.) That's a whole lot of development time, money and effort to throw out for a hypothetical, potential and probably minor speed increase. Given the further elaboration above, I agree with the parent's implied answer.
Still, one could argue that much of the time the parent and I count as "working" on OS X didn't really count (i.e. Copland, which failed, and NeXT, much of which didn't make it into OS X), but these timelines were still important in making today's OS X what it is.
Re:Benchmarks? (Score:1, Interesting)
Re:Benchmarks? (Score:2, Interesting)
Re:Let's see here (Score:2, Interesting)
1) OS X has nothing to do with Copland
2) OS X is not based on the Classic Mac OS
3) Classic apps and the Carbon libraries running on OS X do not directly access the Mach microkernel
4) very few things in OS X and its apps actually know there is a Mach microkernel in there
So given all these things, Apple would certainly NOT have to rewrite much if it were to switch microkernels. Hell, even most drivers are written at such a high level that they wouldn't be affected.
So why wouldn't it make sense?
I really hope you answer this.
Parent needs a glass hat (Score:5, Interesting)
Your point? The world now knows there are viable alternatives, and they can be had for historical lows on price.
The world's got practice. It's no longer in the same state it was in '91. Back at that time, very few people had unix machines on their desk or at home. Unix ran in the computer room at work or school and you connected to the system but did little in the way of administration. Millions have been introduced to "the unix-like way of life" (TULWOF), superuser status, and have developed desires to exploit the powers of their machines in an infinite number of ways. The world is primed to be wowed again.
I see our future selves laughing at our current fascination with Linux like we now look at the time we spent with DOS. We'll see someday how horribly inflexible it was compared to what's coming in this next generation of operating systems. Your post shows you know very little about the Hurd and what possibilities it will allow. One cannot currently imagine all the fun things people are going to do with it (them?) X years from now.
Exactly not the case. There are *profound* advantages [to "the Hurd"].
The question is if and when a usable system comes to fruition. Developers. Developers. Developers. Get them excited and you'll soon be doing things with your machine you'd never even have considered possible. Maybe not you yourself, but people will be doing things they never dreamt possible. There are fundamental differences that are difficult to comprehend having experienced only monolithic kernels. Granted, most of the advantages are not so much at the user level as from a system administration perspective. Guys working "in the computer room" will probably have much more to be excited about than somebody with a user account. If you know what "having root" is like, the possibilities coming with the Hurd's architecture will be much more meaningful to you than they would be to a typical user. However, "typical user accounts" will be much more powerful on a box running the Hurd. Even low-level stuff like filesystems floats up into "userland", letting you customize your environment to a great extent without affecting other users on the same machine.
Maybe more people should work on the current telephone system instead of wasting their time with VoIP. Maybe you should have worked harder at your old job instead of trying to find a new, better job? The Hurd is to Linux users like Linux is to DOS users. If Linux (as currently implemented) lives in N-space, the Hurd lives in N+1.
Resources get split up; sure. Consider however how the body of developers grows every day as more and more are introduced to TULWOF. None of us get to justify or dictate how others spend their free time. Get excited about the underdog. Linux has enough developers, don't you think? Will developments made on a new system with completely different rules positively affect Mr. Torvalds' pet project? Most certainly, I presume. I see the relationship as symbiotic. The Hurd takes on the huge body of software that has been developed due to "the Linux revolution" of the last decade, and Linux takes from the Hurd (besides the jealousy that I can only predict will develop eventually) new techniques and perhaps, somehow, some type of hybrid approach to the kernel. There's no telling really; I can only imagine good things coming to both camps. Your attitude of discouraging work on such projects, done freely by others, I see as selfish.
Re:More interested in development (Score:5, Interesting)
Re:More interested in development (Score:3, Interesting)
This is precisely my problem with RMS's theory of freeness. The original reason he developed his whole GNU ideology was due to not being able to get hardware interfaces to work correctly. In other words, he wanted to get some work done and was prevented from doing so by the software he needed not being available. RMS being RMS, he decided he would solve the problem himself, and found that the info he needed to hack the drivers wasn't available. Now there are two possible AND EQUALLY VALID solutions here: either the suppliers make information freely available so that RMS can hack his drivers; OR the suppliers ship decent software in the first place.
Now granted, if all the docs and source for everything was available to everyone then the world certainly would be a better place - but ultimately what counts is having the tools you need to do your job. RMS (and hence clearly the Hurd developers) have confused this "freeness" with being an objective in itself, when really it is just a tool to let other people achieve their objectives.
This is as true in the physical domain as in software. If I want to do some woodwork, I buy a chisel, or borrow one off a friend. I don't give a shit if the specs for the chisel and the process by which it's made are posted on a website somewhere - I want to dig a hole in a bit of wood! If the chisel with its own website is a blunt piece of crap, I'll get one that's sharp and does the job properly.
Frankly, the only reason RMS (and others) can sustain their GNU agenda is that they don't have (and in some cases have never had) real jobs. You know, jobs with deadlines, where you can be made redundant or fired if you're not pulling your weight on a project, and where you don't get a load of time that you can arbitrarily spend on any scheme that takes your fancy. Checking RMS's resume, his background is all Bell Labs and similar "think-tanks". In Ivory Tower Land, such principles are fine - but in the real world, we just want to do stuff, thanks all the same. If your ideologically-perfect OS doesn't work as well as Windows, or if your ideologically-perfect application doesn't work as well as the MS equivalent, I'll ditch it without a moment's hesitation.
And this is where the Hurd people have fallen down. In their pursuit of the ultimate in gold-plating, they've utterly missed the point of delivering something that people can use. I don't think it's an exaggeration to say that Hurd will never succeed - I don't see how it can, because they've proven time after time that serving their user base is much less important to them than their ideology. And if you screw over your user base, man, you're dead.
Grab.
Re:Why Multics went under (Score:4, Interesting)
Re:Mach Microkernel vs L4 (Score:2, Interesting)
I recall RMS saying in Revolution OS that this turned out to be a serious disadvantage, due to the difficulty of debugging the orchestration and the flow of messages between these many programs. It had something to do with a context tied to a given message being subtly modified by some other event in the system, whereas monolithic kernels like Linux can debug this type of thing in a more straightforward manner.
Why has a GNU kernel been so elusive? (Score:3, Interesting)
It has always been a mystery to me that, while so many difficult problems have been solved by so many brilliant GNU teams (gas, gcc, gdb, gld, bfd, glibc, emacs, m4, grub, bash ...), developing a good kernel has eluded them. The people that have developed these packages are certainly skilled enough. They have also demonstrated tremendous drive. Any ideas?
Re:More interested in development (Score:3, Interesting)
Easy, they just used Linux, so where's the problem? Now you could use Linux and get your work done, meanwhile they've been creating the next generation of OS technology that much Linux kernel code will probably be ported to so you can make an evolutionary step to a system with very, very high reliability features. Plus extreme flexibility.
Re:Benchmarks? (Score:3, Interesting)
And that's also why drivers in Linux (at least in my experience) are far superior and "just work". Seriously, I don't want my GFX card/network card/printer/webcam/whatever driver to crap out at any time, only to be told "just restart it". Yes, they're buggy to begin with, but they take the house down, so you fix them. If I understand it correctly, the HURD kernel would have a fixed driver interface (each driver being in its own address space and all), and what would that get you? Buggy closed-source drivers. Exactly what Linus is against, and mostly for good reasons.
Kjella
To Intel engineers/who's interested (Score:3, Interesting)
The ring protection should have been inside the page table. A page descriptor has enough room for that. It should have been like this:
Page Descriptor with Ring Protection:
bit 0: page present/missing
bits 1-3: other
bits 4-5: current ring
bits 6-7: Read ring
bits 8-9: Write ring
bits 10-11: eXecute ring
bits 12-31: page frame
So in order for some piece of code in a certain page A to read from another page B, it should hold that A.current_ring <= B.read_ring (lower ring numbers being more privileged). The same goes for write and execute access.
This mechanism has many advantages over the current one:
a) ring protection is per page and not per segment. Segments become totally irrelevant for protection.
b) there is no need for ring gates. A process in ring 3 can execute code of ring 0 if the kernel page allows it. There is also no need for special instructions like 'syscall'.
c) higher-privileged code can have const data be read from lower-privileged code.
d) device drivers can easily be isolated from the kernel.
e) there is no need for a special 'virus' protection bit, since executable code can have its write ring set to 0 and its execute ring to 3 (which means an application cannot execute data).
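A minimal model of the proposed layout, in Python for illustration. The field names, packing, and helper functions are my own rendering of the scheme above, not any real MMU format:

```python
# Hypothetical 32-bit page descriptor with per-page ring protection,
# following the bit layout proposed above (ring 0 = most privileged).
PRESENT_BIT   = 0    # bit 0: page present/missing
CURRENT_SHIFT = 4    # bits 4-5: current ring of code on this page
READ_SHIFT    = 6    # bits 6-7: least-privileged ring allowed to read
WRITE_SHIFT   = 8    # bits 8-9: ... to write
EXEC_SHIFT    = 10   # bits 10-11: ... to execute
FRAME_SHIFT   = 12   # bits 12-31: page frame number

def make_descriptor(frame, current, read, write, execute, present=True):
    """Pack the fields into one descriptor word."""
    d = (1 << PRESENT_BIT) if present else 0
    d |= current << CURRENT_SHIFT
    d |= read << READ_SHIFT
    d |= write << WRITE_SHIFT
    d |= execute << EXEC_SHIFT
    d |= frame << FRAME_SHIFT
    return d

def _field(d, shift):
    return (d >> shift) & 0x3  # all ring fields are 2 bits wide

def can_read(code_page, data_page):
    # Code running at ring r may read a page whose read ring is >= r.
    return _field(code_page, CURRENT_SHIFT) <= _field(data_page, READ_SHIFT)

def can_execute(code_page, target_page):
    return _field(code_page, CURRENT_SHIFT) <= _field(target_page, EXEC_SHIFT)
```

With this model, point (e) falls out directly: a data page built with `execute=0` is simply not executable by ring-3 code, with no separate no-execute bit needed.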
timely (Score:3, Interesting)
Re:Let's see here (Score:3, Interesting)
After the port, would OS X be faster, more stable, more secure, or more portable?
I would bet it could be done. The bigger question is whether it should be done.
Re:"mafia tactics" (Score:2, Interesting)
BSA tactics: you pay them or we hurt you financially.
Yes, the BSA is enforcing legal licenses (albeit, IMHO, draconian, and legal only under our current business-subservient patent system), but otherwise it really is the same thing. The mafia provided a service; a group of thugs wouldn't drop by and ruin your business. The BSA prevents a group of lawyers from stopping by and ruining your business.
The cost of litigation, even when you are in the right, is a far more dangerous weapon in this business climate than a baseball bat. A lawyer can impoverish you for years; a baseball bat is likely just to involve some transient pain and medical expenses.
Re:Linux Driver Portability (Score:3, Interesting)