
Cross Platform Packaging: A Dream Or Something More? 87

stevenl writes "A new project on sourceforge has just been set up for a cross-platform packaging standard. Whilst there isn't much there at the moment, plans are to produce a standard that will allow people to use it even if they have no binary utilities or a compiler to compile one with, and it's expected to be platform independent whilst still being lightweight. What are people's opinions on the cross-platform aspect taking off, or will we see another situation like we have with DPKG - a great packaging system, but not widely used due to the inferior (but still good) RPM and proprietary things like InstallShield?" Frankly, apt-get does just about everything that I need - but I'm curious what people think about something like this actually working - is it a pipe dream? Or possible?
  • by Anonymous Coward
    http://www.nullsoft.com/free/nsis/

    Makefiles are handled for you if you have a half decent IDE. Windows has many good IDEs.
  • Microsoft has proved over the last few days why web apps will never work. If MS is unreachable to the general public, the general public will not be able to run their apps. This includes businesses which would have spent the past 4 days paying employees to do nothing. Local apps will always be present due to the possibility of things like this happening.
  • by Anonymous Coward
    (1) I know a lot more about linux than you. (2) I don't care what Linus likes. (3) Whoever moderated you to insightful is an utter git.
  • by Anonymous Coward
    That stevenl submitted the story and the only developer on the project is one Steven Lord?

    There's not even anything there to download yet! News for nerds, I guess..

  • Actually, it really isn't (or shouldn't be) that difficult to check for dependencies. What are the dependencies of a software program? 99% of the time, with what I use, it's libraries or other binaries. Now, for 4 years of my career as a software developer, I worked on a project where I created an installer for various pieces of internet-related software using InstallShield. I saw numerous versions of InstallShield come and go, but the one thing that remained constant was the need to check for libraries. You can check for libraries rather easily on a unix system. Take a peek at LD_LIBRARY_PATH (or any of its derivatives) and voila! You've got the search path for any library. Look at PATH, and you know where to look for any binaries on a system that matter. If they're not in the search path, then require the user to direct you to them.

    Now, I said that this shouldn't be too difficult to do. But that's not quite the case, now is it? Ok, so we've got these libraries. Cool. We know where the binaries are, right? Yup, but what version are they? Ok, there are some version numbers built into the names of the libraries (you may have to look at some symbolic links, but nothing too difficult), but what about programs? How do we have any idea what version they are?

    I'm sad to say that Microsoft solved this problem a long time ago by allowing the integration of version information into the string tables of its executables and libraries. This is the standard way of handling things. Now, sure, you could do the same thing in Unix, but nobody does. Or at least nobody with any influence. Now, sure, you can query a binary for its version information, but which flag is it again? --version, right? Or is it -version? Err.. -V, no, that's not right. -v. Crap, that's 'verbose'. Well, which is it?

    The sad truth is that this just isn't feasible right now. It's going to take a lot more than a project with good intentions to get people to start putting version information into their binaries, and even when they do, there's a whole lot of people out there that have old binaries and will see absolutely no need to purchase an update to their OS (yup, some people buy their Unix, like Irix, Solaris, etc.) just to get version information put into their binaries. It's going to take even more than a good idea to get companies like Sun and SGI to recompile all of their code and change all of their Makefiles just to take advantage of these new whiz-bang version features.

    Now, don't get me wrong. I think that the ability to maintain "packages" without the need of a database would be wonderful. I've dealt with my fair share of RPM headaches caused by taking the road less traveled and compiling things from scratch. But I think that before we develop a new packaging system, there are other more important problems that need to be addressed first. Attack the problem at the root; don't go for the branches or you'll never win any ground.
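
    As a rough sketch of the kind of database-free check being described (the library name, flag order and "fetch the version from output" trick here are only illustrative; nothing in the proposed standard specifies them), something like this walks LD_LIBRARY_PATH and PATH and then probes the usual version flags:

        import os
        import subprocess

        def find_library(name):
            # Walk LD_LIBRARY_PATH the way the installer described above would.
            for d in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
                if not d or not os.path.isdir(d):
                    continue
                for f in os.listdir(d):
                    if f.startswith(name):
                        return os.path.join(d, f)
            return None

        def find_binary(name):
            # Same idea for executables, using PATH.
            for d in os.environ.get("PATH", "").split(":"):
                if not d:
                    continue
                candidate = os.path.join(d, name)
                if os.access(candidate, os.X_OK):
                    return candidate
            return None

        def guess_version(binary):
            # There is no standard flag, which is exactly the complaint above.
            for flag in ("--version", "-version", "-V", "-v"):
                try:
                    out = subprocess.run([binary, flag], capture_output=True,
                                         text=True, timeout=5)
                except subprocess.TimeoutExpired:
                    continue
                except OSError:
                    return None
                text = (out.stdout + out.stderr).strip()
                if text:
                    return text.splitlines()[0]
            return None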

  • Use stow :)

    It's not perfect (the things you install don't auto-handle dependencies), but it gives you a nice package-style view.

    For example, to install IBM's JDK on my box, I just stuck it in /usr/local/stow and then typed stow IBMJDK1.3

    And stow then symlinks the files etc straight into /usr/local/ - bin/java etc - and as soon as you want to upgrade you can remove the symlinks and so on.

    It stops cruddification of /usr/local at the very least. It's still not perfect (for example, xmms plugins), but it's not horrific.

    The best bet would be something like this, but one that generates packages - if I could pack something into /usr/local/packages/IBMJava1.3 and then run something stow-like that would create a mini Debian package which I could then install, I would have even less trouble. And it would work well for all software that uses configure or anything similar.
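
    The symlink farming that stow does is simple enough to sketch (this is only an illustrative approximation, not stow's actual code - real stow is smarter about linking whole directories when it can):

        import os

        def stow(pkg_dir, target="/usr/local"):
            # Mirror the package tree (e.g. /usr/local/stow/IBMJDK1.3) under
            # target using symlinks.
            for root, dirs, files in os.walk(pkg_dir):
                rel = os.path.relpath(root, pkg_dir)
                dest_dir = target if rel == "." else os.path.join(target, rel)
                if not os.path.isdir(dest_dir):
                    os.makedirs(dest_dir)
                for name in files:
                    dest = os.path.join(dest_dir, name)
                    if not os.path.lexists(dest):
                        os.symlink(os.path.join(root, name), dest)

        def unstow(pkg_dir, target="/usr/local"):
            # Remove only the links that point back into this package, which is
            # what makes upgrades and removal painless.
            for root, dirs, files in os.walk(target):
                for name in files:
                    path = os.path.join(root, name)
                    if os.path.islink(path) and os.readlink(path).startswith(pkg_dir):
                        os.remove(path)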
  • So the Linux install would compile the application based on the kernel/etc.

    Or not. I don't necessarily install only source packages, even on BSD or Linux...

    ...and, from what I see on the Ethereal mailing lists, I'm not the only one; there are plenty of people who install binaries of Ethereal, for example.

  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Saturday January 27, 2001 @03:29PM (#476834)
    Wouldn't it be cool to include ALL OS's? Not just the *NIX's (getting one package manager to correctly handle both BSD and Linux is a complicated task as it is), but Mac, Windows, etc?

    I don't know whether you'd ever get all the *NIXes to adopt one package format (heck, not even all of the *NIXes that use the Linux kernel use the same package format, so far; is it even possible to generate an RPM that works, for a given instruction-set architecture, on all distributions that use RPM?).

    It's probably even more unlikely that you'd get Windows, or MacOS Classic, to adopt one (MacOS X might be considered "one of the *NIXes", although it may be different enough from other *NIX-flavored OSes that it'd be even less likely to adopt some standard package format).

    If they could get something that would reliably install stuff under Win2K (InstallShield really doesn't cover it),

    It might be possible to have tools such as Easy Software Product's Package Manager [easysw.com] (as mentioned in another posting; ESP are the folks who do CUPS) work with various non-*NIX packaging tools, as well as handling the various *NIX package formats it now handles (debs, RPMs, SVR4 packages, IRIX packages of some sort, HP-UX packages of some sort, source tarballs).

    Some tools for packaging on Windows include MindVision's Installer VISE [mindvision.com] (available for Windows and MacOS), for which "qualifying shareware and freeware developers" can get a free license [mindvision.com] (it's what GTK+ and GIMP for Windows [user.sgic.fi] use), and Nullsoft's "SuperPimp" Install System [nullsoft.com], which is also free. (I've not used either of them, so I can't say how good or bad they are.)

    and do compiling for makefiles (I don't even know if there is something to do makefiles in Windows anyway),

    Well, there's a tool called nmake, which comes as part of a package called "Visual C++" [microsoft.com] from some company up in Redmond, Washington that has done some software for Windows [microsoft.com]; its makefiles aren't exactly like those for the various *NIXes (but those aren't all the same, either - you have System V make, Sun's make which is a superset of SV make, GNU make, Berkeley make, etc.).

    It's not clear that it's a package manager's job to deal with the differences between the "make"s on various platforms.

  • Replying to myself, here is the URL:

    http://www.easysw.com/epm/index.html [easysw.com]

    Want to make $$$$ really quick? It's easy:
    1. Hold down the Shift key.

  • by VanL ( 7521 ) on Saturday January 27, 2001 @12:08PM (#476836)
    There are already a number of projects seeking to remedy this situation. The most advanced, IMO, is Easy Software Products' EPM package manager software. Already stable (up to version 3.2) and able to build .debs, .rpms, .tgz, swinstall/depot for HP-UX, pkg for Solaris, and inst/tardist for Irix. If we were all to switch to something like this, the binary format wouldn't be a big deal at all.

    Want to make $$$$ really quick? It's easy:
    1. Hold down the Shift key.

  • Indeed - right now the only way is a proprietary installer format (e.g. InstallShield's java-based installer, which installs its own JRE as necessary).

    EPM, posted earlier, looks pretty good for the Unix world, since it already supports quite a few platforms such as Debian, Red Hat, Solaris, HP-UX and Irix. This is unlikely to ever cover Windows - Microsoft has its own recommended installer format (Windows Installer, .msi) so most packages are migrating to that anyway.
  • Thanks for the pointer to stow (http://freshmeat.net/appindex/1999/09/23/938125446.html) - it looks very useful for software that comes in .tar.gz files or proprietary installers. I like the way it just adds the minimal set of features necessary, and encodes the installed files as symlinks for easy removal later.

  • In some ways, the problem can be reduced to hacking executable formats and making them coexist. (Of course, it helps if the executable format actually was designed to help do this.) For example, it is possible for a .EXE file on a PC to support DOS, Win16, and Win32 all at the same time.

    NeXTStep (now MacOS X) has "obese" binaries that can support lots of different architectures. However, I don't know if the packages that NeXTStep uses support the execution of non-trivial code at install time. Also, I don't know whether obese binaries were parseable both by Mach-O based implementations and by those where OpenStep was just a library (like the Win32 version).

    Probably the most portable "execution" formats that can be understood across platforms are HTML, Javascript, and Java. You could distribute an installer that was just a web page, used Javascript to sniff the platform, then executed the installer for the right platform. (This would be considered a privileged operation and the user might be asked to confirm it, and the applet or script snippet that was implementing this would have to have its code signed.) Check out InstallAnywhere [zerog.com], which uses some of these techniques (at least the last time I checked, anyway).

  • by Ex Machina ( 10710 ) <[moc.liamg] [ta] [smailliw.nahtanoj]> on Saturday January 27, 2001 @11:56AM (#476840) Homepage
    It's called the BSD ports system. It really shouldn't be that hard to get it to work on Linux.
  • Consider this:

    Client machines
    ---------------
    - AMD 1.2 GHz
    - super-fast graphics
    - 1GB RAM
    - ATM interface
    - 0 disk

    Client is basically a hot-rod x-terminal with a big fast pipe. No hard disk, only 1 very large ram disk. Files would be accessed through NFS. All programs would be run off of the application server(s), or the web.

    Programs could be cached on the ramdisk. The machine would never be powered off, so its cache would become rich with programs and data.

    If the OS eventually crashes, you'd restart and have a virgin machine.

    Server machine
    --------------
    Sky's the limit

    The high-speed networking of tomorrow will make a great many things possible.

    domc
  • by domc ( 11897 )
    Installshield sucks.

    domc
  • Also, think of the scalability of this.

    A small network could have one or two application servers.

    A network of a million users could have a separate server(s) for each app.

    domc
  • There is nothing that would prevent you from having a hard disk in the client. For many applications a local disk is not necessary.

    Diskless workstations would make the most sense on a large corporate network where there are many desktops to maintain. In such an organization there is little need for local storage.

    domc
  • I'm sure they will have a slim web version of Word, but that was not the point. The point was synchronized information, like a central online repository of documents. This allows you to do some work at the office, stop at a terminal and use the web version for a while, and finally finish up in Word at home. The point of .NET is not online apps, it's online information synchronized everywhere. I'm sure this is the future, but it's good even for linux. The API is network-based, not OS-based, and a linux app could conform to the calendar interface that every other windows/mac calendar program conforms to. This way, you log on to any computer and use any program, but it's still the same information. Information is the key, not which program a user uses to interface with the information. They might need to use the web calendar program for a while but get the real work done in the office, or perhaps on their mobile, whatever. I actually can't wait...
  • What are you downloading? There are no files on SourceForge. This is vapourware.
  • First, I think that the idea of a unified, cross-platform packaging system is a great idea that isn't ready for prime time yet. Personally I don't think the technology is quite there yet. And the market isn't conceptually ready for it, either.

    Second, these package wars are non-productive. Saying that RPM is inferior to DPKG does not help anyone. Especially when it's not true. Neither RPM nor DPKG is significantly superior to the other. When you break them down they are pretty much the same, just with some minor differences in implementation. They are both very good tools.

    ---

  • There's no chance of some upstart package format becoming a standard in Windows.

    If they could get something that would reliably install stuff under Win2K (InstallShield really doesn't cover it)

    Microsoft is now releasing most of their software (Office 2k, the new Visual Studio.NET) in .msi packages. I think the new version of Paint Shop Pro uses MSI too. It's a growing standard.

    [...] and do compiling for makefiles (I don't even know if there is something to do makefiles in Windows anyway), I'd definitely get this package manager.

    Go and grab one of the hundreds of projects that compile under both Win32 and UNIX. You'll notice that they use separate makefiles for Win32 - autoconf is not required (assuming you were talking about autoconf in "compiling for makefiles").
  • To build a packaging system on top of this, like you said, would be trivial. Here, for example, is a simple way to do it.

    In each app directory, a file could be placed by the maintainer (let's call it appinfo). This could be a simple perl hash or an xml document describing the package and its dependencies, as well as the package's home page. Here is a simplified example. If you download and install the source yourself, then encap can still make the links for you, and the packager will know about it simply by reading the /opt directory. It would be simple to build your own appinfo file too, in case other packages depend on it.

    name = myapp

    version = 1.001

    url = http://myapp.com/installer/

    depends = { name = somelib, version = 2.001, url = http://someurl }
    depends = { name = anotherlib, version = 3.02, url = http://anotherurl }

    Like I said the transport mechanism could be httpsync [mibsoftware.com]

    The encap (or like) program could simply invoke httpsync and download the package, check for dependencies and keep calling httpsync to install the dependencies if they don't exist.

    The beauty of it all is that the work is already done. Httpsync, as well as CVSup, rsync, etc., already exist; all you need is a relatively simple perl script and you are done.

    The tricky part is to get maintainers who are willing to compile the app and serve it using whatever.
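
    As a sketch only - the appinfo syntax above isn't fixed, and the fetch command here just stands in for httpsync/cvsup/whatever - the resolver could be as little as:

        import os
        import re
        import subprocess

        OPT = "/opt"

        def read_appinfo(app_dir):
            # Parse the flat "key = value" lines plus any depends = {...} entries.
            info = {"depends": []}
            with open(os.path.join(app_dir, "appinfo")) as f:
                for line in f:
                    line = line.strip()
                    if line.startswith("depends"):
                        body = line.split("=", 1)[1].strip(" {}")
                        pairs = [re.split(r"\s*=\s*", kv.strip(), maxsplit=1)
                                 for kv in body.split(",")]
                        info["depends"].append(dict(pairs))
                    elif "=" in line:
                        key, val = line.split("=", 1)
                        info[key.strip()] = val.strip()
            return info

        def installed(name, version):
            # No database: an install is just a directory named name.version under /opt.
            return os.path.isdir(os.path.join(OPT, name + "." + version))

        def resolve(app_dir):
            # Fetch missing dependencies depth-first; installing the app itself
            # is then just unpacking and making the links.
            for dep in read_appinfo(app_dir)["depends"]:
                dep_dir = os.path.join(OPT, dep["name"] + "." + dep["version"])
                if not installed(dep["name"], dep["version"]):
                    subprocess.call(["fetch", dep["url"], dep_dir])
                if os.path.exists(os.path.join(dep_dir, "appinfo")):
                    resolve(dep_dir)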
  • This is very easy to do. Here is a way I have thought of; I'm sure other people can come up with something even better.

    1) Every application must reside in its own directory in /opt or /usr/local.
    2) The directory must be named nameofapp.ver
    3) Inside the app directory there may exist ./man ./etc ./var ./lib
    4) ./var is actually a symlink to /var/nameofapp.ver
    5) All the files are then symlinked to /usr/bin, /man, /etc or wherever (there are already a couple of systems that do this).

    The beauty of this system is that it's very easy to implement because it's just a matter of specifying a --prefix.
    If you need to roll back to a previous version you simply relink the old directory.
    You don't need a database; just doing an ls on the /opt directory tells you every package installed.
    You don't need to compile; in fact you can use httpsync or cvsup to fetch the files directly from a URL.

    Easy, simple, human readable, human fixable - what else can you ask for?
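
    To make the rollback and "ls is the database" points concrete, here is a sketch under the naming convention above (it uses a single per-app link in an invented /usr/local/apps directory instead of the per-file links, just to keep it short):

        import os

        OPT = "/opt"

        def installed_packages():
            # The package "database" is just the directory listing: nameofapp.ver
            pkgs = {}
            for entry in sorted(os.listdir(OPT)):
                name, _, ver = entry.partition(".")
                pkgs.setdefault(name, []).append(ver)
            return pkgs

        def activate(name, ver, link_dir="/usr/local/apps"):
            # Rolling back is just re-pointing the link at the old directory.
            if not os.path.isdir(link_dir):
                os.makedirs(link_dir)
            link = os.path.join(link_dir, name)
            if os.path.lexists(link):
                os.remove(link)
            os.symlink(os.path.join(OPT, name + "." + ver), link)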
  • What I would like to see is a format that is based on moving source around and not the binary images. Kind of like tarballs with ./configure scripts, but the way most configure scripts work, you unpack everything and then, after it churns a long time, you find out you need xyz, which you don't have. I would like to see a standard Makefile target (or some other system) that checks for requirements, so 'make requirements' will go out and tell you real quick that you have to go pick up a bunch of new stuff.

    If I remember correctly, the AT&T 3B2 used to do this back in the System V R2 days, but its packages were based on cpio.
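
    A hypothetical 'make requirements' target could just run a small check script before configure; the requirement list and names in this sketch are invented purely for illustration:

        import os
        import sys

        # Purely illustrative; a real package would list its own headers and tools.
        REQUIRED_HEADERS = ["/usr/include/zlib.h"]
        REQUIRED_TOOLS = ["gcc", "make"]

        def on_path(tool):
            # A requirement is satisfied if an executable with that name is on PATH.
            return any(os.access(os.path.join(d, tool), os.X_OK)
                       for d in os.environ.get("PATH", "").split(":") if d)

        missing = [h for h in REQUIRED_HEADERS if not os.path.exists(h)]
        missing += [t for t in REQUIRED_TOOLS if not on_path(t)]

        if missing:
            print("pick these up first: " + ", ".join(missing))
            sys.exit(1)
        print("all requirements present")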
  • This sounds a lot like like the Architecture Neutral Distribution Format that failed in the early 90's. Does anyone remember that ? Of course that was during the height of the so called "Unix wars" so I guess it was doomed back then.
  • I just imagine that the actual code will be first d/loaded, and then executed locally

    Not for me. No thank you. As cheap as hard drives are getting, you want me to use bandwidth rather than have a copy of vi on my system?
  • But why replace my hard disk with your hard disk...the security implications are staggering.
  • Oh, I get it now! And since I work at home with just two machines, then they'd need a server. As long as it's personal, and intranet, and as fast as going to hard disk is now, I'd still say "only when the servers are cheaper than buying a hard disk".
  • Your argument approaches irrelevancy if and when networking becomes "pervasive". Does Picard "run a program" on the computer? No, he just tells it what to do, and magically it works. At some point in the future, it is not only not improbable, but probably definite, that we will have pervasive, seamless computing all around us. But by that time the very concept of installing a "software package" might become obsolete also ;)
  • DRI is nothing like KGI! Read some of the specs before you complain about it. KGI was essentially an fbdev system (ie. no hardware acceleration, or at least not 3d), while DRI is essentially *only* 3d acceleration for X. Not exactly the same.

    As for release dates, you should have come to expect open-source-style lack-of-release-dates by now. What he "announced" was a *projection*, not a promise. That's why it's different from when Microsoft misses a release date.

    I don't agree with everything Linus does either, but if you're going to complain, say something valid.
    --------
    Genius dies of the same blow that destroys liberty.
  • Actually, we've been looking into adding support for Windows-native distributions in EPM. The current offerings (InstallShield, etc.) are not suitable for large projects and can't be automated like they can under UNIX...
  • Not gunzip, but regular old unzip can. The ZIP format allows an ignored preamble, which is where a self-extracting file puts the extraction code.

    The problem is that oftentimes these files are not ZIP files, or, worse, are a proprietary installer program. Then you can't really access it under Linux at all.

    Since the most recent versions of Windows have built-in support for ZIP files, it shouldn't be necessary any longer to distribute self-extracting files.
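
    For what it's worth, an unpacker only has to locate the ZIP central directory at the end of the file to cope with the ignored preamble; Python's standard zipfile module does exactly that, so a short sketch like this handles both plain .zip files and ZIP-based self-extracting .exe stubs:

        import sys
        import zipfile

        path = sys.argv[1]
        if zipfile.is_zipfile(path):
            # Works even with extraction code prepended, since the central
            # directory is found from the end of the file.
            with zipfile.ZipFile(path) as zf:
                zf.extractall("extracted")
                print("\n".join(zf.namelist()))
        else:
            # Not a ZIP underneath - likely a proprietary installer format.
            print(path + ": not a ZIP archive")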
  • If you want to uncompress an ARJ, then you should use unarj [freshmeat.net], but I don't know if it can handle SFXs.
  • It is an irony of the computing industry that, when someone sees multiple factions in software for providing a function, people decide to solve the problem by introducing another faction.

    Consider the tar and cpio factions, and "pax", the faction to end all (archiver) factions.
  • It's official. This is the first submission to use the word "whilst" twice.
  • I'm a build/release engineer. Besides making sure the builds go smoothly (not an easy task), I am also responsible for writing installation programs. My company releases its software across multiple platforms: Unix, Linux, SunOS, WinNT/2k, etc. It would be wonderful if I could write one installation script that handles everything, but I know it simply can't be done. There would be so many OS conditionals in the code that it would be cumbersome to maintain. Besides, when programmers in this crowd talk about cross-platform, they mean it works on RedHat, SuSE, FreeBSD, and possibly SunOS - never even considering the Windoze world. Like it or not, Windoze is here to stay, and anybody trying to sell software is foolish to ignore it.

    I'll check out the project, perhaps I'll offer assistance, but I won't bet my career on it.
  • Because a zip file won't configure the program. Installing an application is more than just dropping files onto a computer.
  • I don't know if any of you all have used this, but it really works quite nicely. Having a background in windows (yeah, I know...) I've always noticed that installing stuff is a much bigger hassle on a *nix box. I happened to come across InstallShield Java Edition when I was installing JPython on my Solaris box; I was quite impressed to have an intuitive and easy-to-use GUI-based interface that worked identically in Windows as it did in Linux or Solaris or anything else with a JVM. There is/was a free evaluation available at http://www.installshield.com/java/ . It's certainly worth a peek.
  • You are right that losing the network is fatal in this scenario. However, everything is becoming networked very quickly, and I, for example, am unable to do most of my work (including word processing) without network connections. What's the use of having a working word processor if all your resources are unreachable? A few years ago I gathered my sources from books, newspapers and so on; today I won't bother. Information on the net is easier to obtain, it's up-to-date (when used correctly), etc. I mean you can edit your current work and so on, but real, effective work requires the network. And the number of people whose work depends on the network is rising rapidly. Sun's slogan "The network is the computer" is closer today than ever before. Tomorrow it's even closer.
  • I don't know why I feel like saying this, it's just inciting a troll mod but:

    need I point out:
    stevenl writes "A new project...

    The post was written by the person of the name signified in bold.
  • How about this... In order to preserve the source of a closed source system:

    Package two files as one. One file is the encoded source code (for closed source) or just a tgz with all the source (for open source); we'll call this Part A. Then add Part B, an operating system emulator that is made to do only a single thing: run a fake OS, very small, and execute a compiler that is built into the emulator to compile the source and then spit out the final binary onto the system...

    For instance. Very simple hypothetical example.

    I write a program called "DecBin" that converts decimal numbers to binary. I take the source files and put them in a .tgz file. Then I choose my compiler. There is a program that takes the chosen compiler (on the source system's disk) and puts it into the emulator... then it packs the emulator/compiler with the tgz of the source into a single binary file for a specified OS.

    Now whenever somebody wants to install my program, they execute the binary. The binary unpacks the tgz and then runs its emulator/compiler on the source code and then spits out the binary as a.out or something.

    Just an idea...
  • Wouldn't that make the file sizes bigger if you were to include all OSes?


  • How does this relate to Open Packages [openpackages.org]?

    I know that Open Packages is a Unified BSD Package Collection, but how do these compare?
  • by TimR ( 88739 ) on Saturday January 27, 2001 @12:22PM (#476871)
    This can be quite a useful tool but I fear that it will have to be somewhat limited in its feature set. Such a platform independent tool will need to cater to the lowest common denominator in much the same way that Java must. For example, how do you provide for Windows Registry entries in *nix?

    Also, the executables that would get distributed will likely be tailored to their host platform. Many programs will utilize the differences between these platforms. Again, I bring up the example of the Windows Registry. Many applications in Windows depend upon the installer setting up Registry keys that are accessed by the executable. How do you rectify this in *nix? Or the MacOS? Or with the ever growing number of embedded apps?

    I'm afraid that program placement, management, configuration, etc. hits so close to the core of what makes a platform that this project will be difficult to complete (and be useful).

    Here's a metric we can use to see if it ever succeeds: Will developers throw away InstallShield and rpm to use this?

  • > I don't even know if there is something to do makefiles in Windows anyway

    Any operating system can have a utility for compiling projects with makefiles. A makefile is, after all, just a set of instructions to be passed along to the compiler about how to compile (what options to use, etc.) the project. Any decent C/C++ compiler will have a make utility (for Windows/DOS, DJGPP [delorie.com] comes to mind).
  • So, according to you, we'll have moved from a predominantly server/dumb-terminal computing situation to a predominantly desktop computing situation to a predominantly server/dumb-terminal computing situation, but with multiple operating systems (thus requiring non-processor-native, i.e., slow, code)?
  • That has (in a sense) already been done. Behold, the RPM Browser for Windows [zdnet.com].
  • I don't see any mention of whether this is UNIX-specific or all-inclusive. Is a packaging system that uses the same specifications for UNIX/DOS (or Windows, if you prefer)/MacOS/etc. feasible? Certainly it's possible (there's gcc for DOS, for example) to have UNIX tools compiled for filesystems organized in different ways, but this is a packaging specification.

    Also, how would it handle dependencies? A widely available version of X11 for BeOS, for example, puts files in different places than X11 for Linux does (AFAIK).

    This doesn't sound feasible for anything other than a strictly UNIX platform, and then what do we have? Yet Another Packaging System.
  • Did anybody try it? I'm downloading as I type, but I find it strange that for a packager that will work with 4 different package managers they post a source file. Not that it's too hard to type make, make install, but I really would've appreciated a deb or rpm as a first demo of concept. Just wondering.
  • by Dominic_Mazzoni ( 125164 ) on Saturday January 27, 2001 @04:27PM (#476877) Homepage
    Styrofoam Peanuts are the ultimate packaging material. They can be used to package software for Linux, Windows, MacOS, *BSD, BeOS, and more.
  • Shouldn't worry about it, someone else moderated me down as flamebait.


    --

  • Can gunzip or something open a self extracting zip file?
  • ...is a promising one-click, web-based, multi-platform installation solution that has just recently been released. Of course it installs Java applications, but Java's multi-platform already so it's perfect. It still has its limitations compared to native code, but that gap is narrowing with every release. (I am not affiliated with Sun but I am a Java coder :)
  • That seems like a pretty stupid idea. How the hell is Linux supposed to get the drivers if they have been placed in an EXE file? That is just stupid, and wasting Windows users' bandwidth.
  • What's the difference with OpenPackages.org [openpackages.org]?

    --
  • Finally a BSD/ports like distribution for Linux.
  • does apt-get work on Windows? On the Mac? The requirement here is cross-platform; as far as I know apt-get doesn't even work across Linux distros...

    Anyway, if you can't assume access to a compiler, the only thing you could do would be to ship binaries for every platform. And as you can probably tell, that would suck. I would say just use java and .jar files, but it seems like you want to be using C++. Another option would be to have the installer go grab the files off a server if it doesn't have the ones it needs for the current platform.

    Of course, that would require that you have binaries for that platform, but it's better than nothing.

    Amber Yuan 2k A.D
  • This is pretty much what we're doing with The ROX Project [sourceforge.net].

    You can do away with much of the symlinking by updating the tools.

    Eg, have a help command which simply looks inside the 'Help' subdirectory of the application (in fact, the filer just opens that directory).

    Likewise, you can make your shell 'run' a directory by running the file 'AppRun' inside.

    You can even include the source code and make AppRun a shell script that compiles a binary for your platform automatically if it doesn't yet exist.

    Also, this means that the source, help and binaries never get out of sync!
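
    An AppRun along those lines might look something like the sketch below (illustrative only - ROX AppRun files are typically shell scripts, and every path and name here is invented):

        #!/usr/bin/env python
        import os
        import platform
        import subprocess
        import sys

        app_dir = os.path.dirname(os.path.abspath(__file__))
        binary = os.path.join(app_dir, "bin", platform.machine(), "myapp")

        # Build from the bundled source the first time this platform runs the app,
        # so source, help and binary always travel together.
        if not os.path.exists(binary):
            bin_dir = os.path.dirname(binary)
            if not os.path.isdir(bin_dir):
                os.makedirs(bin_dir)
            subprocess.check_call(["make", "-C", os.path.join(app_dir, "src"),
                                   "BINDIR=" + bin_dir])

        # Hand over to the real program with whatever arguments we were given.
        os.execv(binary, [binary] + sys.argv[1:])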

    --

  • It's a great idea, but how do you get people to adopt it? It should be designed to handle RPMs, and distributed as a replacement for the rpm utility. Better yet, rpm could just be extended to do this sort of thing.

    While it might be true that RPM is used a lot more than DEBs, that doesn't mean Debian users are just going to abandon their packaging system. In fact, Debian users are probably more religious about it. And they're probably right too; DEBs work far better and the infrastructure is already in place.

    So, while it might sound like a great idea to extend RPM to do DEB-like stuff, it is too late (assuming that it is technically possible).

  • NuSphere MySQL provides a portable multi-platform (read Linux, UNIX, and Windows) installation via open source tools. The key is providing a mini web server to use to bootstrap. This allows the installation to use the browser on the platform for the UI and Perl scripts that are invoked by the mini web server. Since the scripts can be platform sensitive we are able to have a single install that works on all platforms (and even autostarts). Download and give it a try on Linux and Windows if you are skeptical.

    The biggest issue with RPM we find is that everyone who builds them seems to think that every program should be placed in some set of hardwired directories. To provide flexibility for the user to choose their own installation root (or perhaps even install the software multiple times on the same machine for different purposes), we found RPMs sorely lacking.

    Installation can be done by command line, remotely, and by someone without root privs in a directory where the user has write access, and it provides apache, perl, php and mysql functionality for that user.

    To gain widespread IT acceptance, open source products are going to need to get much more sophisticated about how and where they are installed.

    Britt...

  • by duffbeer703 ( 177751 ) on Saturday January 27, 2001 @12:54PM (#476888)
    What we need is a packaging system that can correctly detect whether or not dependent packages are installed without having to have a database. The package manager needs to be aware of the differences between platforms.

    The problem is, inevitably, the database will get out of sync the moment you have to compile something from source because no .deb or .rpm file is available right then, or because you have a local patch to fix a bug you need which isn't important enough to enough other people for the author(s) to fix right now (or maybe is too complicated for them to figure out how to roll back in without breaking things for other people that you don't happen to need to worry about). Even buying software that uses InstallShield or some other installer will mess up everything.

    Once the database is out of sync, then new problems come up, and those are easily fixed by forcing an install or installing from source, and then it just gets worse.

    Without a database, it would mean the installer would have to have a way to detect whether the dependent thing is installed or not, and in the correct version. I won't say that would be easy, but it is what would be needed. Until then, based on my past experiences with Redhat's RPM, I won't at all be interested in a fancy packaging system.

  • Wouldn't it be cool to include ALL OS's? Not just the *NIX's (getting one package manager to correctly handle both BSD and Linux is a complicated task as it is), but Mac, Windows, etc?

    If they could get something that would reliably install stuff under Win2K (InstallShield really doesn't cover it), and do compiling for makefiles (I don't even know if there is something to do makefiles in Windows anyway), I'd definitely get this package manager.

  • I like the idea. When I wanted to package OpenSSH at work, it's one thing to build it on several platforms... But it is quite another to package it.

    I like the idea of not having to know how to make an HP swinstall package, a Sun pkgadd package, and others. Right now, I need to shoulder-tap someone to have a package made for a packaging system I am unfamiliar with.

    As for Windows flavors... I find myself wondering why even port anything to Windows... let alone package it. But of course, that is entirely biased. :)
  • Ever tried to download a driver set for a piece of hardware, only to find that the package is many megabytes large, and contains drivers for every OS known to man? Well, I'm sick of it too.

    It's not about packaging for as many OS's as possible in one package, but to create many packages for many OS's/distributions from a single cvs repository with a single tool.
  • Portage is already being used in some distros. Check Out Gentoo Linux! [gentoo.org]
  • ... that doesn't punt on managing multiple versions of the same "thing" and multiple parallel instances of the same thing running at the same time, with all the runtime configuration problems that go with it.

    For example: it is impossible to create a package of, say, a sophisticated perl module without having to effectively include a whole perl distribution with it. Why? Because it is impossible to have two different versions of the same module available at the same time, at least not without serious trickery.

    These are the really hard problems, and as long as they are being ignored, this new packager is just so much wasted effort.

  • by transami ( 202700 ) on Saturday January 27, 2001 @03:49PM (#476894) Homepage
    The problem with install incompatibilities is really only one of our own making. The problem primarily lies with differing directory structures, system batch files (almost solely involving the init sequence), and available libraries and their version conflicts. All of these can be easily overcome.

    The last is easily fixed by allowing all versions of libraries to be installed such that they have different names and/or reside in different locations. In this way applications can be very specific about the versions of libraries they use, and programs won't break because a new version of a library has been installed. In the Windows world this is known as DLL Hell. This is not such a great problem with Linux.

    Next, the batch files, primarily the ones involving the init sequence, are really a minor issue, as there is a good bit of standardization here - but not enough, and a solid, fixed standard needs to be put forward and followed. I don't see how else to get around this other than participating in a standard.

    Finally, the various directory structures cause problems for applications because they can't find files, not knowing where they are located. Imagine for a moment that there wasn't a directory structure at all and all files resided in a single directory. In such a case each and every file must have a unique name. Well, this is actually how it is now, if you just think of a path as merely a file's name. And so the problem lies in the fact that the names don't stay the same from system to system. This can be fixed either by everyone following a strict FHS (filesystem hierarchy standard) or by having a single repository directory where a uniquely named link is placed pointing to the needed files of a package. Think about it!
  • How do you resolve the problem where a binary looks in a specific place for its own data files? Or, let's say some game is compiled to save its high scores to /var/games/name-of-game/scores. There is no way to change this after compilation. And if every program can be easily told to look elsewhere for files, it seems that one would have a security problem.
    --
  • Actually, this specific one (68MU111B.EXE) is a self-extracting ARJ file. I'm sort of familiar with ARJ (hell, that's what I used to get DOOM2: 5 spanned arj files on five floppies), however, the ARJ program itself was DOS. I don't know if you'd have any luck with this specific one, and for this and other reasons, may your respective deity help you if you own a VIA chipset with integrated sound.
  • by AFCArchvile ( 221494 ) on Saturday January 27, 2001 @01:14PM (#476897)
    Ever tried to download a driver set for a piece of hardware, only to find that the package is many megabytes large, and contains drivers for every OS known to man? Well, I'm sick of it too. Just recently, I had to download a sound driver for the VIA Southbridge, and the driver package (68MU111B.EXE) was 8.91MB (yes, almost 9MB, that's no lie). That self-extracting package contained drivers for Win9x, Win98 Second Edition, Windows ME, Windows NT 3.51, Windows NT 4.0, Windows 2000, OS2, DOS, and Linux (Caldera 2.2 and RedHat 6.x, specifically). If those drivers had been packaged separately, the drivers themselves would be 240 KB, 84 KB, 79.2 KB, 164 KB, 6.32 MB, 79.2 KB, 240 KB, 415 KB, and 139 KB; respectively. The largest sized driver set in that package is the NT4 driver set, weighing in at over 6 megabytes (most of that size being the separate setup program). If VIA had split up the drivers by OS and used a less user-friendly strategy (ditching the setup program in favor of .INF files), then I would only have to download 79.2 KB.

    This is one of the reasons why I hate VIA: because they do everything so bass-ackwards.

  • While at first it seems to be a wonderful idea, it is in fact not. Keeping something like tar or bz2, zip, etc. standard is important, but not a packaging technique or manager. Different operating systems are too different, and it would never be accepted on non-open platforms. If a standard packaging system is to be done, do it for Linux and Unix-like operating systems only. Then if non-open platforms like it, they can adopt it on their own. I personally think the best thing that could happen would be for Debian, Red Hat, the Linux Standard Base, and others to get together, take features and components from rpm and deb, and create something new that everyone can agree on. From a developer and corporate standpoint, making packages for rpm, deb, and tar is too much of a hassle and not practical. Everyone needs to put aside their differences in opinion and come to a compromise that can benefit everyone. There ought to be a standard packaging system for binary data, and stick to tar for source. If average computer users are ever to migrate to linux, this has to be done. Enough with all the ghettoness and disorder.
  • Of course, if the code was being d/l'd from the web every time you ran it, you wouldn't need a packaging program. It would allocate ram/disk space and then, when you close it, it would delete it. Although, some terminals would have the option of caching data... Hmmm, then every program could be put into a separate filesystem, and all you would need would be a simple decompression program. Since every program would be in a separate filesystem, there would be no program conflicts because, as far as each program is concerned, it's by itself in its own little filespace.

    Just a thought...

    -Bucky
    The few, the proud, the conservative.
  • by bucky0 ( 229117 ) on Saturday January 27, 2001 @12:21PM (#476900)
    I disagree. Although some programs can be run over the Web, why would you want to? If I had the option to run a word processor program, or a word processor applet, I'd choose the program any day. Why? Because of the speed/security aspects. I really don't like the idea of running executable code from the web every time I want to do something; can you imagine the fun that hackers would have with that?

    There are more programs that couldn't/shouldn't be run from the web than ones that should. Can you imagine trying to run Quake over the web? What about other processor-intensive applications (seti@home, apache, etc.)? I don't see a day when people will give up performance/security just so that there can be a unified OS... I just can't see it. /rant
    -Bucky
    The few, the proud, the conservative.
  • The running of programs has nothing to do with the fact that different versions of linux are forks; they all use the *same* libraries. The problem is linux libraries are being developed a lot faster than BSD's.

    The BSD people stick with the "tried and true" libraries that work, and work they do, but they don't get all the new features that revamping those libraries can bring. Maybe you don't want those features; good, stick with BSD.

    For those of us that like new features, we will stick with linux and its ever expanding set of features. The programs that tend to "not work across distributions" are those compiled with the higher-level API stuff anyway. Core APIs all stay the same and are very compatible across distributions.

    Maybe we should call distributions what they truly are, OS's with different versions of the exact same libraries, except that would be a crappy name, so how about we call them distributions . . .
  • I envisage more of a combination of the two. 'Dumb terminals' of the future will probably be a hell of a lot more powerful than the top-flight PCs of today. I just imagine that the actual code will be first d/loaded, and then executed locally, so that as far as the user is concerned the program is running very quickly, and on the web. Sort of like a super-charged Java applet on speed. I think that cross platform packaging systems would be very useful indeed in such an environment, and may form the backbone of it.

    KTB:Lover, Poet, Artiste, Aesthete, Programmer.

  • by Kiss the Blade ( 238661 ) on Saturday January 27, 2001 @12:02PM (#476903) Journal
    We all know that the future does not lie with any particular OS. We have been repeatedly told this for some time now. Other platforms have been doing their best to grab this, as we can see with Solaris (Java) and Windows (.NET). In the future, which OS you run will not be important. When I use my computer, I already spend the majority of my time on the web. In 10 or 15 years, things will be fast enough for entire applications to be run on the web, and my local terminal will just be used to store data and connect to the web.

    The thing that people forget is that this is a good thing. What Linux needs is to develop a cross platform packaging system such as this so that the web can utilise it, and so that the Linux system is at the centre just when these new developments are taking off. The future is OS independent. If Linux is to survive in such a world, it needs to be independent too.

    KTB:Lover, Poet, Artiste, Aesthete, Programmer.

  • What would be the use of a cross platform installer if you aren't ever going to be installing anything?
  • by perdida ( 251676 )

    You know, if you are going to imposter-ize me, you should wear ankle guards, because when I find you I am going to unleash an army of rabid she-gnomes on you.

    -perdida
  • It's a great idea, but how do you get people to adopt it? It should be designed to handle RPMs, and distributed as a replacement for the rpm utility. Better yet, rpm could just be extended to do this sort of thing.
  • This is a major software engineering challenge, let alone the usual hurdles with trying to agree on standards in this business, but nonetheless very intriguing to me and probably to lots of my other nerdy counterparts.

  • Invent a cross platform packaging system and a cross platform apt-get type system. But you know, every distribution seems to use different things for generating GUIs, so doing this would be really complicated.
  • That and the shared library things, documentation, and po files. Packaging puts everything in the right place for you, that's all.
  • Since all the registry calls are done via the APIs, you would need to create a set of APIs for Linux to handle this. Since *nix folks seem to despise the very idea of the registry (binary format, unified, obscure), another possible way is to do something like this: create a directory which will be used as a storage room for text files. The registry entries will be kept in a .reg format (to keep it compatible), with each application having its own unique file. The main problem is speed: registry calls are extremely fast because of the nature of the registry, and you'll need to parse the text file on every call, which will be slow. Someone else would have to calculate how much slower it will be. Another problem is that you'll need to copy a lot of registry entries from Windows, and make them available for programs. I think that the best way to implement it would be to create one default file (just export a freshly installed registry to a .reg file), and when a program changes a key, the change will go to the program's own unique file; when it calls it the next time, it will get its own key, not the one in the default registry. But this has performance issues all over it.
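
    A sketch of that per-application text-file approach, with an invented API shape (the real Win32 registry functions look nothing like this, and the storage location is made up):

        import os

        REG_DIR = "/etc/registry"   # invented location for the per-app .reg-style files

        def _entries(path):
            # Naive line-by-line parse of a "key"="value" text file.
            entries = {}
            if os.path.exists(path):
                with open(path) as f:
                    for line in f:
                        if "=" in line:
                            k, v = line.split("=", 1)
                            entries[k.strip().strip('"')] = v.strip().strip('"')
            return entries

        def reg_set(app, key, value):
            # Each application writes only to its own text file.
            path = os.path.join(REG_DIR, app + ".reg")
            entries = _entries(path)
            entries[key] = value
            with open(path, "w") as f:
                for k in sorted(entries):
                    f.write('"%s"="%s"\n' % (k, entries[k]))

        def reg_get(app, key, default=None):
            # Look in the app's own file first, then fall back to the default export.
            for name in (app + ".reg", "default.reg"):
                entries = _entries(os.path.join(REG_DIR, name))
                if key in entries:
                    return entries[key]
            return default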
  • Is it even clear that a package manager should be dealing with compilation? Remember, most (95%?) Win32 users don't have a compiler. Also, I've never seen an installer on Win32 run the compiler for me. A Linux environment is quite different from a Win32 one. A smart cross-platform installer would know when to install an application and when to compile it, taking into consideration that platform's users. So the Linux install would compile the application based on the kernel/etc., and the Win32 install would install itself without compilation, and come with the necessary redistributables.

    Greg
  • Can I modify what I said about smart installers then? And change it to compile when the user wants to compile the application, and install it when the user doesn't want to compile or can't compile it.

    Also, I agree with you about Linux. I would actually prefer dl'ing precompiled applications. But if the source is all that is included it should still be easy for me to install.

    Greg
  • This is basically what I have used on several large unix-based networks.

    I have used an NFS-capable variation of /s from the UW Madison CS department. [wisc.edu]

    At one of my sites I set up a set of tools, not unlike stow [gnu.org] and graft [gormand.com.au], that would build sets of software for anyone to use. The set of tools would automatically reconfigure a user's environment like encap (can't remember where that is from). It would however do it in the filesystem, so that you could appropriately control the revisions or toolset that a script was coded to use. A.K.A. #!/home/gulfie/u/project_uts/bin/perl -w

    It is not a packaging system as such, it is more of a software installation system, but a packaging system on top of this would be almost trivial... I like trivial; it is more likely to be gotten correct.
