Ubuntu Linux

Shuttleworth Says Snappy Won't Replace .deb Linux Package Files In Ubuntu 15.10

darthcamaro writes: Mark Shuttleworth, BDFL of Ubuntu, is clearing the air about how Ubuntu will make use of .deb packages even in an era where it is moving to its own Snappy ('snaps') format of rapid updates. Fundamentally it's a chicken-and-egg issue. From the ServerWatch article: "'We build Snappy out of the built deb, so we can't build Snappy unless we first build the deb,' Shuttleworth said. Going forward, Shuttleworth said that Ubuntu users will still get access to an archive of .deb packages. That said, for users of a Snappy Ubuntu-based system, the apt-get command no longer applies. However, Shuttleworth explained that on a Snappy-based system there will be a container that contains all the deb packages. 'The nice thing about Snappy is that it's completely worry-free updates,' Shuttleworth said."
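Roughly, the two update flows compare like this; the apt-get commands are the standard Debian/Ubuntu ones, while the snappy commands follow the Ubuntu Core documentation of the time and should be read as illustrative rather than definitive.

    # Classic deb-based Ubuntu: packages are fetched and upgraded in place.
    sudo apt-get update
    sudo apt-get upgrade

    # Snappy-based Ubuntu Core (command names assumed from the era's snappy CLI):
    # updates arrive as whole, transactional images that can be rolled back.
    sudo snappy update
    sudo snappy rollback ubuntu-core    # revert if the new image misbehaves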

Comments Filter:
  • why bother? (Score:5, Funny)

    by Anonymous Coward on Sunday September 06, 2015 @07:52PM (#50469099)

    The functionality will be built into the next version of systemd.

    • Re:why bother? (Score:5, Informative)

      by Anonymous Coward on Sunday September 06, 2015 @08:41PM (#50469309)

      Why is this modded troll?
      Lennart, the great mastermind, has announced it on his blog: http://0pointer.net/blog/revis... [0pointer.net]

      • sooner or later.

        Now why would someone let guys who want to do that make their bootup system? It will have its own kernel soon enough too, and it's going to be forking time again for all the distros.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Wow. Now I really am starting to get why people hate this guy.

        • Re:why bother? (Score:4, Interesting)

          by caseih ( 160668 ) on Monday September 07, 2015 @02:08AM (#50470211)

          Maybe you should actually read what he wrote before jumping on the hate bandwagon. He's absolutely right that for many years and applications traditional package systems fall down. That's not to say they aren't important. They are and will continue to be. But they have their limitations when it comes to fast moving software like libre office on a nice stable slow moving distro like the lts releases of Linux distros.

          As a matter of fact, Docker is really one attempt to solve this problem. CoreOS is based on this idea. Chrome OS also eschews packages entirely. Now Snappy.

          And as experimental distros like Snappy try things, new utilities will have to be created to manage the images. This is what Poettering is talking about. In the meantime you're free to not use any of this. It's just a bunch of ideas, many of which happen to be really good and natural extensions of the traditional package model. It's exciting stuff.
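          To make the container idea concrete, here is a minimal sketch of running a newer application userland on a stable host with Docker; the image name is made up, so substitute whatever image actually ships the application, and X11 socket sharing is assumed to be acceptable for your setup.

              # Run a newer office suite from a container on a stable LTS host.
              # "example/libreoffice:fresh" is a hypothetical image name.
              docker run --rm -it \
                  -e DISPLAY=$DISPLAY \
                  -v /tmp/.X11-unix:/tmp/.X11-unix \
                  example/libreoffice:fresh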

          • by caseih ( 160668 )

            Sigh. That should have read "for many types of applications," not "many years." Google's swiping keyboard is pretty good but always makes a few mistakes.

          • Re:why bother? (Score:5, Insightful)

            by serviscope_minor ( 664417 ) on Monday September 07, 2015 @07:23AM (#50470851) Journal

            No, they don't fall down. I've heard this claim and it's frankly not true (or rather, true only in a very limited set of cases).

            For first-party packages (distro-provided) it's business as usual: Ubuntu seems to have no trouble tracking the latest Firefox builds, and there's a fresh deb available via apt-get update && apt-get upgrade in a very timely manner. Likewise there are the Fresh and Still LibreOffice packages available, depending on whether you want stability with timely security updates or bleeding edge.

            So, demonstrably, fast-moving packages are not a problem.

            What about third party ones?

            Basically it's the same. Add a PPA for the third-party repo and it just works (a sketch of the flow is at the end of this comment). Now, if the third-party dev doesn't want to keep up to date with system libraries, which may change, then they might choose to ship their own .so files. That has the downside of not tracking security updates, but since Linux package managers are the only system where arbitrary packages tend to get security updates to arbitrary libraries anyway, all that does is lower the performance to that of every other OS on the planet.

            And some programs do this: they provide a third-party .deb or PPA and dump files in /opt/foo, completely isolated from the system files in /usr. That works very well too.

            One of the ways that packaging traditionally falls down is with multiple versions of the same package installed concurrently. Part of this is because some programs themselves are not built for that (e.g. expecting files in /etc); however, most packages can be persuaded otherwise, and there are in fact package managers that solve this problem.

            The other way is if a program needs a complex system relying on multiple packages with non-default configuration to be set up. At that point, it's often easier to ship an entire system image.

            However, doing system images for everything seems a tad wasteful.

            The other thing that is happening is Zawinski's cascade of attention-deficit teenagers. Yeah, I know packaging isn't perfect in general and deb is not perfect in a number of specific ways. But the people who want to dump everything and start afresh often seem to be quite unaware of the state of the art. The result is that the new systems are usually better in some ways, but inevitably worse in a number of ways that the author didn't think of but that have been hammered out and working well for 20 years in other systems.

            It's sad, because to someone who's been around for a long time, software doesn't so much advance as take an awful lot of steps sideways. You get big fat brand-new shiny systems which just plain do a bad job of previously solved problems.

            This seems to be the same: many of the reasons for doing away with packages are flat-out wrong, which strongly implies that the people replacing packages don't really understand packages properly and are therefore likely to make a bunch of new mistakes in areas that had previously been solved perfectly fine. So even if they solve some problems (I have no doubt they will), they'll also unsolve a bunch.
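            A minimal sketch of the PPA flow mentioned above, with made-up PPA and package names; the commands themselves are standard Ubuntu tooling.

                # Add a third-party PPA and install from it (names are hypothetical):
                sudo add-apt-repository ppa:some-vendor/some-app
                sudo apt-get update
                sudo apt-get install some-app
                # From then on the vendor's updates arrive through the normal
                # apt-get update && apt-get upgrade cycle like any other package.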

          • It's exciting stuff.

            Is there really a part in there you consider exciting?

        • Re: (Score:3, Funny)

          by Eunuchswear ( 210685 )

          Yes, how dare he develop software that people can use if they want to, the bastard.

          • You're assuming that "convenient for distro developers" translates into "valuable for users".
            Moreover, you are ignoring that having it adopted as a dependency by things like GNOME largely tied most distros' hands. Dedicated non-GNOME distros are the only ones that made a choice. Any distro that wants to ship GNOME (an utterly unrelated product) did not choose systemd, as there was no choice available.

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
  • Will Launchpad build the snaps after it builds the debs?
  • How? (Score:5, Insightful)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Sunday September 06, 2015 @08:05PM (#50469147)

    "The nice thing about Snappy is that it's completely worry-free updates," Shuttleworth said.

    I don't think it was the PACKAGE that caused people to worry about an update.

    For example, Shuttleworth said that if there is a security vulnerability, like a Heartbleed flaw, the way Ubuntu fixes the issue is with a .deb package.

    Isn't that an issue with the code itself?

    The great thing about .deb packages was that the OFFICIAL ones underwent a lot of testing to try to catch problems BEFORE they were deployed. NOT because they were magical .deb packages.

    • Exactly - it is almost as if they were trying to give a bad name to OS software.

      There are very good reasons that Debs act like they do - and even M$ is now adopting the repository approach (but of course if the code isn't open, it can't prevent bad things from happening).

      One could make the argument that all software should be its own blob - no dependencies, because hard drives are now huge - but having 6 different versions running reduces the chance that someone else will be facing the same bug as you are - and m

      • Re:How? (Score:5, Insightful)

        by 0123456 ( 636235 ) on Sunday September 06, 2015 @08:24PM (#50469225)

        I've seen software that depends on bugs to function

        Back in the 90s, I had to intentionally reproduce Microsoft bugs in my Windows drivers, or various apps that had never been run with non-Microsoft drivers would fall over...

        But, yeah, let's make Linux do things the Windows way, so you have sixteen copies of different versions of zlib.dll spread across your disk, all with different security holes. Because you know it makes sense!

    • Re:How? (Score:5, Interesting)

      by phantomfive ( 622387 ) on Sunday September 06, 2015 @08:28PM (#50469251) Journal

      The great thing about .deb packages was that the OFFICIAL ones underwent a lot of testing to try to catch problems BEFORE they were deployed. NOT because they were magical .deb packages.

      I think they are still standing on Debian's shoulders here, and their Snap files are being automatically created based on the .debs. The main feature of a Snap file is that it combines all the libraries in a single archive. All the dependencies, everything. It installs them locally, not for the whole system, kind of like an .app file on OSX.

      If that seems like it would take a lot of disk space, Ubuntu is hoping disk deduplication will take care of that.
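      A rough illustration of the two models, assuming a stock Ubuntu install of that era; the bundle layout in the comments below is invented purely for illustration.

          # Shared-library model: one system copy, owned and tracked by the package manager.
          ldd /usr/bin/openssl | grep libssl     # where the shared copy lives
          dpkg -S libssl.so                      # which installed package owns it

          # Bundle model (directory layout invented for illustration, not the real snap format):
          #   /apps/foo/1.0/lib/libssl.so.1.0.0   <- private copy shipped with app foo
          #   /apps/bar/2.3/lib/libssl.so.1.0.0   <- another private copy for app bar
          # Reclaiming the duplicated space then becomes a filesystem problem
          # (deduplication), not something shared dependencies solve.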

      • Re:How? (Score:4, Interesting)

        by Anonymous Coward on Sunday September 06, 2015 @09:03PM (#50469381)

        All the dependencies, everything. It installs them locally, not for the whole system, kind of like an .app file on OSX.

        So... they install things in Linux containers (or namespaces) and then call it "snappy"? So why not just link everything statically?

        Anyway, I don't get it. You can do that already, but you still need to get those apps to communicate with the outside world, which means leaky containers at best.

        Furthermore, in the case of Heartbleed, it would mean EVERY single application that uses OpenSSL would have to get rebuilt instead of just getting the fixed library and rebooting.
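        For comparison, a minimal sketch of the shared-library fix path being described; the libssl package name matches the Ubuntu releases of that era and may differ elsewhere.

            # One library update covers every dynamically linked user of OpenSSL:
            sudo apt-get update
            sudo apt-get install --only-upgrade libssl1.0.0

            # Long-running processes still mapping the old (now deleted) copy need a
            # restart; they show up with "(deleted)" in their memory maps:
            sudo grep -l 'libssl.*(deleted)' /proc/[0-9]*/maps 2>/dev/null

            # With bundled or statically linked copies, every application image that
            # embeds OpenSSL has to be rebuilt and reshipped before it is fixed.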

        • Re:How? (Score:4, Interesting)

          by phantomfive ( 622387 ) on Sunday September 06, 2015 @09:09PM (#50469419) Journal
          You are right, but I don't know the answers to those questions.
          I think there is definitely room in the Linux world for a self-contained App container. I don't think it's a good idea to make every package in your package management system self-contained, though.
  • This also doesn't follow the Unix philosophy. Replaces a tool everyone is familiar with too. But I see no foaming at the mouth this time.

    • This also doesn't follow the Unix philosophy.

      What part of the Unix philosophy doesn't it follow?

      Replaces a tool everyone is familiar with too

      It is a package manager, competing with plenty of other package managers out there. If you use this instead of Yum, it's not going to affect which GUI you use.

    • by DeVilla ( 4563 ) on Sunday September 06, 2015 @09:17PM (#50469457)

      As others have already pointed out, you are wrong to assume this is like systemd, so I won't further beat that horse.

      However, I think it's foolish for Shuttleworth to go down this path. It's inevitable that systemd will start to require that it gets its hooks into package management. Long story short, the way fixes are applied to systems is fundamentally broken. Whether it's because someone can't find a way to tell what needs to be restarted, or can't impose a way to restart all services without downtime, or can't find a way to apply changes to all containers, or whatever half-thought-out problem is the excuse, it's broken. And the only fix will be to bundle it into the logic of systemd. Amongst other things, a package format will need to be mandated, because supporting multiple formats is stupid or hard or out-of-scope ... you name it.

      No one has been able to oppose the systemd maintainers except the kernel developers when it comes to user-space interfaces. Canonical hasn't been able to stand its ground against these developers in the past. I doubt they will in the future either. Shuttleworth is creating another failure.

    • Pray tell, what is the standard package manager for "the Unix way"?

      There never was one

      hence, no problem

  • by pubwvj ( 1045960 ) on Sunday September 06, 2015 @10:14PM (#50469635)

    "completely worry-free updates"

    Those are very scary words whenever someone utters them, because they seem to fail to comprehend the fact that testing is not perfect. I have real work to do. When they F*sk my system with an update that fails and it loses my data or prevents me from working, just once, it can be a huge disaster for me. Multiply that by all the users. Not an issue for the developer. Completely worry-free updates. Not.

  • Famous Last Words (Score:5, Interesting)

    by JustAnotherOldGuy ( 4145623 ) on Sunday September 06, 2015 @11:27PM (#50469843) Journal

    "The nice thing about Snappy is that it's completely worry-free updates"

    Any time anyone says something is "completely worry-free", that's your cue to worry. Ask me how I know.

  • Like upstart.

    We're only thrilled to hear what Poettering will introduce. Because Red Hat will adopt it and then everyone starts using it, because if it's Poetteringware, it's a quasi-standard, isn't it?
