
Ubuntu 16.04 LTS Will Bring Snap Packages For Up-To-Date, More Secure Apps (neowin.net) 127

An anonymous reader points us to a report on Neowin: Canonical, Ubuntu's parent company, has announced that Ubuntu 16.04 LTS (Long Term Support) will come with support for the snap packaging format and tools. As a result, end users will get more up-to-date apps, something that proved tricky in the past due to "the complexity of packaging and providing updates," which prevented updates to some apps from being delivered. Snaps will make the Ubuntu platform more unified: developers will more easily be able to create software for PC, Server, Mobile, or IoT devices. The other major benefit of snaps is that they're more secure than software installed through deb packages. Snaps are isolated from the rest of the system, meaning that malware packaged with a snap won't be able to affect your Ubuntu installation.
This discussion has been archived. No new comments can be posted.

  • by TechyImmigrant ( 175943 ) on Wednesday April 13, 2016 @01:25PM (#51901161) Homepage Journal

    This is like static linking. Just link in all the code from all the libraries your program uses. Back to the simple life.

    • So outdated libraries you have to rely on third parties to update in sync? That doesn't sound terribly safe.

      • by allo ( 1728082 )

You mean like most Windows programs?

      • It alters the risk profile.

Shared libraries require that all the library code be present, making ROP attacks easier, and when there's a vulnerability in the library it is present in every program that links to the shared library.

Static libraries require only the code that is actually used to be present, making ROP attacks harder. When there's a vulnerability in a set of versions of the library, it is only exposed if the vulnerable code is linked and a vulnerable version is linked.

        So neither is perfect, but I like st

  • by edittard ( 805475 ) on Wednesday April 13, 2016 @01:27PM (#51901179)

    Another outbreak of â disease.

    • Yes... it looks like just another implementation (outbreak) of an app store. An app store where "security" means secured against us (the owners of the devices and computers) more than against any bad actors. I hope to be proven wrong, but that is usually where these sandboxed environments end up.

  • Why? (Score:5, Insightful)

    by binarylarry ( 1338699 ) on Wednesday April 13, 2016 @01:28PM (#51901187)

    When you think about what sucks in Ubuntu right now, are apt and deb really the worst offenders that need work?

    • Just curious, what do you think "sucks" in Ubuntu right now? Hopefully this won't be a complaint about the Amazon shopping lens (off by default in 16.04) or Unity (because at least seven other DEs are supported by Canonical).
      • How does Amazon shopping lens interfere with my bash shell session?

        • Re:Why? (Score:5, Funny)

          by fisted ( 2295862 ) on Wednesday April 13, 2016 @02:12PM (#51901555)

          The same way it interferes with your redundant RAID array.

          • by jaxn ( 112189 )

            I literally watched someone google the word "redundant" for about 15 minutes in a meeting once. He was skipping from dictionary site to dictionary site reading the definitions of redundant. I wanted to stop the meeting and address it, but I feared my humor might be lost on the bunch.

      • Re: (Score:1, Interesting)

        by RghtHndSd ( 4221695 )
Here is one that I ran into a week ago. I had some odd behavior and wanted to reinstall Ubuntu without losing my files. Here are the instructions: https://help.ubuntu.com/commun... [ubuntu.com]

        ... choose manual partitioning ("Something-else" option), then select Ubuntu system partition, set its mount point as "/". Be sure to keep the same format type, the same size, and untick the "Format" checkbox or all data on "/" will be deleted!. Also set other partitions (/boot, /home... see DiskSpace) if needed.

        This is an extremely dangerous way to reinstall.

        • by jofas ( 1081977 )
          Why? You're choosing to go under the friggin hood, of course you'll incur a higher risk.
        • Why? That has been the standard way to do what you are trying to do on linux since forever. The way installers, like the Ubuntu installers, work under the hood is 1) they format the disk, 2) they set up a staging area, and 3) they install everything the system needs using the package manager. Afterward they will do some initial config to get the system to boot. The package manager will only install files that belong to its packages. So it won't 1) delete or empty directories that have already been created,

          • I read the complaint as there not being a safe "reinstall" button that sets the installer to use the same partitions and not format anything.

            • by Hylandr ( 813770 )

              You kids and your 'installers'. I admit they are nice now, but in 1997 you had to know how to do things and some methods were easier or safer than others.

              I think this is what RghtHndSd was referring to.

        • by amiga3D ( 567632 )

          If you're afraid then stay away from the command line. It's dangerous and scary.

Honestly, and I don't know if this is still the case, but how about the fact that when "upgrading" to the newest release the recommended way was to reinstall the damn system, since their update methods were nowhere near reliable enough?

I'll take Debian, thanks; upgrading from release to release is easily doable, with well documented errata and workarounds posted.

        Then there is the fact that they feel the need to take Debian UNSTABLE, screw around with it a bit and probably introduce some extra bugs, and then call it S

        • Re:Why? (Score:4, Insightful)

          by F.Ultra ( 1673484 ) on Wednesday April 13, 2016 @04:59PM (#51903147)
Well, each to their own I suppose; myself, I have upgraded numerous servers and desktops all the way from 8.04 to 14.04 LTS (and 15.10 for the desktops) without any problems whatsoever. Not claiming that you didn't experience problems, just that they're not universal. I bet you can find installs where Debian could not be upgraded without problems, or any other distribution for that matter.
          • by jrumney ( 197329 )
Servers generally don't have major issues with upgrades, but desktops are another matter. Around the timeframe when they switched from XFree86 to Xorg there was some breakage: for a few versions, every new release would break booting to the desktop for one reason or another, until Xorg reached the point a couple of years later where it worked in pretty much all situations without a config file.
            • Of course but that is not Ubuntu specific which was my point, XFree86 to Xorg transition happened on all distributions. Myself I was burned several times with Gentoo when GCC changed their ABI in the 3.x series (or if it was in the 2.x series, don't really remember).
        • by igloo-x ( 642751 )

LTS to LTS distribution upgrades (e.g., from 12.04 to 14.04) aren't usually possible on launch day. At least with 12.04 to 14.04, you had to wait until 14.04.1, which came a couple of months later.

          Though to be honest, if your operation hinges on in-place distribution upgrades on production systems, you've got bigger problems.

          • Though to be honest, if your operation hinges on in-place distribution upgrades on production systems, you've got bigger problems.

As someone who was originally hired for IT janitor work, I find myself in the uncomfortable situation of suddenly being responsible for the ancient Linux infrastructure. I would like to know what you think the proper procedure for upgrading a production server would be. Personally I would do an in-place upgrade after I've tested it on a cloned system, restored from a recent b

Also, it seems Ubuntu needs to restart after every batch of updates nowadays. I thought this was only a problem with Windows.

I see one advantage in snap apps. The current situation is depicted in this nice xkcd: https://xkcd.com/1200/ [xkcd.com]. Perhaps in the future we will have more isolated applications.

      • by Junta ( 36770 )

Except this is more about isolating their on-filesystem footprint; at runtime they still pretty much have the run of things. So a package can't 'update' the system-wide openssl with a malicious version as a side effect, but it could still happily access files in your home directory to log on to whatever (it could also lay down a library in a user directory and manipulate LD_LIBRARY_PATH for children, so it could opportunistically spawn a browser with a different library path to be nefarious).

        Conv

      • by jofas ( 1081977 )
        This is a fallacy. The reverse is also true: the admin account won't be able to access your personal accounts online.... by itself. If you choose to have local applications retain your credentials, that's your problem, not root's.
      • Re:Why? (Score:4, Insightful)

        by binarylarry ( 1338699 ) on Wednesday April 13, 2016 @02:35PM (#51901793)

        The answer to the comic's problem isn't a new package manager, it's two factor authentication.

      • Actually that cartoon is not the current situation on Ubuntu. By default root login is disabled and the user account has sudo privileges. So in a sense the user account is the admin account.

        Unless you want to authenticate (password, fingerprint, whatever) for every single action you take, at some point the computer needs to be in an "unlocked" mode. Best practice is always to lock it when one steps away from the keyboard, even for a moment.

        If you're really paranoid, you could set up a script to monitor t

    • Re:Why? (Score:5, Insightful)

      by SumDog ( 466607 ) on Wednesday April 13, 2016 @01:49PM (#51901391) Homepage Journal

Exactly. Package management with dependencies is what Linux does RIGHT! Look at Android and its horrible "you have to update the whole damn firmware because nothing is managed by packages!"

Apks and MacOS app containers add so much redundancy. If you have a library with a security problem, you can now have 50 copies of that library with a security problem scattered everywhere. In the Linux world with proper package management, if developers use the system libraries, then their apps get those security updates.

      • Re: (Score:2, Interesting)

        by Burz ( 138833 )

        Who gives a shit about an extra 3-5 MB of libraries per app when traditional repos are compiling apps against library revisions that the authors never tested with? This has added a lot of unpredictability to running Linux systems (especially desktops).

OS X is a model of successful packaging and distribution, and it's about time some Linux distros took inspiration from it.

        OTOH, even not considering library revisions, Linux packaging never was "proper" for applications beyond the server space or lab. It

        • Re: (Score:3, Insightful)

          by Anonymous Coward

It's not about the 3-5 MB (though in many cases we are talking more like 10-30 MB per app, which quickly adds up to several GB overall), but, as the previous post said, about the hundreds of security holes that will be patched months late, if ever.
I.e., it brings the Android model of security (which is: just close your eyes hard enough to not see the holes) to Linux.

      • Apks and MacOS app containers add so much redundancy.

        With great redundancy comes great portability. I appreciate that Linux is a large collection of small packages and that package management in Linux is a true wonder of software engineering, however it is far from perfect. Very far from perfect. If you happily stay within the exact version and release of a package that someone provided you then everything is usually quite peachy even between upgrades of major distro versions.

        However as soon as you put that comfort blanket aside you can quite quickly find you

I agree wholeheartedly. As a developer, Linux distributions are a godsend: you simply build debs and rpms for your application and send them out to customers. With something like Windows (which kind of mimics the approach that we see here) you have to keep track of each and every library that you use, and make sure that you fetch, compile and distribute to all your clients when there is a security hole in any one of them.
      • by jrumney ( 197329 )
        Android has incremental updates now. My last system update was about a 6MB download.
    • by grumbel ( 592662 )

The lack of a packaging format for third party apps has been one of the biggest and most persistent problems in the Linux landscape for ages. I have no idea of the quality of the work Ubuntu is doing here, and it does seem to duplicate the work going on with xdg-app, but I really wouldn't mind getting rid of tarballs and self-extracting shell scripts. The monolithic dependency trees that package managers require at the moment just don't scale and never provided a good way for third party apps to plug into them (just adding random third party repos is inherently fragile and insecure).

      • Whatever you're smoking is something that's stronger than anything legal in Colorado.

        Third party app packaging support for Linux is something that's been around for ages. RPM for Red Hat and friends, dpkg for Debian/Ubuntu, and other lesser-known facilities for other systems. In actuality, probably 80+ percent of the stock repositories of the popular Linux distros are "third party" apps which have been built as distro-specific system packages - they're just part of the official repos, that's all. Other than

      • by Burz ( 138833 )

        The monolithic dependency trees that package managers require at the moment just don't scale and never provided a good way for third party apps to plug into them (just adding random third party repos is inherently fragile and insecure).

        Exactly. And it resembles a whole host of other maladies [slashdot.org] brought on by what amounts to a sick subculture. Seriously, this is a class of system developers who cannot whip up one shred of empathy for application developers.

The best thing you can do for app developers is to nail down all your core APIs (a rich set of them) and define everything else as "optional" or "3rd party". That way the app dev can at least start out writing apps that check for the OS version. Yes, for most apps the dependency check should

        • by armanox ( 826486 )

          Well, I do believe that is the idea of a "stable" release. The trend of rolling release that has become so popular these days is what you should be against. If I release software for RHEL 5, I can be pretty certain that the libraries will not change for the life of RHEL 5. However, Fedora changes the kernel version, and then we get Arch and Gentoo that are constantly updated, plus based on the Fedora Project's Google+ page more and more people are just running Rawhide claiming that traditional releases a

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Packaging format is not an impediment. Nothing prevents you from installing your app and all its dependencies into a subdirectory like /opt/my-project and distributing a tarball. Using the ELF ${ORIGIN} linker macro, you can even ship subtrees that can be transparently moved around as a bundle anywhere on the filesystem, just like OS X applications. So, for example, you could create my-project.zip or my-project.tgz that unpacks to my-project, and reliably execute the binaries straight out of my-project/bin

    • by Kjella ( 173770 )

      When you think about what sucks in Ubuntu right now, are apt and deb really the worst offenders that need work?

By far no. But sometimes all those dependencies drag you into cascading updates that you don't want. Like, I once wanted a new feature in KDevelop, but I really didn't want to touch the rest of my KDE apps because they were working fine. But KDevelop insisted on updating the KDE libraries, which led to apt-get insisting on new versions of almost everything. If a "snap" could let me install that one application in isolation, that's certainly a feature I could like. And maybe the other way around too, if I rea

    • by hitmark ( 640295 )

      Yes and no.

The basic problem with deb, and rpm, and similar package managers, is that they get hung up on the package name and version.

      Meaning that you can only have one version of a particular name installed at any one time.

Want to install another version? Either you remove the old version or you change the name to avoid a conflict. But changing the name plays havoc with dependency tracking.

If deb and/or rpm were changed so that the manager could handle tracking multiple versions of the same package n

      • Not really. Deb and rpm can handle multiple versions just fine, as long as the underlying software supports multiple versions. Remember, a package is just a collection of files with some instructions where to put them. So if you try to install two files with the same name in one place, you are going to have problems. In other words, it's not the versioning, it's the inherent limitation of the filesystem itself. If a library renames itself between versions, though, it won't have problems. Very few libraries
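The filesystem-level versioning described above is exactly what the soname convention already provides; a sketch with invented names (coreutils only):

```shell
set -e
d=/tmp/sodemo; rm -rf "$d"; mkdir -p "$d"; cd "$d"
# Two real files, one per installed package version
touch libfoo.so.1.0.3 libfoo.so.2.1.0
# Soname symlinks: what the dynamic linker resolves at run time
ln -s libfoo.so.1.0.3 libfoo.so.1
ln -s libfoo.so.2.1.0 libfoo.so.2
# Dev symlink: what "-lfoo" picks up at build time (points at the new version)
ln -s libfoo.so.2 libfoo.so
ls -l libfoo.so*     # old and new versions coexist; no filename collision
```

Programs linked against libfoo.so.1 keep working after version 2 lands, because each version keeps its own filename; only the single dev symlink moves.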

  • Snaps! (Score:5, Funny)

    by Anonymous Coward on Wednesday April 13, 2016 @01:29PM (#51901195)

    Modern snap snappers know that only snaps can snap snaps, so give up on your insecure luddite packages and use snappy app snaps! Snaps!

    • by Anonymous Coward

      Aww, snap.

    • by aliquis ( 678370 )

      Modern snap snappers know that only snaps can snap snaps, so give up on your insecure luddite packages and use snappy app snaps! Snaps!

      I want a separate package-manager for every package I install. Surely there's one which does something better than everything else?

  • by Anonymous Coward

    they can decide to bundle specific versions of a library with their app.

    Do. Not. Want.

    • Re:Stupid appers (Score:4, Informative)

      by Junta ( 36770 ) on Wednesday April 13, 2016 @02:30PM (#51901747)

      I agree, funny thing is that application packaging can *already* bundle their own version of a library, if they cared. That's part of the whole point of the /opt/ filesystem hierarchy, to give applications their own space to apply *whatever* they want

      • It's not really the version of the library that's the problem, in the majority of cases. As a few have already mentioned, the interfaces often don't change between library versions, so older software can often compile fine against newer libraries. The problem is, most people want binary distributions. Source distributions are great, and very flexible, until you want to A) install something closed-source, or B) install something large and complex, like LibreOffice. Most people, myself included, don't care to

        • by Junta ( 36770 )

          What I'm saying is that a software provider so inclined may package their content into /opt/ and pull in dependencies of their liking. Hell up to and including glibc itself, if they are absolutely crazy enough. I know this painfully well as I have worked on a team to provide *one* set of rpms that could be installed on *any* rpm based distribution, and there was a whole lot of senseless library bundling involved to make that work, which I objected to but they did it anyway. Suddenly we were on the hook f

    • by ewhac ( 5844 )
      I can think of one case where you might actually sorta maybe kinda want this: Media applications.

More specifically, video editing applications. 'ffmpeg' has been a moving target for years. The command-line arguments and the APIs of the various libraries are in constant flux. Part of this is good, in that ffmpeg can add new features, refine the quality and reliability of the codecs, and keep up with new developments in digital video. However, it also means that an application can't really rely on the interf

  • SNAP [wikipedia.org]

    Just sayin'

  • by Anonymous Coward

    Yeah, we already have regedit... Err gconf, now every app will come with its own set of dll.. Err .so...
    Time to try pcbsd !

    • by SumDog ( 466607 )

      and we have svchost.exe too (it's called SystemD)

So you don't know what svchost.exe or systemd does, but still made a post.
    • Re:Winlux 4va ! (Score:4, Interesting)

      by NotInHere ( 3654617 ) on Wednesday April 13, 2016 @01:52PM (#51901413)

Well, I hope that open source packages won't switch to the new snap system, as it of course adds duplication, and now every application provider has to update their bundled copy of a library just because of some Badlock vuln or something. Some app store owners try to counter this by threatening to take down apps whose owners don't update their libraries. But this only catches the biggest libraries, the ones with the most light shined on them; a small library might never get updated.

Yes, Poettering has correctly seen that the duplication can be fought with fs-level dedup, like btrfs does, but that still doesn't solve the update problem.

Where the snap system shines is closed source applications, and open source applications that get shipped outside of the distro's packaging system: if adopted by all distros, you can ship cross-distro binaries without having to bother about some distro's settings for their libraries.

And closed source applications are one of the areas the Linux desktop sucks at. There are almost no Linux versions of many more or less popular closed source applications.

Well, I hope that open source packages won't switch to the new snap system, as it of course adds duplication, and now every application provider has to update their bundled copy of a library just because of some Badlock vuln or something. Some app store owners try to counter this by threatening to take down apps whose owners don't update their libraries. But this only catches the biggest libraries, the ones with the most light shined on them; a small library might never get updated.

Where the snap system shines is closed source applications, and open source applications that get shipped outside of the distro's packaging system: if adopted by all distros, you can ship cross-distro binaries without having to bother about some distro's settings for their libraries.

The other part of snaps that I think *does* make them attractive for some open source software is that the application is installed and run in a container. This is great for those web browsers that more and more think of themselves as operating systems, and to a lesser extent for many other applications. Being able to control the camera and mic from outside of the container, restrict it from writing to the parent container's filesystem at all except for ~/Downloads, block all incoming connections, restrict outgoi

  • The really sweet feature is native support for 32-bit EFI. Finally all of us who bought cheap BayTrail tablets can install Ubuntu like normal people.
    • by Anonymous Coward

What about drivers? There are instructions on the web for installing Linux (including Ubuntu) on Bay Trail laptops, which presumably apply as well to tablets. The subsequent problem that I encountered was the lack of wifi and audio drivers, which led me to revert to Windows and to largely write that hardware off. This was on an Asus X205TA (which I returned) and on a Lenovo IdeaCentre Stick 300 (which I've kept).

  • XKCD (Score:5, Funny)

    by darkain ( 749283 ) on Wednesday April 13, 2016 @02:19PM (#51901639) Homepage

    Obligatory XKCD reference: https://xkcd.com/927/ [xkcd.com]

  • 'More secure' (Score:5, Insightful)

    by Junta ( 36770 ) on Wednesday April 13, 2016 @02:21PM (#51901657)

    More secure as in 'have to wait for app packager to update openssl library rather than updating it system wide and taking care of all dynamically linked apps'.

In fact, for 'security' I'm having a real hard time linking the rhetoric to a meaningful benefit. Among the benefits of this sort of strategy, I don't see how security would be one of them.

    • by Anonymous Coward

      You don't need to ship your own copy of openssl, snap packages can reference system libraries.

      The main security benefit is that snap apps are installed in their own sandbox and don't need root access to install.

      • by sad_ ( 7868 )

They don't need root? Now shit is really going to hit the fan.
When installing Ubuntu on somebody's PC, it stays stable because they can't install any of that crap that you see every single person installing on Windows. Now they will be able to do so on Linux too, opening the door to all kinds of nasty uncontrolled stuff.

        • But that's the whole point. No nasty stuff can happen because every program is sandboxed.

          • by Junta ( 36770 )

            But... it isn't.... I mean, not really.... A non-admin user may install an app into their home directory, so that's a matter of 'convenience' and 'user doesn't need to sudo to install', but it can still access home directory and stuff... Basically precisely the scenario poked fun at by the xkcd talking about how his system would protect itself from installing drivers, but would merrily let applications at his browsing history and such.

            'sandbox' generally only has meaning if you have a very small and well

  • by digitalderbs ( 718388 ) on Wednesday April 13, 2016 @02:43PM (#51901879)

The details on this new packaging system are scarce--and I've checked--but it looks like a reimplementation of Docker, which would be a welcome addition. A number of comments have stated that this would lead to library fragmentation and security problems, with a large number of library 'copies' needing updates. However, if this is implemented like Docker, all the apps would depend on a core image that would itself be updated.

Frankly, docker apps are the future of package management. Each app is sandboxed (like a chroot jail), and you can establish firewall-like access to the app for directories, services and such. Also, dependency hell goes away because these apps combine the advantages of static and dynamic libraries. As long as a package is using a core image (like Ubuntu 16.04), updates to that image are automatically propagated to all apps.

The only puzzling aspect of this is why Ubuntu didn't just use Docker. X connections are non-trivial with Docker, and perhaps this new system makes access more straightforward. In any event, I think there's more here than meets the eye. Apt rocks, but Docker is better for package management.

    • by Etcetera ( 14711 )

Also, dependency hell goes away because these apps combine the advantages of static and dynamic libraries. As long as a package is using a core image (like Ubuntu 16.04), updates to that image are automatically propagated to all apps.

      This is 2016. What Linux distro out there still has a "dependency hell" problem? Yum and Apt-get are a decade old and handle dependency resolution. If your packages are compiled against a "core image" (or actual, bona fide release) and your repo is generated wrong and you're experiencing "dependency hell" then either:
      a) someone is doing something wrong, or
      b) you have an actual, bona-fide file conflict that can't be automatically resolved

    • The details on this new packaging system are scarce--and I've checked--but it looks like a reimplementation of Docker,

      I guess we'll find out more in time, because I too couldn't find any details on how this is implemented. If it does use containers (a la Docker), that would be really cool. As soon as Docker started getting more fleshed out, this was the first application I thought of that would be perfect for it.

      An application being able to use alternative libraries is definitely a need on modern linux. I can't count the number of times that I needed to do massive upgrades of the system just to install a newer version of a

    • by Junta ( 36770 )

      You are assuming good practices. Yes, I can have a dockerfile and properly maintain and rebuild periodically 'from scratch' as it were to pick up updates.

However, there are a crap ton of images out there that are not so disciplined. They did a docker pull and just started going to town, doing a docker commit against a running instance to get a new image, rather than doing something that would ensure package updates. And that is *if* the maintainer is even paying attention at all.

      It's also a licensing n

  • People are still using Ubuntu?
    • wow. so edgy (Score:2, Interesting)

      by Anonymous Coward

      Yes, grampa, people still use Ubuntu. The only bad part about Ubuntu is the Unity desktop environment. Xubuntu and Ubuntu MATE don't suck.

If you make the mistake of downloading the vanilla Ubuntu ISO, all you have to do is install a better desktop environment (apt-get install xfce4 or apt-get install mate-desktop), log out, log in with the new DE, and then remove Unity (apt-get purge unity*).

  • by emblemparade ( 774653 ) on Wednesday April 13, 2016 @06:50PM (#51903913)

Wow, people commenting seem to have so little information about what this actually is. (Canonical is partly to blame, as usual, for doing a poor job at messaging.)

    This is not replacing the Debian build system or Debian packages. Ubuntu will continue to be based on Debian.

This is an additional packaging system that makes it exceptionally easy to distribute Linux applications and services more reliably. Underneath it uses LXC (also originally developed at Canonical), the same jail-like technology that powers Docker and LXD. It basically lets the application get its own "view" of the operating system's filesystem (using AuFS), so that you can distribute required dependencies with the application. Of course it can't override the Linux kernel or other important system services, but it actually solves a major hurdle in distributing software across various OS library baselines. Until now, we've been using PPAs or other external Debian repositories to distribute software -- you can still use them if you prefer, but these are tied to the baseline and need constant tweaking by the packagers. A Snappy package made now should be able to run years from now without a problem. The Snapcraft packaging tool is very easy to use and does much of the hard work for you: you can even just give it a git repository URL, and it will pull, build and package. I see it being very useful for something like Steam.

    Also, like Docker, Snappy uses SHA-signed diffs, so package updates will be very fast. It also makes it trivial to switch between versions.

    The announcement is that Ubuntu 16.04 will come with Snappy built in, so you can immediately install Snappy packages if you want. You don't have to.

    There is also a new flavor of Ubuntu called "Snappy Ubuntu Core" in which the base OS itself is a Snappy image, so that it gets updates the same way as the other packages, and in the same way you can switch between versions. It is useful for various special use cases. For example, a phone OS will have an easier and safer job upgrading while letting the user trivially revert back if things break. It is not the official Ubuntu recommended for all users, but rather a building block for developers to create specialized Ubuntu-and-Snappy-based distributions.

    • Interesting. That's more information than I was able to find anywhere else. Thanks.

Here's what I'm most worried about, though: how dependent is it on non-lazy packagers? In other words, the easiest and most convenient way to package anything is to ship with all dependencies and have the app use those. The problem, though, is that each application is then solely responsible for updating itself, including patching bugs in any dependencies, so it quickly leads to running a million app updaters in the background, which
