Linux Foundation: Security Problems Threaten 'Golden Age' of Open Source (77 comments)

Mickeycaskill writes: Jim Zemlin, executive director of the Linux Foundation, has outlined the organization's plans to improve open source security. He says failing to do so could threaten a "golden age" which has created billion-dollar companies and seen Microsoft, Apple, and others embrace open technologies. Not long ago, the organization launched the Core Infrastructure Initiative (CII), a body backed by 20 major IT firms, and is investing millions of dollars in grants, tools, and other support for open source projects that have been underfunded. This was never more obvious than following the discovery of the Heartbleed OpenSSL bug last year. "Almost the entirety of the internet is entirely reliant on open source software," Zemlin said. "We've reached a golden age of open source. Virtually every technology and product and service is created using open source. Heartbleed literally broke the security of the Internet. Over a long period of time, whether we knew it or not, we became dependent on open source for the security and integrity of the internet."
This discussion has been archived. No new comments can be posted.
  • What OSS is insecure? I think it is company executives and lame sysadmins that are insecure. Of course, easier-to-use security could help.
    • Well, installing a Debian package allows it to run a script that can execute arbitrary commands with superuser privileges. Pretty simple to nuke a system by just giving someone a hot_chicks_card_game.deb.
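To make that concrete, here is a minimal sketch of how little a .deb needs before its maintainer script runs with root privileges at install time. The package name is the joke one from the comment; everything else (paths, field values) is illustrative, not a real package.

```python
# Build the skeleton of a Debian package whose maintainer script (postinst)
# dpkg will execute as root during installation.
import os

pkg_dir = "hot_chicks_card_game/DEBIAN"
os.makedirs(pkg_dir, exist_ok=True)

# Minimal control file -- just enough metadata for dpkg-deb to accept it.
with open(os.path.join(pkg_dir, "control"), "w") as f:
    f.write(
        "Package: hot-chicks-card-game\n"
        "Version: 1.0\n"
        "Architecture: all\n"
        "Maintainer: nobody <nobody@example.com>\n"
        "Description: demo of maintainer-script privileges\n"
    )

# The postinst script: dpkg runs this as root. It prints its UID here,
# but it could run absolutely anything.
postinst = os.path.join(pkg_dir, "postinst")
with open(postinst, "w") as f:
    f.write(
        "#!/bin/sh\n"
        "# Runs as root at install time; could just as easily be rm -rf /\n"
        'echo "postinst running as UID $(id -u)"\n'
    )
os.chmod(postinst, 0o755)

# To finish and install (outside this sketch):
#   dpkg-deb --build hot_chicks_card_game
#   sudo dpkg -i hot_chicks_card_game.deb
```

There is no sandboxing step in that flow: trusting the package means trusting its scripts with root.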
    • by Anonymous Coward

      OpenSSL and GPG both have had numerous security flaws opened against them in the last 6 months or so.

      The Linux Kernel regularly gets critical security updates.

      A better question is, "What OSS *isn't* insecure?" If you think anything is magically "secure" you're in for a bad time.

      • by Z00L00K ( 682162 )

        So do Windows and a number of other applications and operating systems. Mostly the security issues are small things, but now and then a big one appears.

        Many large issues are also caused by not one single mistake but by a chain of mistakes where each mistake by itself wasn't fatal. This isn't limited to software alone but we see it from time to time in the physical world as well.

  • it needed to happen. (Score:5, Interesting)

    by Gravis Zero ( 934156 ) on Saturday October 10, 2015 @09:34AM (#50698625)

    Heartbleed was a blessing in disguise because companies were blindly assuming this software was secure and thus never investing a dime in its development. This internet-scale problem woke some people up, and now they are actually investing in real security.

  • by somepunk ( 720296 ) on Saturday October 10, 2015 @09:41AM (#50698647) Homepage
    Still a serious bug, but if forward secrecy had been widely deployed, much, much less threat exposure would have occurred.

    That's the lesson. Code audits are great, but they still miss stuff and are expensive. Take good practices more seriously, and you get a lot of bang for your investment in time/money/whatever.
    • Look at the list of affected companies. You'll notice that none of them are financial institutions, which were still on an older version. Some people do their due diligence and audit the code; others can't afford to.
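The practice the parent comments describe can be enforced in a few lines. As a sketch using Python's standard `ssl` module (the cipher string shown is one reasonable choice, not the only correct one), a server context can be restricted so that every negotiable suite uses ephemeral (ECDHE) key exchange, i.e. is forward-secret:

```python
# Restrict a TLS server context to forward-secret key exchange, so that a
# later private-key compromise (Heartbleed-style) cannot be used to decrypt
# traffic that was recorded earlier.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# TLS 1.3 suites are always forward-secret; for TLS 1.2, allow only ECDHE.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

for c in ctx.get_ciphers():
    # Every remaining suite is either a TLS 1.3 suite or an ECDHE suite.
    print(c["protocol"], c["name"])
```

With this in place, stealing the server's long-term key after the fact yields nothing about past sessions, which is exactly the exposure Heartbleed made painful.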
  • by PvtVoid ( 1252388 ) on Saturday October 10, 2015 @09:47AM (#50698675)

    It's really, really good that somebody is stepping up and providing funding to maintain what have become critical Open Source infrastructures.

    At the same time, it's totally disingenuous to imply that recent security issues are somehow caused by the fact that they are Open Source. There is no reason whatsoever to believe that, had the same services been proprietary, they would have had fewer bugs affecting security. In fact, the only effect of having critical services closed source would very likely have been that the security issues would have gone undiscovered for even longer. Making the critical security infrastructure for the internet closed source would be insane.

    Open Source is working exactly as intended here: critical security issues were identified (ok, way too late, agreed), and fixed. Now the people who rely on those infrastructures are realizing (also way too late) that it is in their interest to provide funding to maintain them. This is how it's supposed to work.

    • Making the critical security infrastructure for the internet closed source would be insane.

      All of the Cisco networking gear runs on closed source software.

      • by Anonymous Coward

        LOL, and look how secure Cisco gear has been lately! But more importantly, even Cisco is going down the software-defined networking route with open source NOS.

  • by Anonymous Coward

    The problem is that low-level "bootstrapping" software like the BIOS is still closed source, and—worse—becoming so complex that it's basically an entire operating system unto itself.

    Consider Intel's Management Engine and the associated Active Management Technology that is in every modern (mid-range and above, anyway) Intel-based desktop/laptop these days; it provides a whole personal computer within what you, the user, think is the actual personal computer, and that embedded personal computer h

    • You can even buy laptops now that are OSS from the firmware on up. There are dozens of OSS u-boot based dev boards available. You can run a system off a CPU design loaded onto an FPGA. There are cellphones built with basebands you can load OSS firmware onto, or that are not linked into DMA with main memory and have CPU-controlled hard off switches. These aren't flagship consumer products, but they are available. Security is rarely convenient.

      Intel's management engine and the like are not some vast conspiracy
  • by 140Mandak262Jamuna ( 970587 ) on Saturday October 10, 2015 @11:47AM (#50699159) Journal
    It is never in the interests of the vendors to support full two-way interoperability. Upstarts will support interoperability with established players when it suits them, and sabotage interoperability to prevent people from leaving. From the early days of Microsoft making sure Unix file systems could be seen by Windows but not vice versa, to present-day CAD vendors investing an order of magnitude more effort in "importing" other vendors' CAD formats while staying very apathetic toward bugs reported against their own export capability. Parametric Technologies started encrypting its files to prevent Microedge or some such vendor from reading them, and invoked the DMCA to stop anyone else from reading the files. The format is PTC's, but the data is the customers'. They use every trick in the book to hold the data hostage. Every CAD tool vendor does this; PTC is not particularly worse than any of its competitors. It is exactly the same fight as ODF versus XLS. The "open" formats are STEP and IGES.

    But it is in the interest of the customers to make sure their data never gets locked up in a format they don't control. Why wouldn't the Fortune 500 companies invest a tiny part of their IT budgets to support the ACM or IEEE in playing the role of arbitrator when it comes to file formats, data and export/import protocols, fundamental security, etc.? These things should be neutral, and no vendor should see them as yet another way to invade and occupy their customers' systems and processes.

  • If there is no such evidence, then how does this article make any sense?

  • by drinkypoo ( 153816 ) <> on Saturday October 10, 2015 @01:27PM (#50699565) Homepage Journal

    If this is the golden age of FOSS, it's only because humanity isn't going to make it long enough to have a real one. We'll have a real one of those when we abolish software patents. Suddenly, FOSS no longer has to fear attack on bullshit grounds by patent trolls, or by megalithic competitors abusing their market position. Until then, it's still a war, and nobody wins.

  • The root cause of all of these security problems has been in plain sight since 1970 or so, yet only a few people are even aware of it. It's obvious once you get it, and the scope of fixing things comes clearly into focus. So, do you really want to take on forking every program to build a new version of it? If so, you can fix it; if not... this will continue to happen, and government will try to fix it by fiat, badly.

    The cause is that our operating systems operate on the assumption that programs can be trusted.

    • The cause is that our operating systems operate on the assumption that programs can be trusted.

      Android tries to sandbox programs, pretty successfully until you find a hole in the sandbox someplace. Two new Stagefright vulnerabilities have been found recently. Mine was just patched, too. I guess a new edition of the custom ROM I'm running will be available shortly. And there's always SELinux, but configuring it is still a massive PITA. The tools and their build processes are immature.
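The underlying idea, refusing to trust the program, doesn't require Android. As a very loose sketch on plain POSIX (nowhere near SELinux's or Android's granularity; the child command here is made up for the demo), a parent can at least deny a child process unlimited resources before it runs:

```python
# Run an untrusted child process under hard resource limits -- a crude
# ancestor of the sandboxing discussed above, standard library only (POSIX).
import resource
import subprocess
import sys

def drop_limits():
    # Applied in the child just before exec: cap CPU time and memory so a
    # hostile or buggy program cannot spin or eat RAM forever.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))            # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (512 << 20,) * 2)   # 512 MiB

proc = subprocess.run(
    [sys.executable, "-c", "print('sandboxed hello')"],
    preexec_fn=drop_limits, capture_output=True, text=True,
)
print(proc.stdout.strip())
```

Real sandboxes add syscall filtering (seccomp), namespaces, and mandatory access control on top; resource limits are only the oldest and weakest layer.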

    • by KGIII ( 973947 )

      A remarkably beautiful young lady revealed something to me just the other day. This is from a pop song in 1982 - I'd never noticed it when it was popular and getting radio play.

      Back at base, bugs in the software
      Flash the message
      "Something's out there"

      Yup... Ah well, we'll always have bugs. Anyhow, isn't the idea of a microkernel meant to minimize such things? IIRC that was the MINIX guy's (I forget his name) big complaint about Linux. Stability and security come at a cost in performance, but the price might be worth paying now that we have hardware that is so speedy.

      Oh, the song is 99

      • A microkernel minimizes the amount of code you have to trust. MINIX as of 3.0 is also designed to be fault-tolerant, able to recover from almost any sort of bug. You tend to get a lot of transactional and message-passing overhead, though. For example, the filesystem module isn't allowed to access the disk controller; it has to ask the block layer to do it and pass back the result. But the block layer can't actually pass the result directly, it has to check in with the microkernel to make sure it's okay.
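The routing being described can be sketched as a toy (all names are invented for illustration; real MINIX uses fixed-length kernel messages and endpoints, not Python dicts):

```python
# Toy sketch of microkernel-style message passing: the filesystem "server"
# may not touch the disk directly; every request is routed through a tiny
# "kernel" that checks a capability table before delivering it.
CAPS = {"user": {"fs"}, "fs": {"block"}}      # who may message whom
DISK = {0: b"superblock", 1: b"inode table"}  # fake disk sectors

def kernel_send(sender, receiver, payload):
    # The microkernel validates every message before delivery.
    if receiver not in CAPS.get(sender, set()):
        raise PermissionError(f"{sender} may not message {receiver}")
    return SERVERS[receiver](payload)

def block_server(payload):
    # Only the block server is allowed to read the disk.
    return DISK[payload["sector"]]

def fs_server(payload):
    # The FS cannot read DISK itself; it must go back through the kernel
    # to ask the block layer -- the overhead the parent comment describes.
    return kernel_send("fs", "block", {"sector": payload["sector"]})

SERVERS = {"fs": fs_server, "block": block_server}

print(kernel_send("user", "fs", {"sector": 1}))   # b'inode table'
try:
    kernel_send("user", "block", {"sector": 1})   # bypass attempt
except PermissionError as e:
    print(e)                                      # user may not message block
```

Every hop pays a kernel round-trip, which is exactly where the performance cost comes from, and exactly where the fault isolation comes from too.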

        But the
        • by KGIII ( 973947 )

          Thank you for the insight. I really need to fire up MINIX in a VM and see what's going on. I'm a maths grad and not a CS grad so it will be fun for me to learn about the various ins and outs. One of the things I assume is that, in realistic use, today's hardware will cope with the added overhead with nary a problem - I'd imagine the processing rate to be only trivially slower if I'm understanding everything properly (and I may not be).

          I kind of like the checks, or the idea - I'm not fluent enough to say tha

    • by dog77 ( 1005249 )
      I think we need to go further and fundamentally redesign the hardware architecture that the operating system runs on.

      Physically isolate the operating system from the rest of the system. Let it run from completely different memory and storage so that it is impossible to access from the rest of the system. Let it be a monolithic program that has its own drivers and its own network stack.

      If you need to make a change to the operating system, you must physically switch to the operating system control and from
      • I think we need to go further and fundamentally redesign the hardware architecture that the operating system runs on.
        Physically isolate the operating system from the rest of the system.

        Don't we have hardware in the CPU which is effectively supposed to do that? And don't we just keep poking holes in it so that we can get better performance?

      • The existing hardware virtualization and security extensions actually let you do this. See L4 as an example.
