Open Source Businesses Security

Could We Reduce Data Breaches With Better Open Source Funding? (marketwatch.com) 60

The CEO of Wireline -- a cloud application marketplace and serverless architecture platform -- is pushing for an open source development fund to help sustain projects, funded by an initial coin offering. "Developers like me know that there are a lot of weak spots in the modern internet," he writes on MarketWatch, suggesting more Equifax-sized data breaches may lie in our future. In fact, many companies are not fully aware of all of the software components they are using from the open-source community. And vulnerabilities can be left open for years, giving hackers opportunities to do their worst. Take, for instance, the Heartbleed bug of 2014... Among the known hacks: 4.5 million health-care records were compromised and 900 Canadians' social insurance numbers were stolen. It was deemed "catastrophic." And yet many servers today -- two years later! -- still carry the vulnerability, leaving whole caches of personal data exposed...

[T]hose of us who are on the back end, stitching away, often feel a sense of dread. For instance, did you know that much of the software that underpins the entire cloud ecosystem is written by developers who are essentially volunteers? And that the open-source software that underpins 70% of corporate America is vastly underfunded? The Heartbleed bug, for instance, was created by an error in some code submitted in 2011 to a core developer on the team that maintained OpenSSL at the time. The team was made up of only one full-time developer and three other part-timers. Many of us are less surprised that a bug had gotten through than that it doesn't happen more often.

The article argues that "the most successful open-source initiatives have corporate sponsors or an umbrella foundation (such as the Apache and Linux foundations). Yet we still have a lot of very deeply underfunded open-source projects creating a lot of the underpinnings of the enterprise cloud."
  • I doubt it (Score:3, Interesting)

    by Anonymous Coward on Saturday December 30, 2017 @04:07PM (#55835899)

    Here, I'll solve this problem for you in one sentence, instead of a cloaked Ponzi scheme: strict legal liability for data breaches, extending *personally* to C-level executives of the companies at fault. Management generally doesn't care about security, and the only way to make them care is hitting them in the wallet directly. When they can't hide behind the corporate veil anymore and suffer direct financial consequences for their short-term thinking, even the most dimwitted MBA will start to wake up and take notice.

    • In business it is more complex than that.
      To survive in the market you need to get your product out before the competition and/or you need more products. Failing to survive as a business is worse than the expense of a security glitch.

      It is a chicken-and-egg problem. We need a security commitment from the whole industry vs just one brave little company that would go out of business rather quickly. It isn't an issue of bad programmers or management not wanting to give a quality product, but the restr

      • We need a security commitment from the whole industry vs just one brave little company who would go out of business rather quickly.

        A commitment from the whole industry won't happen. Fortunately, such a commitment isn't the only possible solution; the GP has already provided an alternative. Where the industry won't act voluntarily, legislation can force them to.

        If breaches in security can be proven to be due to corner cutting, laziness or negligence (such as the Equifax fiasco) the Cxx managers of companies at fault should be made personally responsible. And not just monetarily, because they can push the expense on to the company and im

        • Such legislation will only happen in countries where the industry (as big players or as a group) doesn't have enough influence over the legislature to block it.

          I don't know which countries could manage that, but the set is small enough not to matter, especially when companies will just make sure they don't legally exist in those countries.

          It's cynical thinking yes, but pragmatic I'm afraid.

          That's ignoring the decades of legal challenges if it actually did happen, or any fallout on open sou

      • But the parent you're replying to is suggesting to increase the expense of a security glitch.

        We need a security commitment from the whole industry vs just one brave little company who would go out of business rather quickly.

        But that's his point, isn't it? By targeting the C-level executives and making them liable for security breaches, you're effectively solving the problem for everyone involved, from the small companies to the huge ones.

    • by Anonymous Coward

      If top tier companies like Sony can get pwned, not to mention government agencies, in reality, there isn't much companies can do. Security doesn't bring income, and you can throw your entire fiscal budget at it, only to get breached anyway because someone in receiving got a RAT from browsing the web on a machine there, and one privilege escalation vulnerability later, the attacker now has domain admin rights across the AD forest.

      It really is a losing battle, as you can't win any engagement by defending onl

      • You can address that by limiting liability if they follow best practices. For example:

        Was the thing that was compromised the latest version, exploited with a zero-day vulnerability? If so, lower penalty.

        Was the thing that was compromised able to access only data that the component actually needed to function? If so, lower penalty. Higher penalties for anything that was leaked beyond the minimum that the attacked component needed to access.

        Did you retain data beyond what the originators of that data

    • by Anonymous Coward

      When they can't hide behind the corporate veil anymore and suffer direct financial consequences for their short-term thinking, even the most dimwitted MBA will start to wake up and take notice.

      If they could be held liable they would simply get personal liability insurance and pass the cost through to the customers.

      • This might actually be the solution. Insurance has mitigated a lot of other risky behaviours. Insurance companies are (mostly) pretty good at quantifying risk and have a financial incentive to improve when they aren't. If they look at your company and say you're ten times more likely to suffer a data breach than your competitor, then they'll charge you at least ten times more for insurance. Eventually, it becomes a choice of spending the money on insurance or spending less money on improving security, a
    • I hate to bring it to you, but it's the corporations that force laws upon the legislature, not the other way around.
    • This is a classic scheme of moving the question in order to obtain the desired conclusion. In this case, the real question they are trying to lead people to assume the answer to is "is the open source model to blame for security breaches". By essentially stating as a fact that it is, and then making the question "should we throw money at it to fix the problem", they are trying to get people to assume the first question.

      No, the open source model is not the cause of security woes. Microsoft, with one of th

  • Longer answer: hahahahaha, no.
  • by king neckbeard ( 1801738 ) on Saturday December 30, 2017 @04:21PM (#55835957)

    There are two main factors to weigh here, IMO.

    The first is that a lot of vital yet unsexy projects have inadequate funding and testing. Funding can help mitigate problems stemming from that.

    The second factor is sysadmins being incompetent or not being given the tools, knowledge, and power to actually fix problems. Funding can't help that.

    • by arth1 ( 260657 )

      Yes, the main problem isn't a lack of software[*]. It's that those who make decisions have no understanding of security, and their bosses in turn are looking at short term ROI.

      [*]: Nor do I believe that funding would have helped if that were the cause. A great programmer doesn't become more productive if you toss more money at him. He'd be happy, and may deserve it, but likely you'd just finance more managers and get less done.

    • by Kjella ( 173770 )

      There are two main factors to weigh here, IMO. The first is that a lot of vital yet unsexy projects have inadequate funding and testing. Funding can help mitigate problems stemming from that. The second factor is sysadmins being incompetent or not being given the tools, knowledge, and power to actually fix problems. Funding can't help that.

      I'd add a lot of attitude to that: developers who just bang on it until it works, management who say if it works, don't break it. And they go hand in hand; if the new intranet is working, we're done. The PHB and the cheap Indian subcontractor both think so. Firewall? Access controls? SQL injection? URL guessing? View source? Never heard of it. And it'll keep running unpatched and out of support because it works, until shit hits the fan and a scapegoat must be found, and then the circle begins anew.

      The problem is

      • Exactly this, yes. For example, the software that we supply to our customers is available as both deb and rpm repositories. At one point, when we had a mandatory upgrade, a huge chunk of the customers asked how they should proceed in order to get this mandatory upgrade... So for all the years between their initial install and this event they had not once run "apt update && apt upgrade" or "yum upgrade". People are insane is what they are.
  • by gweihir ( 88907 ) on Saturday December 30, 2017 @04:40PM (#55836021)

    For the story: These people want to get rich on the current blockchain craze, nothing else. Ignore them.

    As to the problem, best practices and liability are the only way. Yes, I am advocating jailing the CEO and CISO and possibly the board of companies that have large amounts of customer data stolen because of negligence. As an alternative, I would also accept insurance that automatically pays out $1000 to every customer that has their data stolen (regardless of how much data it was and whether it was misused) and triple the actual damage to any customer that had their data stolen and can prove larger actual damage (losses + cost to fix) than $1000.

    In order to be not negligent (note that I use simple negligence, not gross negligence) they will have to:
    - Develop security-critical software only with architects, designers and coders who understand security (no more paying peanuts for coders...)
    - Have external reviews of all security critical code by qualified security experts
    - Have careful and adequate white-box penetration testing performed
    - Not only fix the issues found in code reviews and pen tests, but also investigate and fix the root causes, such as firing incompetent coders or outsourcers

    Do this and the problem vanishes. The human race knows how to produce software that is extremely hard to break into. There are just no incentives to spend the money for it, and, despite my list above looking a bit bombastic, it would not actually be that expensive.

    • by DogDude ( 805747 )
      In the United States, corporations exist primarily to separate liability from ownership. As a result, people making criminal or negligent decisions inside corporations almost never go to jail or face any negative repercussions at all. Until the corporate structure is fixed, corporations will continue to do whatever they choose, with no criminal consequences.
    • by AmiMoJo ( 196126 )

      Anything to do with an "initial coin offering" is a scam.

    • Develop security critical software only with architects, designers and coders that are understand security

      One of the big problems, and a large part of the reason that we're in this mess, is that a lot of security-critical software wasn't security critical when it was written. Here's a simple example: libjpeg. This library was written as a reference implementation of the JPEG standard, back in 1991. It was expected to be used to compress photographs from scans and render the compressed photographs on the screen. It's not security critical, because it's dealing only with data that it produced for the user.

  • The best way to prevent breaches is to start giving a damn. Stop collecting personal data on people, use encryption, run security audits, stay on top with patches, limit access.... all the standard stuff that gets ignored because it might cost a few bucks to hire someone to take care of it. Oh, and for sure making C-level managers personally liable for all damages caused by breaches will fix this issue right away. As soon as they potentially have to sell their helicopters and yachts to pay for damages they
    • Security audits are not always useful. For example, I read the result of the security audit on Dovecot that Mozilla commissioned. They found three low-priority issues and one of those was not using FORTIFY_SOURCE. Here's the problem: FORTIFY_SOURCE does not catch any issues that cannot be caught by static analysis. If it improves security in your program, then it is only because you are not incorporating static analysis into your workflow, which is a really good way of writing insecure code.
  • by Todd Knarr ( 15451 ) on Saturday December 30, 2017 @05:03PM (#55836101) Homepage

    More open-source funding won't help reduce breaches. It'd be good to have more funding for development of the basic software, but most of these breaches happen because, despite a patch for the vulnerability being available, these companies simply don't apply it. Until that stops being the case, more funding for the software will merely mean the breaches happen in different places than they would've otherwise.

    Oh, and don't hold the sysadmins responsible. They're at the mercy of the instructions they're given. The people who need to be held accountable are the executives who classify IT security as a cost center whose budget needs to be minimized, who treat breaches as a public-relations problem instead of a security issue, and who refuse to give the IT people enough budget, resources and authority to apply fixes promptly.

  • What do I win?

  • It's troubling that media can look at all the details of the Equifax story and somehow come to conclusion that OSS needs improving or is in anyway broken. OSS is certainly not perfect but the bug was identified, patched and publicized months before Equifax actually applied it. OSS did not fail here, incompetent security and* development teams did... at a company whose entire business is handling PII and Financial data. It's inexcusable and frankly criminally negligent.
    * It also bugs me that I generally
  • Is there a security equivalent of a UL certification?

    If not, should we require one before a product can be sold (for IoT stuff) in the US? Or a mandatory periodic security audit of corporate systems housing sensitive personal data?

  • by gbjbaanb ( 229885 ) on Saturday December 30, 2017 @08:05PM (#55836827)

    When I was involved in high-security software development, we built the web sites around multiple layers each of which was secured and access was limited, reducing the attack surfaces. If a hacker ever got past all our layers to hit the database, then frankly, I wouldn't argue with them as they would be the NSA or KGB.

    But then I started working with new Microsoft frameworks designed to make web building nice and easy (even though it's a right over-engineered mess), and I see everything stuck in the web-server tier with full and open direct access to the DB via an ORM. All designed to be written as quickly and easily as possible, with security a very distant concern.

    And yet, said framework could easily split its MVC architecture into a service tier and a web tier, could ship comments or a text file with security-hardening information, could partition the database into secured schemas, and it'd be just as easy to write as the monolithic one but far, far more securable.

    The current asp.net core framework is almost insecure by design: almost designed so that everything is exposed if a hacker gets past the first (and only) level of security. All it takes is one zero-day exploit and all your data belongs to someone else. (And yes, other web frameworks are just as bad.)

    So yes, open-source projects could help - not by compiling a database or package manager of updates and security fixes, but by providing templates and architectures for project defaults that are based around layers of protection.

    There will always be some weakness, flaw or bug in software; the only way to mitigate them is to work assuming they're already there.

  • It's not a funding issue.
    If money solved all computer problems, a few top US consumer OS brands would have been the most secure OSes ever.
    They are not, due to the low skill sets and the lack of education found in many of their workers.

    Consider how an open source project responds to a person who shows security issues.
    Do they have a person in place to accept the errors and communicate with the person who found the errors/bugs/backdoor/trapdoor?
    That they can communicate back that the errors are understood,
  • Until we get systems like Genode or Hurd to the point where they can be used by most of us, and especially on servers, this is going to keep happening. The idea of trusting an application or service to voluntarily restrict its own actions is idiotic (at best).

    Imagine getting a check from the bank of Windows... where, after checking your ID very carefully, they handed you all of the funds for the account, and trusted you (the person delegated a small amount of the account holder's money) to only take/remove th

    • Or you can use FreeBSD right now. Capsicum turns file descriptors into capabilities and as soon as you call cap_enter you lose all access to the global namespace and can only interact with external resources via existing capabilities (or ones that are given to you dynamically by another process).

      You can also more or less view iOS as a capability system if you squint hard enough. They write ACLs dynamically to try to emulate a capability system (one of the motivations for Capsicum was looking at what Ap

  • "Developers like me know that there are a lot of weak spots in the modern internet"

    There's nothing wrong with the Internet that needs fixing, the problem resides in certain computers at either end. Is this article an attempt to tarnish Open Source with some kind of crypto currency ponzi scheme?
  • Good "Open Source" funding leads to companies like Mozilla who, instead of trying to make the web better, mostly work on keeping the browser engine oligopoly alive.

    A far better solution would be to have actual FOSS with the additional rule of being as simple as humanly possible. Simple code is shorter and therefore likely contains fewer errors. Fewer errors mean fewer security-critical errors. Also, it's easier to maintain a 1k-line program than a 20-megaline program.

    Considering that most things companies do

  • Comment removed based on user account deletion
