Databases Open Source Security News

Security Fix Leads To PostgreSQL Lock Down

hypnosec writes "The PostgreSQL developers have announced that they are locking down access to the PostgreSQL repositories to committers only while a fix for a 'sufficiently bad' security issue is applied. The lockdown is temporary and will be lifted once the next release is available. The core committee has announced that it 'apologize[s] in advance for any disruption,' adding that 'It seems necessary in this instance, however.'"
  • Make sure that users of your open source project are not even able to find out what attack vector exists on their systems. They should languish in the hope that your team fixes it before malicious hackers figure it out from the code they've already checked out.

    Obscurity will protect everyone.

    • by bluefoxlucid ( 723572 ) on Friday March 29, 2013 @11:04AM (#43311965) Homepage Journal

      That's exactly the point. They've locked down and shrouded the changes as they're being made, because development happens through wide-spread collaboration: changes, tests, and so on land incrementally. It will be a week before the fix is ready, but as soon as the first bits of test code went in, you could quickly target that body of code, figure out the problem, and exploit it. As it stands, you have to rummage through the whole body of vulnerable code and guess what's actually broken.

      When the repos are opened back up, the fix will be ready. It might (probably) even be shared in advance with the major distros, who will publish updated packages simultaneously. This greatly reduces the likelihood, and the window, of a zero-day exploit with no fix available.

      • by Anonymous Coward

        If you have a copy of the code before the changes and another copy from after, it takes literally 3 seconds to target exactly what was changed. Your explanation accounts for none of that.

        • by bluefoxlucid ( 723572 ) on Friday March 29, 2013 @11:16AM (#43312077) Homepage Journal

          My explanation accounts for exactly that; that was the point. The changes between [VULNERABLE] and [FIXED] are not public yet because the [FIXED] state is not ready for production deployment (it may be wrong and need more work). That means you can't pop open your source tree, run a `git diff`, go "oh, in this code path?", and have your exploit 20 minutes later.

          Now, a week from now, this stuff will all be public and fixes will be released. Then you can target exactly what's changed, while everyone else is running updates. This is different from targeting exactly what's changed and then running around buttfucking everyone while they have to wait a week to get production-ready code OR chance it with alpha-grade software in production.

        • it takes literally 3 seconds to target exactly what was changed

          No, it's figurative. If the patch changes multiple files, reworking big fragments of business logic, then it's far less trivial to figure out the exploit. Interested parties can just use this window to prepare to update. If everyone knew the exploit before the changes were applied and tested, it would be a total SNAFU.

      • When the repos are opened back up, the fix will be ready. It might (probably) even be shared in advance with the major distros, who will publish updated packages simultaneously. This greatly reduces the likelihood, and the window, of a zero-day exploit with no fix available.

        That is what's happening, and it's the reason for the temporary lockdown. The core team member whose e-mail was linked here is also one of Red Hat's PostgreSQL packagers, to give one example distribution. He's helping make sure that updated RHEL RPMs are published at the same time as the details of the vulnerability. Right now the only people believed to know about the problem are the project committers and a few equally trusted packagers.

    • So, go to http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary [postgresql.org] and look at the source.

      What they've taken private is their patches for the problem, until they can make them production-ready.

      You are still fully able to access everything you've always had access to; they've just decided not to share their newest patches for a few days or weeks, until people have at least a chance to protect their systems.

      Regression tests have to be run, repos need a chance to update their binary packages; all sorts of things need to happen first.

  • My thought is that their reaction is exactly the wrong move. All it does is announce to the bad guys that there's a vulnerability they can exploit (which they probably know about already) and that none of their targets will know what it is or how to spot an exploitation attempt, while at the same time ensuring that the admins responsible for PgSQL servers can't find out what they need to protect against. If the vulnerability is so critical and severe that it can't be discussed, then as an admin it's critical that I know about it.

    • Re:Wrong move (Score:5, Insightful)

      by h4rr4r ( 612664 ) on Friday March 29, 2013 @11:37AM (#43312267)

      They sent out a warning to everyone on the mailing list. I know, I got it.

      You should not have your PgSQL servers exposed to the world, nor any DB server. You should apply the fix when it comes out. The reality, as an admin, is that I know the odds are damn near everything we use has as-yet-undiscovered vulnerabilities.

      Migrating anything major to another DB is pretty much a nonstarter. Nor will another DB give you even this much visibility. Oracle would never admit something like this with MySQL.

    • Migrate to what? Postgres admitted that there is a problem. It is not known to be exploited in the wild. Do you really think Oracle, DB2, SQL Server, and MySQL have no critical security bugs in them? Or even bugs already known to the vendor in the case of the closed source ones?

      Your system is no worse today than it was yesterday. You know PostgreSQL has at least one bug. So unless you think another system has no bugs, do not switch.

    • As others have said, no database ports should ever be exposed to the world at large. You should have a firewall in place that only allows traffic to/from an extremely limited IP address range, which mitigates a whole lot of issues even if the database software is vulnerable.

      Sure, I'll need to update my pgsql instances, but because they're firewalled off from the outside world, I don't have to lose sleep over it until the fix comes out.
      • Are you positive that all the application servers you permit through the firewall are uncompromised? And that they'll remain uncompromised? Are there errors in the firewall that are allowing traffic through you don't expect? Are your servers in a data center where a mistake in the internal network could allow traffic to get to your machine from other (compromised) customers bypassing the firewall?

        And does this vulnerability even require direct access to the database server, or is it one that can be triggered indirectly, through traffic you already allow?

        • Jesus christ dude. Kepner-Tregoe Potential Problem Analysis. ORM charts. Decision Analysis (Pugh or Kepner-Tregoe; fuck Analytical Hierarchy, it sucks and requires tons of math for inaccurate results) followed with Adverse Consequence Analysis on ORM charts. Stop shitting yourself.
      • by CBravo ( 35450 )
        Maybe you get privileges with specific data.
    • by lgw ( 121541 )

      All it does is announce to the bad guys that there's a vulnerability they can exploit (which they probably know about already)

      You contradicted yourself in the same breath there. If the bad guys already knew about this, there would be no harm in announcing it. Announcing that there's some major vulnerability in the entire code base? That does no harm because there's some major vulnerability in the entire code base of every product out there. It's knowing where the flaw is that matters! And the team is taking the smart step to hide that for a week until the fix is ready.

      Once the fix is out, a diff will show everyone what the problem was.

    • If they hadn't locked it down, everyone would be complaining, "Why is it taking so long to patch? *It's being exploited in the wild!*"

      There is just no good way to deliver news of a security hole.
    • There is no evidence of an exploit being available in the wild [hagander.net] for this issue. The PostgreSQL team has not panicked. This is a careful, proactive security release for a bug that might be exploited once its source code is released. The bad guys have been given no more information than "there is an exploit possible in this code". If you believe that much information is enough for them to break into your server, and that you therefore have to migrate to another system immediately, that is not a technical problem.

  • by Geeky ( 90998 ) on Friday March 29, 2013 @11:44AM (#43312325)

    I see lots of comments about needing to know the vulnerability right now, and even panic about taking servers down until it's fixed. I can't help feeling that if that's your reaction you're doing it wrong.

    In any internet-facing production environment, the front-end web servers will be the only place that can be attacked. They should be in a DMZ and only be accessing application servers via a firewall, which in turn access the database. Access to the database would only be allowed from the application servers, and the application servers shouldn't be able to run any random SQL. All inputs should be verified before being passed to the database. It's kind of hard to see how, in a well-designed system, the database is at risk: nothing uncontrolled should be reaching it.

    Of course it's important to have security at every layer, but if an attack can get as far as exploiting code vulnerability in the database I'd say there's a bigger problem somewhere further up the chain.

    Internal attacks are another matter, but again, access controls should be ensuring that only those who really need access to the database have access to the database. Those people will be able to do enough damage without needing exploits, so again, code vulnerability at that level should be something of a non-issue.

    • A lot of the time the web servers need access to the database because the code doing database access runs on the web servers themselves. If the web servers are compromised, the firewalls will permit attacks from them against the database servers. The same chain applies when there are application servers in the way; it just takes one more step, and with automated toolkits that one more step will be taken by automated exploit software, so the attackers probably won't even notice the delay. There's also, as you noted, the matter of internal attacks.

      • by h4rr4r ( 612664 )

        You should of course assume there are more of these bugs in all software, all the time.

        This means web servers should not be able to submit arbitrary queries to the DB, if you can avoid it. Now getting developers to play along with this is like herding cats.
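
        One way to enforce that, as a rough sketch (the role and function names here are hypothetical, not from the thread): give the web tier a database role with no direct table access, only EXECUTE rights on vetted functions.

            -- Hypothetical lockdown of a web-tier role (PostgreSQL 9.0+).
            CREATE ROLE webapp LOGIN;
            REVOKE ALL ON ALL TABLES IN SCHEMA public FROM webapp;
            -- Functions are executable by PUBLIC by default, so revoke
            -- that first, then grant back only what the app needs.
            REVOKE EXECUTE ON FUNCTION get_last_login(text) FROM PUBLIC;
            GRANT EXECUTE ON FUNCTION get_last_login(text) TO webapp;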

      • by Geeky ( 90998 )

        I agree it needs fixing, and I even said that it's important to have security at every layer; my point was really that a number of other security measures will already have failed before the database is vulnerable. And yes, in many cases the web server will be the application server, but I'd hope that's a design that's limited to less-than-critical systems...

        In a truly paranoid environment the only internal access to the database will be via bastion hosts, not direct from individual desktops...

    • by lgw ( 121541 )

      any internet-facing production environment, the front-end web servers will be the only place that can be attacked.

      Bobby Tables would disagree - SQL injection attacks are the biggest server-side security problem these days.

      One kind of major vulnerability in a DB would be some sort of buffer overflow in parsing the data stored, such that you can take over the DB server by storing carefully crafted data - the worst kind of SQL injection attack.
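
      For concreteness, here's a minimal sketch of that distinction (the table and statement names are made up for illustration). With a parameterized statement, the payload stays data no matter what it contains:

          -- Vulnerable pattern (in application code): splicing user input
          -- into the SQL string, e.g.  "... WHERE username = '" + input + "'".
          -- Bobby Tables' payload  robert'; DROP TABLE students;--  then runs as code.

          -- Parameterized pattern: code and data never mix. Server-side
          -- PREPARE is shown here; client drivers expose the same mechanism.
          PREPARE get_login(text) AS
              SELECT last_login_time FROM users WHERE username = $1;

          -- The same payload is now just an implausible username:
          -- zero rows returned, nothing dropped.
          EXECUTE get_login('robert''; DROP TABLE students;--');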

      • by Geeky ( 90998 )

        Probably true, but it's sad that in 2013 we're still talking about Bobby Tables! It's still an application code issue rather than strictly a database issue.

        • by lgw ( 121541 )

          But if the DB itself has a flaw related to the content of the stored data, then the prevalence of SQL injection means you should assume you're exposed.

          For the DBs I've worked with, using stored procedures basically eliminates the threat of SQL injection (the distinction between SQL code and payload is explicit that way) - I assume Postgres is the same way, and there's really no excuse for being vulnerable to that.

          • For the DBs I've worked with, using stored procedures basically eliminates the threat of SQL injection

            Do these databases allow passing a list of values to a parameterized statement or stored procedure? For example, some features in some of the web applications I've developed require defining a procedure that takes an array and passes it to something like SELECT last_login_time FROM users WHERE username IN ?. The trouble is that a lot of database interfaces don't allow table-valued parameters, and I can't guess how many question-mark placeholders I'll need in advance, so I have to make one well-tested function that builds the placeholder list at runtime.

            • by rtaylor ( 70602 )

              Both Oracle and PostgreSQL will let you pass in an array as a function argument.

              Incidentally, PostgreSQL normally changes IN into =ANY(ARRAY[]) for performance, so you're not losing anything that way.
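
              As a sketch of what that looks like in PostgreSQL (the function and table are hypothetical, following the grandparent's example):

                  -- One function handles any number of values; no counting
                  -- '?' placeholders in the client.
                  CREATE FUNCTION last_logins(text[])
                  RETURNS SETOF timestamptz AS $$
                      SELECT last_login_time FROM users WHERE username = ANY($1)
                  $$ LANGUAGE sql STABLE;

                  SELECT * FROM last_logins(ARRAY['alice', 'bob', 'mallory']);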

          • by Shados ( 741919 )

            stored procedures are just a means to an end. What solves the problem is not mixing queries with their parameters. When code invokes a stored procedure, it is forced into the parameterized-query pipeline, and that solves that (unless, of course, you concatenate within the SP :)

            There are a lot of ways to invoke the parameterized-query pipeline... so even without stored procedures, you really shouldn't be doing that crap anymore. And yes, all relevant (and even not-so-relevant) RDBMSes have client APIs that support it.

    • I see lots of comments about needing to know the vulnerability right now, and even panic about taking servers down until it's fixed. I can't help feeling that if that's your reaction you're doing it wrong.

      That a reaction exists right now is [decision-m...idence.com] wrong [decision-m...idence.com] to begin with [decision-m...idence.com]. They need a book [amazon.com] and some training [kepner-tregoe.com].

  • Do please check out this informative post from Magnus Hagander, one of the PostgreSQL core team members, which clarifies most of the points raised here:

    About security updates and repository "lockdown"

    I have received a lot of questions since the announcement [postgresql.org] that we are temporarily shutting down the anonymous git mirror and commit messages. And we're also seeing quite a lot of media coverage.

    Let me start by clarifying exactly what we're doing:

    • We are shutting down the mirror from our upstream git to our anonymous mirror
    • This also, indirectly, shuts down the mirror to github
    • We're temporarily placing a hold on all commit messages

    There has been some speculation that we are going to shut down all list traffic for a few days - that is completely wrong. All other channels in the project will operate just as usual. This of course also includes all developers working on separate git repositories (such as a personal fork on github).

    We are also not shutting down the repositories themselves. They will remain open, with the same content as today (including patches applied between now and Monday), they will just be frozen in time for a few days.

    ...continues... [hagander.net]
