Security Fix Leads To PostgreSQL Lock Down
hypnosec writes "The developers of PostgreSQL have announced that they are locking down access to the PostgreSQL repositories to committers only while a fix for a 'sufficiently bad' security issue is applied. The lockdown is temporary and will be lifted once the next release is available. The core committee has announced that they 'apologize in advance for any disruption,' adding that 'It seems necessary in this instance, however.'"
Re: (Score:2)
Let me offer you some advice on how to get rid of this "impostor"... shut up.
That's not a good approach (Score:1, Interesting)
Make sure that users of your open source project are not even able to find out what attack vector exists on their systems. They should languish in the hopes that your team will fix it before malicious hackers figure out what it was. From the code they already checked out.
Obscurity will protect everyone.
Re:That's not a good approach (Score:5, Insightful)
That's exactly the point. They've locked down and shrouded the changes as they're being made, because open development means the work happens in public: commits, tests, and discussion all land where anyone can see them. It will be a week before the fix is ready, but as soon as the first bits of test code went in, you could quickly target that body of code, figure out the problem, and exploit it. As it stands, you have to rummage through the whole body of vulnerable code and try to guess what's actually broken.
When the repos are opened back up, the fix will be ready. It might (probably) even be shared with the major distros, who will simultaneously have an updated package published. This greatly reduces the likelihood and window of a zero-day exploit with no fix.
Re: (Score:2)
No, you are just stupid.
Re: (Score:1)
If you have a copy of the code before the changes and another copy from after, it takes literally 3 seconds to target exactly what was changed. Your explanation accounts for none of that.
Re:That's not a good approach (Score:5, Insightful)
My explanation accounts exactly for that and that was the point. The changes between [VULNERABLE] and [FIXED] are not public yet because the [FIXED] state is not ready for production deployment (it may be wrong, and need more work). That means you can't pop open your source tree, do a `git diff`, and go, "oh, in this code path?" and 20 minutes later have your exploit.
Now, a week from now, this stuff will all be public and fixes will be released. Then you can target exactly what's changed, while everyone else is running updates. This is different from targeting exactly what's changed and then running around buttfucking everyone while they have to wait a week to get production-ready code OR chance it with alpha-grade software in production.
Re:That's not a good approach (Score:5, Insightful)
People looking to exploit vulnerabilities in widely installed software (databases, programming languages, frameworks, etc.) keep an eye on commit logs to do precisely this. Those patches and commits call attention to themselves; Postgres is right to ensure that a patch is available at the same time it indicates the attack vector. In fact, they'd probably be wise to make sure major binary repos have a patched copy even before making the changed source available, so that sysadmins have a week to do an update from yum/apt-get/$pkgmgr.
The only difference between this and Patch Tuesday is that you get to know what went into the fix after the fact. If you see 'critical security update' in your mailing lists, it becomes a race between you updating your system and attackers figuring out how to exploit the old version; them doing so is orders of magnitude more difficult if they don't actually know what's changed.
Is it the FOSS way? No. But I'd happily take a project going closed-source for two weeks if it means my database doesn't get hacked (but then again, I'm dealing with PCI-DSS Level 1, so I kinda have to). Now hopefully people have their databases completely inside the firewall so as to minimize the attack surface - assuming it has something to do with an authentication flaw, at least (and not, say, remote code execution due to a bug in parameterized queries). See - I don't know what they're changing, so I don't even know where to start probing.
Re: (Score:2)
In fact, they'd probably be wise to make sure major binary repos have a patched copy even before making the changed source available so that sysadmins have a week to do an update from yum/apt-get/$pkgmgr
That is impossible in the general case, and that fact is one reason this somewhat careful plan is being executed. Some open source distributions require releasing the source code along with the binaries. RedHat, for example, will always distribute source RPMs at the same time as the binary RPMs. The PostgreSQL license doesn't have such requirements, but the distributions' release policies can't necessarily change just because some packages have fewer requirements.
Fundamentally, PostgreSQL can't make any downstream pac
Re: (Score:1)
it takes literally 3 seconds to target exactly what was changed
No, it's figurative. If the patch changes multiple files, reworking big fragments of business logic, then it's less trivial to figure out the exploit. The interested parties might just use this window to update. If everyone knows the exploit before the changes are applied and tested, it's a total SNAFU.
Re: (Score:2)
When the repos are opened back up, the fix will be ready. It might (probably) even be shared with the major distros, who will simultaneously have an updated package published. This greatly reduces the likelihood and window of a zero-day exploit with no fix.
That is what's happening, and it's the reason for the temporary lockdown. The core team member whose e-mail was linked to here is also one of RedHat's packagers for PostgreSQL as one example distribution. He's helping make sure that updated RHEL RPMs are published at the same time as the details of the vulnerability. Right now the only people who are believed to know about the problem are the project committers and a few equally trusted packagers.
Re:That's not a good approach (Score:5, Insightful)
Open-source doesn't magically decrease the severity or number of bugs, but it does allow more people to eventually discover them. There's an obvious trade-off here: non-malicious people can find and then report and/or fix the bugs, or malicious people can find and then exploit them. The hope is that there are more contributors than attackers finding bugs and that it ends up being a net positive for stability and security. Neither open nor closed source is the right model 100% of the time for 100% of projects.
There's no hypocrisy here - the source of the patches will be released and all future commits will be made public again. This was a short-term decision weighing practicality and security against the "religion" of OSS. It's the difference between responsible disclosure and letting the software maintainers find out about the same exploit because you blogged about it, so attackers find out at the same time. They could have one or two people developing the patch in a local branch and simply not push anything upstream until it's done and tested and have the same effect, this is just an easier approach.
Re: (Score:2)
They could have one or two people developing the patch in a local branch and simply not push anything upstream until it's done and tested and have the same effect, this is just an easier approach.
That's exactly it - the typical open source methodology and infrastructure isn't really what defines the product as open source or not. Many of the commercial dual-licensed vendors still just throw code over the wall every few months, and they're definitely still open source. All the PostgreSQL folks are doing is
Re: (Score:2)
That really depends on what is your definition of open source. My favorite definition comes from OSI [http://opensource.org/about]: "Open source is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in."
That clearly means companies throwing source code over the wall every few months does not make the product open s
Re: (Score:2)
I agree, that's much better. A good reason to fork a project too. It's too bad OpenJDK hasn't done that yet - they still suffer from the over-the-wall model.
Re: (Score:2)
Nonsense. I think that OSS enthusiasts grossly overstate the benefits of OSS sometimes, but the "many eyes" DID find the problem, and now they are working on a fix.
Would you rather
A) they tell everyone "hey, the problem is that you can easily exploit PostGreSQL by doing X, but we will have a fix in a week or two", or
B) tell everyone "there is a security flaw, but we will not disclose details until the fix is out"
Guess which one ALL major vendors do when they have a choice, btw? Google does this, MS does t
Re: (Score:2)
You left out option C:
C) Don't tell anyone there's a problem, and pretend that there isn't one until you have a new version to sell.
MAYBE MS doesn't do that anymore. I stopped using their products, so I don't know. They certainly used to.
Re: (Score:1)
You left out option C: C) Don't tell anyone there's a problem, and pretend that there isn't one until you have a new version to sell.
MAYBE MS doesn't do that anymore. I stopped using their products, so I don't know. They certainly used to.
D) Everyone including MS knows there is a problem, but the fix doesn't come out in the next patch because it doesn't affect enough customers to be worth the effort involved in making the necessary changes. Because MS is a profit-driven organization and has no incentive to assist a minority when there is no return on investment.
Re: (Score:2)
Sorry, but EVERYONE uses that option. Check out old bugs in some open source projects. So you can't reasonably single out MS for that one.
Re: (Score:2)
OSS is not a magical fairy dust. There are OSS projects that suck at fixing bugs and there are projects that are pretty good at this. PostgreSQL is one of the great ones, IMHO.
What's great about OSS is that the bugs can be analyzed and fixed by anyone with sufficient knowledge, not just by a single company.
Re: (Score:3)
So, go to http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary [postgresql.org] and look at the source.
What they've taken private is their patches for the problem until they can make it production ready.
You are still fully able to access everything you've always had access to, they've just decided not to share their newest patches for a few days/weeks until people have at least a chance to protect their systems.
Regression tests have to be run, repos need a chance to update their binary packages, all sorts of things
Re:Say what? Streisand effect on security perhaps? (Score:5, Insightful)
You are assuming it is a new problem. The approach they selected tells me they have found a *MAJOR* issue in several versions of PostgreSQL; that means it's old code.
They even say to keep an eye on the next release, because you (users) need to apply it at once - this isn't something that only affects the latest build.
Re:Say what? Streisand effect on security perhaps? (Score:5, Informative)
And from Postgres we have:
http://www.postgresql.org/about/news/1454/ [postgresql.org]
This is a major security issue and it affects *ALL* versions of postgres. Locking it down while updates are being created seems the right way to do it to me...
Re:Say what? Streisand effect on security perhaps? (Score:5, Insightful)
They'll have to hunt through all the code. A viable, production-ready fix won't be available for a week, but the work-in-progress code exists now; leaving the repo public would result in a week of free exploitation, because those commits highlight the exact bit of code the problem is in. With the repos closed, only contributors and any downstream distribution providers working with them to build and test the fixed code are privy to that.
This temporary closure greatly reduces the risk of an attacker tearing down the code and finding the precise vulnerability they're trying to mitigate.
Re: (Score:1)
You're no longer a script kiddie when you can find an undisclosed vulnerability in a source code base as large as PostgreSQL. You've graduated to cyber criminal.
No. You become a cyber criminal when you abuse the vulnerability. When you can find one, you're a successful security auditor.
Re:Say what? Streisand effect on security perhaps? (Score:5, Informative)
From the article:
The reason for the lockdown is to ensure that malicious users don’t work out an exploit by monitoring the changes to the source code while it is being implemented to fix the flaw.
So a mirror of the code from 24 hours ago wouldn't have any work-in-progress commits. These commits would give clues as to where the vulnerability is.
It sounds like a really good use case for distributed version control. When this sort of thing happens, developers should be able to temporarily fork the repo and work on security issues in private, while everyone else is still able to access the main repo.
Re: (Score:2)
I don't think there's anything wrong with posting this to Slashdot. Everybody already knows that any complex software will have bugs in it. This doesn't give any clue as to what the bug is. And anybody serious about doing a malicious penetration will already have read the announcement.
Further, this gives people warning to not start any new installs of PostGreSQL right now, because you'll just need to re-install it in a week or so.
The "religious war" thing that's going on under this story is just loud-mou
Re: (Score:2)
It sounds like a really good use case for distributed version control. When this sort of thing happens, developers should be able to temporarily fork the repo and work on security issues in private, while everyone else is still able to access the main repo.
Sure, if you have infrastructure to run a hidden repo that only your devs can access. They likely don't have this, as is the case with most FOSS projects.
Re: (Score:2)
Seems like the best way to handle it. Fixing security flaws that touch a lot of code and doing all your development in the open aren't always compatible.
Most Linux distros embargo security bugs for similar reasons. They don't usually have to lock down as much, because they don't need extensive changes and integration work to deploy security patches. Well-contained software bugs also don't need as much of this, since they don't require as much coordination.
I've always admired Postgres. I just wish the SQL world was
Re: (Score:2)
Since they use git ... I would say that would be what happened.
Linked from their downloads page is this:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary [postgresql.org]
And it's still fully accessible.
Re: (Score:2)
That's interesting, because the git.postgresql.org page you linked shows recent work described as "Fix page title for JSON Functions and Operators." Couple that with the fact that the Slashdot summary links to a Parity News page that contains a link to the PostgreSQL announcement, and the Parity News link is loaded with JavaScript in the URL.
I wonder if Parity News is trying to demonstrate the Postgresql flaw?
Re: (Score:1)
How is this worse than announcing the vulnerability publicly? If they had done that, no one would even need to hunt for the vulnerability. They would just have to read the announcement.
Re:Say what? Streisand effect on security perhaps? (Score:5, Insightful)
Let me get this straight, so I know we're on the same page.
There is a major vulnerability in basically ALL Postgres installations in the world. That means it has not been introduced by any recent commits. The patch(es) are not yet public, and the repositories have been made non-public while the fix is in the works.
The fix is likely delayed somewhat by the occurrence of Easter holidays. Lots of people have taken extended weekends - probably a good number of Postgres devs included. There is probably no sane way to deploy the fixed versions until after the holidays. Not everyone can afford 24/7 admins.
And you want to complain about the developers being irresponsible when dealing with this?
(For the record: I'm pretty much a full-disclosure guy, but a slightly delayed disclosure with NO IN-THE-WILD EXPLOITS for a vulnerability that is discovered just ahead of a major holiday weekend... I can live with that.)
Re: (Score:2)
The git source is still available (http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary [postgresql.org]); it is only the patches for the bug-in-question that are closed off. This seems entirely reasonable given the severity of this vulnerability.
Re: (Score:1)
This seems like a really dumb move. What the team has done now is raise the exposure level of this vulnerability by a HUGE margin. Now all any script kiddie needs to do is find a mirror of the code from 24 hours ago, or any other recent period - likely quite trivial to do with an open source project as large as PostgreSQL - and hunt for the vulnerability. They know it will be pretty bad, since the team took this action!
Raising exposure helps companies prepare for a database change. I know that if we were using Postgres, we would be scheduling time for the upgrades and preparing for potential downtime if needed. We would be going through all our database code to identify exactly what needs to be checked after the software is upgraded, and documenting as much as possible. Because if this is major, the changes could affect the way our code integrates with the database. For example, if it's an authentication breach, then much of our ow
Re: (Score:2)
The only train wreck is your mind.
Re: (Score:2)
You are making a perhaps invalid presumption as to his reasoning. It could well be that he's just used to MySQL and doesn't want to think of changing. He could have a lot of code that's dependent on incompatible features, and doesn't want to believe that this was a bad choice. Money isn't the reason for everything.
That said, I'm not really convinced that PostgreSQL is superior in all use cases to the MySQL family of databases. I do tend to think that it's generally superior, but I'm not an expert in either
Wrong move (Score:2)
My thought is that their reaction is exactly the wrong move. All it does is announce to the bad guys that there's a vulnerability they can exploit (which they probably know about already), and that none of their targets will know what it is or how to spot an attempt to exploit it, while at the same time ensuring that the admins responsible for PgSQL servers can't find out what they need to protect against. If the vulnerability is so critical and severe that it can't be discussed, then as an admin it's crit
Re:Wrong move (Score:5, Insightful)
They sent out a warning to everyone on the mailing list. I know, I got it.
You should not have your PgSQL servers exposed to the world, nor any DB server. You should apply the fix when it comes out. The reality as an admin is that I know the odds are damn near everything we use has as-yet-undiscovered vulnerabilities.
Migrating anything major to another DB is pretty much a nonstarter. Nor will another DB give you even this much visibility. Oracle would never admit something like this with MySQL.
Re: (Score:2)
I think you might be right. There's not much that should rise to this level of alarm, but this would.
I told a client earlier today, "let's assume it's a post-sanitation vulnerability and make a plan to handle that. We can scale back the plans if it turns out to be less severe."
Re: (Score:2)
What a strange universe you live in. Sounds nice.
Databases firewalled? No bad guys on your network? No direct DB connectivity?
Re: (Score:3)
Migrate to what? Postgres admitted that there is a problem. It is not known to be exploited in the wild. Do you really think Oracle, DB2, SQL Server, and MySQL have no critical security bugs in them? Or even bugs already known to the vendor in the case of the closed source ones?
Your system is no worse today than it was yesterday. You know PostgreSQL has at least 1 bug. So unless you think another system has no bugs, do not switch.
Re: (Score:2)
Sure, I'll need to update my pgsql instances, but because they're firewalled off from the outside world, I don't have to lose sleep over it until the fix comes out.
Re: (Score:3)
Are you positive that all the application servers you permit through the firewall are uncompromised? And that they'll remain uncompromised? Are there errors in the firewall that are allowing traffic through that you don't expect? Are your servers in a data center where a mistake in the internal network could allow traffic from other (compromised) customers to reach your machine, bypassing the firewall?
And does this vulnerability even require direct access to the database server, or is it one that can be triggere
Re: (Score:2)
All it does is announce to the bad guys that there's a vulnerability they can exploit (which they probably know about already)
You contradicted yourself in the same breath there. If the bad guys already knew about this, there would be no harm in announcing it. Announcing that there's some major vulnerability in the entire code base? That does no harm because there's some major vulnerability in the entire code base of every product out there. It's knowing where the flaw is that matters! And the team is taking the smart step to hide that for a week until the fix is ready.
Once the fix is out, a diff will show everyone what the pr
Re: (Score:2)
There is just no good way to deliver news of a security hole.
Re: (Score:2)
There is no evidence of an exploit being available in the wild [hagander.net] for this issue. The PostgreSQL team has not panicked. This is a careful, proactive security release for a bug that might be exploited once its source code is released. The bad guys have been given no more information than "there is an exploit possible in this code". If you believe that much information is enough for them to break into your server, and therefore you have to migrate to another system immediately, this is not a technical problem-
How would an attack happen? (Score:4, Informative)
I see lots of comments about needing to know the vulnerability right now, and even panic about taking servers down until it's fixed. I can't help feeling that if that's your reaction you're doing it wrong.
In any internet facing production environment, the front end web servers will be the only place that can be attacked. They should be in a DMZ and only be accessing application servers via a firewall, which in turn access the database. Access to the database would only be allowed from the application servers, and the application servers shouldn't be able to run any random SQL. All inputs should be verified before passing to the database. It's kind of hard to see how, in a well designed system, the database is at risk. Nothing uncontrolled should be reaching it.
Of course it's important to have security at every layer, but if an attack can get as far as exploiting code vulnerability in the database I'd say there's a bigger problem somewhere further up the chain.
Internal attacks are another matter, but again, access controls should be ensuring that only those who really need access to the database have access to the database. Those people will be able to do enough damage without needing exploits, so again, code vulnerability at that level should be something of a non-issue.
Re: (Score:2)
A lot of the time the web servers need access to the database because the code on the web server will be doing database access. If the web servers are compromised, the firewalls will permit attacks from them against the database servers. And the same chain applies when there's application servers in the way, it just takes one more step. With automated toolkits that one more step will be taken by automated exploit software, so the attackers probably won't even notice the delay. There also, as you noted, the
Re: (Score:2)
You should of course assume there are more of these bugs in all software, all the time.
This means web servers should not be able to submit arbitrary queries to the DB, if you can avoid it. Now getting developers to play along with this is like herding cats.
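As a rough editorial sketch of what that restriction can look like in PostgreSQL terms (not from the thread): strip the application's role of direct table access and grant it EXECUTE on a handful of vetted functions only. psycopg2 is assumed as the client library, and the `webapp` role, schema, and function names are made up.

```python
# Illustrative only: lock a hypothetical "webapp" role down to a few vetted
# functions so a compromised web server cannot submit arbitrary queries.
import psycopg2

ddl = """
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM webapp;
REVOKE CREATE ON SCHEMA public FROM webapp;
GRANT USAGE ON SCHEMA public TO webapp;
-- The application may only call vetted functions, never touch tables directly.
GRANT EXECUTE ON FUNCTION get_orders(integer) TO webapp;
GRANT EXECUTE ON FUNCTION place_order(integer, integer) TO webapp;
"""

with psycopg2.connect("dbname=shop user=postgres") as conn:  # owner connection
    with conn.cursor() as cur:
        cur.execute(ddl)
```

Getting every query path in the application routed through those vetted functions is the cat-herding part.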
Re: (Score:2)
I agree it needs fixing, and even said that it's important to have security at every layer, my point was really that a number of other security measures will already have failed before the database is vulnerable. And yes, in many cases the web server will be the application server, but I'd hope that's a design that's limited to less than critical systems...
In a truly paranoid environment the only internal access to the database will be via bastion hosts, not direct from individual desktops...
Re: (Score:3)
any internet facing production environment, the front end web servers will be the only place that can be attacked.
Bobby Tables would disagree - SQL injection attacks are the biggest server-side security problem these days.
One kind of major vulnerability in a DB would be some sort of buffer overflow in parsing the data stored, such that you can take over the DB server by storing carefully crafted data - the worst kind of SQL injection attack.
Re: (Score:2)
Probably true, but it's sad that in 2013 we're still talking about Bobby Tables! It's still an application code issue rather than strictly a database issue.
Re: (Score:2)
But if the DB itself has a flaw related to the content of the stored data, then the prevalence of SQL injection means you should assume you're exposed.
For the DBs I've worked with, using stored procedures basically eliminates the threat of SQL injection (the distinction between SQL code and payload is explicit that way) - I assume Postgres is the same way, and there's really no excuse for being vulnerable to that.
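For illustration, a minimal sketch of that code/payload separation against PostgreSQL, assuming psycopg2 as the driver; the `users` table, column, and function name are hypothetical.

```python
# Sketch: the SQL lives in a server-side function; the username only ever
# travels as a bound parameter, so it is data, never executable SQL.
import psycopg2

create_fn = """
CREATE OR REPLACE FUNCTION last_login(p_username text)
RETURNS timestamptz
LANGUAGE sql STABLE AS $$
    SELECT last_login_time FROM users WHERE username = p_username;
$$;
"""

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        cur.execute(create_fn)
        # A hostile value like "x'; DROP TABLE users;--" is just an odd username here.
        cur.execute("SELECT last_login(%s)", ("alice",))
        print(cur.fetchone()[0])
```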
Table-valued parameters (Score:2)
For the DBs I've worked with, using stored procedures basically eliminates the threat of SQL injection
Do these databases allow passing a list of values to a parameterized statement or stored procedure? For example, some features in some of the web applications I've developed require defining a procedure that takes an array and passes it to something like SELECT last_login_time FROM users WHERE username IN ?. The trouble is that a lot of database interfaces don't allow table-valued parameters, and I can't guess how many question mark placeholders I'll need in advance, so I have to make one well-tested functi
Re: (Score:2)
Both Oracle and PostgreSQL will let you pass in an array as a function argument.
Incidentally, PostgreSQL normally changes IN into =ANY(ARRAY[]) for performance, so you're not losing anything that way.
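A quick sketch of the array route, assuming psycopg2 (which adapts a Python list to a PostgreSQL array); the table and column names follow the hypothetical example upthread.

```python
# One placeholder covers any number of usernames; no need to count
# question marks in advance.
import psycopg2

usernames = ["alice", "bob", "carol"]

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT username, last_login_time FROM users"
            " WHERE username = ANY(%s)",
            (usernames,),
        )
        for name, last_login in cur.fetchall():
            print(name, last_login)
```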
Variable number of placeholders (Score:2)
No, you just dynamically build a statement that has the correct number of placeholders (using no user-supplied data except to determine that number, and none in the statement itself) and then execute it.
Making sure that the placeholders remain in the same order as the values that will be substituted into the placeholders is almost as troublesome as substituting literal values. For example, a statement involving WHERE foo = ? AND bar IN ? will misbehave, possibly almost as catastrophically as in an injection, if another part of the code is modified to add the value for foo to the list after the values for bar have been added and the part that creates the query containing placeholders is not updated in perf
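For what it's worth, a hedged sketch of the dynamic-placeholder approach being debated here (psycopg2 assumed, hypothetical foo/bar columns, non-empty value list): if the placeholder list and the parameter sequence are generated from the same values in the same place, they cannot drift out of order.

```python
# Sketch only: the SQL is shaped by the *count* of values, never by the
# values themselves, and placeholders/parameters are built together.
import psycopg2

def fetch_rows(cur, foo_value, bar_values):
    placeholders = ", ".join(["%s"] * len(bar_values))  # assumes bar_values is non-empty
    sql = f"SELECT * FROM users WHERE foo = %s AND bar IN ({placeholders})"
    cur.execute(sql, (foo_value, *bar_values))
    return cur.fetchall()

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        rows = fetch_rows(cur, "x", ["a", "b", "c"])
```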
Re: (Score:2)
It really isn't.
Disagreeing without telling me why you disagree tells me nothing. Please elaborate.
If WHERE foo = ? AND bar IN (?,?,?,?,?,?,?,?,?,?) ends up changed to WHERE bar IN (?,?,?,?,?,?,?,?,?,?) AND foo = ?, how do I prevent this change from causing disastrous results if the order in which the placeholders appear in the statement does not match the order in which values are added to the array, if one of the valid values for column foo is also a valid value for column bar? In the case of a single well-tested func
Re: (Score:2)
Stored procedures are just a means to an end. What solves the problem is avoiding mixing queries with their parameters. When code invokes a stored procedure, it is forced into the parameterized query pipeline, and that solves that (unless, of course, you concatenate within the SP :)
There are a lot of ways to invoke the parameterized query pipeline... so even without stored procedures, you really shouldn't be doing that crap anymore. And yes, all relevant and even not-so-relevant RDBMSs have client APIs that sup
Re: (Score:2)
I know it's not always easy, but most data input into web forms is quite straightforward. The application should not be checking whether the data is invalid - it should be checking that it's valid. That's a subtle distinction, and I'm probably going to fail to explain it! The critical thing is to allow only that data that is valid for the question being asked. Most of the time restricting the input to a certain length and only allowing specific characters should be enough, and wherever possible limit input
Re: (Score:3)
I know it's not always easy, but most data input into web forms is quite straightforward. The application should not be checking whether the data is invalid - it should be checking that it's valid. That's a subtle distinction, and I'm probably going to fail to explain it!
You'd probably have an easier time explaining it as whitelisting versus blacklisting. A developer can't hope to ever enumerate all the bad things an app should reject, so s/he should instead enumerate the much smaller set of things it should accept. Same deal if you're using a regex or whatnot to sanitize input instead of matching against a list.
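A tiny sketch of that whitelist idea; the allowed alphabet and length limit here are arbitrary examples, not a recommendation.

```python
import re

# Accept only a known-good pattern; reject everything else instead of
# trying to strip out "dangerous" characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{1,32}")

def is_valid_username(value: str) -> bool:
    return USERNAME_RE.fullmatch(value) is not None

print(is_valid_username("alice_01"))                 # True
print(is_valid_username("x'; DROP TABLE users;--"))  # False
```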
Re: (Score:2)
Whitelisting - thank you, describes what I meant perfectly.
Re: (Score:2)
This is all wrong. I mean, you might want to validate anyway. But the best way to prevent injection is to only supply user inputs to methods that won't execute code contained in them.
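A minimal sketch of that principle, assuming psycopg2 and a hypothetical users table.

```python
import psycopg2

user_input = "alice'; DROP TABLE users;--"

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        # Unsafe: splicing the value into the SQL text lets it rewrite the query.
        #   cur.execute(f"SELECT * FROM users WHERE username = '{user_input}'")
        # Safe: the value is bound separately and never interpreted as SQL.
        cur.execute("SELECT * FROM users WHERE username = %s", (user_input,))
        rows = cur.fetchall()
```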
Re: (Score:2)
I see lots of comments about needing to know the vulnerability right now, and even panic about taking servers down until it's fixed. I can't help feeling that if that's your reaction you're doing it wrong.
That a reaction exists right now is [decision-m...idence.com] wrong [decision-m...idence.com] to begin with [decision-m...idence.com]. They need a book [amazon.com] and some training [kepner-tregoe.com].
Blog post from one of the core team members (Score:3)
Do please check out this informative post from Magnus Hagander, one of the PostgreSQL core team members, which clarifies most of the points raised here: