Postgres Beats MySql, Interbase, And Proprietary DBs

Mike writes: "An independent organization tested Postgres 7 vs. MySql, Interbase, and two leading commercial databases using the ANSI SQL Standard Scalable And Portable benchmark and found that Postgres was the clear winner. In fact, Postgres was the only open source database to offer similar performance to the two commercial applications. The results are detailed here."
  • You should try backing up your claim. Obviously, MySQL is a toy database. It is used for fast tabular data access, not true database work. What makes PostgreSQL a toy? It supports things such as table inheritance, very nice transaction support, and other things.

    I'm curious, what makes open source automatically have lesser products? I'm not saying that currently PostgreSQL has everything you may need, but does that have to do with open source or just the specific project? Also, I'm not impressed with your amount of data. If you are using a pre-written application, most of them hold and keep _way_ more data than they ever need to. Oracle Applications, for instance, for our 300-person company, took up 1 Gig just for the table definitions! So your 4Tb system could be more from mismanagement than having that much useful data.

    Actually, for database applications, the critical component is the hardware (assuming you have a _real_ RDBMS - not MySQL). Sun hardware beats just about anything. If you want to see PostgreSQL being used in the real world, take a look at http://www.pgsql.com/

    Have you taken a look at the features in PostgreSQL? They are very, very nice.
  • They compared the bleeding-edge Postgres (7.0) with the old-as-heck MySQL (3.22) - they're now up to revision *22* of the development series for MySQL - that's a pretty huge amount of changes. I would have been much more impressed with this if they had run the comparison between 3.23.22 and 7.0.
    Postgres 7.0 is the current stable release. According to http://www.mysql.com/downloads, MySQL v3.22 is the current stable release and v3.23 is the current beta release of MySQL. Given that, I can't blame them for not testing a stable product versus a beta product.

    Even if there have been "a pretty huge amount of changes" as you state, it's still marked as a beta product. As for myself, I wouldn't use a beta product in a production environment until it's been marked as stable by the developers, no matter how stable other people might say it is. That's why I use MySQL v3.22 right now. If something is marked beta, there is more than likely a reason for it. It would be irresponsible of me to risk using it and losing customers' data. I'm sure other businesses and individuals can't afford to take those risks with their data either.

  • 9 out of 10 people say Statistics can be made to mean anything you want.
    2 out of 13 people don't understand statistics.
    45 out of 51 statistics are made up on the spot.
    3 out of 4 people don't know how to spell Statistics.
  • The lame ODBC driver is the only one available for Interbase. The "real" ODBC driver has been retracted and may never see the light of day. What do you want them to use? As of today there is no ODBC driver for IB 6.0, not even a commercial one.

    A Dick and a Bush .. You know somebody's gonna get screwed.

  • >They don't whine about their competition not
    >being "SQL compatible" (a debatable term in any
    >event). They don't lie about how the competition
    >"isn't relational." They don't sneer about how
    >the competition is "just a front end to the file
    >system."

    Have you read the MySQL docs?

    Their section on why not to use foreign keys at best makes me laugh, at worst makes me cringe. This sentiment is even shared by most of the MySQL fans I know.

    They gloss over their lack of sub-selects with an example that doesn't require the feature.

    They gloss over the lack of transactions and commit/rollback syntax by suggesting table locking, which simply is not practical in high volume environments. Moreover, their examples neatly omit the idea that you may need to lock a *LOT* of tables, stalling updates to most of the database in order to circumvent the lack of MVCC.

    Whether you want to admit it or not, the drawbacks in MySQL are *VERY* real, and the MySQL documentation tries to play them off as minimal annoyances.

    I find their lack of responsibility in this more offensive than a few casually posted insults.
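The gap being described is concrete enough to sketch. Here's a minimal illustration in Python, using SQLite purely as a stand-in for any database with commit/rollback semantics (the table, names, and amounts are invented for the example): a failure halfway through a multi-statement update is undone atomically, with no table locking needed.

```python
import sqlite3

# Illustration only: SQLite as a stand-in for a transactional RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'")
    # Simulate a crash between the debit and the matching credit.
    raise RuntimeError("failure mid-transfer")
    conn.execute("UPDATE accounts SET balance = balance + 80 WHERE name = 'bob'")
except RuntimeError:
    conn.rollback()  # the half-finished transfer vanishes atomically

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # alice still has 100, bob still has 50
```

With LOCK TABLES as the only tool, the half-finished debit would have been visible (and permanent) the moment the lock was released.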
  • >Anyway, if there was a standard benchmark to run
    >against both DB's from a vanilla install (and by
    >that I mean running './configure' with no flags)
    >then I think that would go a long way towards
    >giving a more balanced view.

    The tests they used ARE standard database benchmarking tests - AS3AP and TPC-C.

    As for configuration, I can't rightly say what they did with it, as I wasn't there.

    However, running them without any options does not necessarily put them on equal footing. For example, Postgres by default does an fsync after every single write, whereas MySQL does not. This SERIOUSLY impacts performance.
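The fsync point is easy to demonstrate in miniature. This is a rough sketch, not a database benchmark: it times small file appends with and without an fsync after each write. Absolute numbers depend entirely on the disk; the gap between the two runs is the point.

```python
import os, tempfile, time

def timed_appends(n, sync):
    """Time n small appends; optionally fsync after each, like Postgres' default."""
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, b"x" * 128)
        if sync:
            os.fsync(fd)  # force the write to stable storage before continuing
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(path)
    return elapsed

no_sync = timed_appends(200, sync=False)
with_sync = timed_appends(200, sync=True)
print(f"no fsync: {no_sync:.4f}s, fsync each write: {with_sync:.4f}s")
```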
  • Not to mention the fact that in the real world, it's not really a viable option to recompile programs/kernels every other week or even shut the server down more than once a month.
  • A year ago - that's about the timeframe in which some VERY serious memory leaks were uncovered and fixed, at the prodding of some folks using it in a web environment.

    When you've got memory leaks, your server will go down, it's just a matter of time. Once the leaks were fixed, the folks I know using AOLserver+Postgresql who were forced to reboot periodically found their problems disappeared.

    We at OpenACS (http://www.openacs.org) have been running our Postgres installations for months at a time with no need to reboot.

    It truly has improved dramatically in the last 15 months or so.

    Am I a biased Postgres fanatic? Not exactly - when I evaluated 6.4 in January, 1999 I decided it was useless for web work, far too slow and crash-prone. When 6.5 was released it was so dramatically improved that I changed my mind.

    And, the OpenACS web toolkit project intends to support both Postgres and Interbase (now that the latter's Open Source), so we're not Postgres-only bigots.
  • ...as someone who has downloaded, installed and used Oracle 8i, IBM's DB2 and Borland's Interbase I can testify that configuring any of these DB's properly is a non-trivial task that can easily be messed up by someone who has no idea what he/she is doing.

    Most of the major DB companies provide DB's for independent benchmarking from organisations like the Transaction Processing Performance Council [tpc.org]. As can be seen from this story [zdnet.com] these tests involve several thousand transactions per second and not several hundred as reached by this Great Bridge sponsored benchmark.
    The Queue Principle
  • The review says "proprietary" because they're not allowed to release the names. It's true, I've read the contracts (I use Oracle as well as Postgres, depending on whether or not I'm picking up the tab). It's not their fault.

    Clearly, if they were open source, their names could be revealed. They have to use SOME word to describe them and owe their users an explanation as to why they don't name names, don't they?

    These contract terms really suck, and that's where you should be venting your anger, rather than accusing the benchmark authors of "FUD" just because they adhere to contract terms.

    They explained this right in the article, too, maybe you should've read it instead of falling asleep.
  • Were you using PG 7.0, or PG 6.5? The optimizer's been improved in PG 7.0, and in my case one particularly annoying query that had a very poor plan chosen by PG 6.5 had a very good plan chosen for PG 7.0.

    You can turn off various join strategies in PG 7.0 via "set" commands. It's not as flexible as Oracle's hint notation to the optimizer, but it can help in situations like the one you describe. At least you get SOME control over the optimizer.
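For reference, the kind of session-level toggling described above looks roughly like this in a psql session (the table and column names here are invented, and the exact spelling of the option names should be checked against the PG 7.0 docs):

```sql
-- Tell the planner to avoid nested-loop joins for this session,
-- then inspect the plan it picks instead.
SET enable_nestloop TO 'off';
EXPLAIN SELECT * FROM orders o, customers c WHERE o.cust_id = c.id;
-- Restore the default when done.
SET enable_nestloop TO 'on';
```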

  • frankly... i somehow doubt they will be able to fix everyone's concerns.

    as many have said before, you cannot add transactions to a non-transactional database. transactions are complex entities which require pervasive changes to the code base, and require some _real_ thought about how to deal with the issues they bring up. MVCC, for example...

    anyways, we have PostgreSQL, it's advanced, it's cool, it has a better interface than mysql! why not improve PostgreSQL rather than hacking up mysql to make it do something it fundamentally wasn't designed to do?
  • Version 7.0.2 is the current recommended production stable release.
  • "It looks like they got those DB/2 results on a system with 128 Xeon CPUs and a system price tag of just $14,232,696! I wonder how well DB/2 would do on the same hardware as the Postgres tests."

    Conversely, I'd like to see how well Postgres would do on that hardware.
  • I read the article. I was asking rhetorically. My point is, they were running the TPC benchmark without the benefit of TPC's expertise. I'll wait until the TPC itself runs it. Also, the price/performance thing is hardware/software. Theoretically, that should give free(beer) databases an edge, right? Hmmmm, wonder why the top ten is MS SQL? Could it be it's the best product for the money? Nah.....
    ---
  • by x-empt ( 127761 ) on Monday August 14, 2000 @11:49AM (#855918) Homepage
    It was added a while back. And still, MySQL (when using MyISAM) is a lot faster than competing databases. Sorry, I've done my independent benchmarks in the "real world".

    Don't believe me, just test them out for yourself. MySQL opens a can of whoopass, but people just don't realize that I guess...

    I really strongly encourage everyone to benchmark their database servers independently, instead of trusting these "independent" companies like... well, we all remember MindCraft; can we trust these organizations?
  • Unfortunately they tend to use very expensive machines to do those tests. I don't know of any open source project that can scrape together 4 or 5 million dollars to build a huge machine like that. I would love to see TPC ratings done on affordable 2- to 4-processor machines, though. That would be cool.

    A Dick and a Bush .. You know somebody's gonna get screwed.

  • by Anonymous Coward on Monday August 14, 2000 @11:50AM (#855921)
    The tests were performed using ODBC drivers. ODBC drivers vary widely in quality but are always inferior to the native API of any given database. For Interbase they used the ODBC driver from Interbase 5. This is a notoriously poor performing driver. I've found Interbase to be as fast as SQL Server 7 and nearly as fast as Oracle 8i when accessed using native methods.
  • I'm no lawyer, but I would have to wonder if there isn't a constitutionally protected right to free speech that prevents this sort of "license" from having any force.
  • *IF* you are in a production environment, you need a *real* RDBMS, NOT MySQL (a file system with a SQL interface).

    PostgreSQL sounds cool, will look deeper for my next DB server.

    Visit DC2600 [dc2600.com]
  • That is because it is not a REAL RDBMS.

    Visit DC2600 [dc2600.com]
  • How do you edit those PDF docs?
  • The MySQL online manual [mysql.com] is very thorough. Print it, and it would be better than the books that are out there.

    In fact, I got the O'Reilly book on MySQL (don't get this one; get the New Riders book if you need to have a book) and I found myself referring to the MySQL help file for most things. I certainly trust it more than the book for correct information.

    • Scales to multiple processes? Excellent in that regard, that's why it performs so well with 100 users.
    • Scales well in a multi-processor environment? Due to its architecture currently that's mostly an OS issue.
    • 300GB databases? Not its forte, probably want to buy Oracle today for such huge databases. PG does support large tables (>2GB) but IMO not very efficiently. Of course, for filesystems supporting >2GB files this caveat disappears. Also, there's no tablespace or similar feature to use multiple filesystems in a single database. This shortcoming is being addressed.
    • keep transaction time reasonable for 1000's of users? In Postgres writers never block readers, ala Oracle (absent explicit user-level locks, of course). This was implemented in 6.5, and is the major reason why performance under high concurrency is so vastly superior to earlier versions. On the other hand, I doubt if there are any Postgres installations running thousands of simultaneous users.
    • Online backup? A limited "yes", the pg_dump utility will create a consistent database if it will fit into a single dump file. pg_dump is one area that is being worked on and improved, and improved on-line backup facilities are planned.
    • Proven data integrity? It can hose, but heck, Oracle partially hosed me last week so what do you expect? :) I've never had a situation arise where I've not been able to retrieve my data, but there are theoretical holes in the current implementation. Full write-ahead logging is being implemented right now, and not only will this even further increase reliability but should improve speed (if you put the log on another platter separate from the database that is).
    • triggers, stored procedures, JDBC etc? Not sure about DRDA but you can always write your own and submit it to the project if not. ODBC and JDBC are supported, as are triggers and stored procedures (in a variety of languages, not only SQL).
    • Full RDBMS support? If you mean transaction semantics and referential integrity, yes.
    • SQL'99 core compatibility? Currently, SQL'92 is the goal. I'm sure SQL'99 will become the goal when the time's ripe. The referential integrity stuff is closer to SQL'99 than Oracle is (my one and only contribution to the project was to help implement RI).
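The "writers never block readers" point above is easy to see in miniature. This sketch uses SQLite in WAL mode from Python purely as a stand-in for MVCC-style snapshot reads; it is not Postgres, but the observable behavior is the same idea: a reader inside a transaction keeps its snapshot while a writer commits, and neither waits on the other.

```python
import os, sqlite3, tempfile

# WAL mode needs a real file (it doesn't apply to :memory: databases).
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()

reader = sqlite3.connect(path)
reader.execute("BEGIN")
snapshot = reader.execute("SELECT x FROM t").fetchone()[0]  # snapshot taken at first read

writer.execute("UPDATE t SET x = 2")
writer.commit()  # the open read transaction does not block this commit

still_old = reader.execute("SELECT x FROM t").fetchone()[0]  # still sees 1
reader.execute("COMMIT")
fresh = reader.execute("SELECT x FROM t").fetchone()[0]      # now sees 2

print(snapshot, still_old, fresh)
```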
  • by chuckw ( 15728 )
    This is pretty surprising to me. The reason I originally chose MySQL was because of Linux Journal's reporting of how it stacked up with Oracle and the other commercial DB's. Hmmmm...
    --
    Quantum Linux Laboratories - Accelerating Business with Linux
    * Education
    * Integration
    * Support
  • Moderators, start stocking up on your points, because it'll get ugly.

    --
    Ben Kosse

  • Could everyone please do a search for postgres or mysql and read the previous stories before repeating the SAME points that have been beaten to death about 1000 times here? Like:

    - But mysql is faster! - who cares, mysql doesn't do transactions
    - who cares, most web apps don't need transactions
    - well i hope you don't expect to do anything important without transactions!
    etc etc etc etc

    At least MySql is GPL now, so no one can go off on THAT. But seriously, I find this story interesting, but geezus, let's get another section for these types of story. Like a Database section, or maybe a Flamebait section.


  • >The reason MySQL was slower was because they used
    >the ODBC drivers. The MySQL ODBC drivers are known
    >to be significantly slower than the native drivers.

    I relayed this opinion to the PostgreSQL development team, and they insist that this cannot be the case. I'm going to take the liberty of quoting Thomas Lockhart: "If it were due to the ODBC driver, then MySQL and PostgreSQL would not have had comparable performance in the 1-2 user case. The ODBC driver is a per-client interface so would have no role as the number of users goes up."

    They make the point that AS3AP test is read-only and should have been a piece-of-cake for MySQL, if you believed MySQL's slashdot-enhanced reputation as the ultimate in web automation.

    Instead, it's looking as though MySQL is only fast when it doesn't matter: when there aren't very many people using it.

    The Postgresql people insist that the results of these benchmarks were completely unexpected... they knew that they'd improved things a lot, but they'd never made an attempt to measure it before. It's understandable to suspect that the "independent agency" was watching which side its bread was buttered on, but it's also obvious that Great Bridge really needed to know this information, *someone* had to do these tests, and who else was going to pay for them?

    It's all very well and good to take benchmarks with the proverbial grain of salt, but you can't just throw out data because you don't like the result.

  • by rotten_ ( 132663 ) on Monday August 14, 2000 @11:54AM (#855959)
    This 'article' is nothing more than a press release from Great Bridge [greatbridge.com].

    There may be some additional information learned by reading the results of the benchmark from
    http://www.tpc.org/New_Result/TPCC_Results.html [tpc.org]

    Although I am having a hard time finding any reference to Postgres on that page. Can anyone find any better references?

    -k
  • The first "top ten results by price/performance" stuck strictly with MS SQL Server, which we all know doesn't run on Linux. Besides, how do you rate a product in price when it has no price? Postgres can be downloaded and used freely.

    um... Not unless your hardware and personnel are free. From http://www.tpc.org/faq_TPCC.html

    Q: What do the TPC's price/performance numbers mean?

    A: TPC's price/performance numbers (e.g. $550 per tpmC) may not be what you think they are. When first analyzing the TPC price/performance numbers, most people mistakenly believe they are looking at the cost of the computer or host machine. That is just one component, and not always the major component of the TPC's pricing methodology. In general, TPC benchmarks are system-wide benchmarks, encompassing almost all cost dimensions of an entire system environment the user might purchase, including terminals, communications equipment, software (transaction monitors and database software), computer system or host, backup storage, and three years maintenance cost. Therefore, if the total system cost is $859,100 and the throughput is 1562 tpmC, the price/performance is derived by taking the price of the entire system ($859,100) divided by the performance (1562 tpmC), which equals $550 per tpmC.
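The FAQ's arithmetic can be spelled out directly: total system price (hardware, software, and three years of maintenance) divided by throughput in tpmC gives dollars per tpmC.

```python
# The worked example from the TPC FAQ quoted above.
total_system_cost = 859_100   # dollars, for the entire system environment
throughput = 1562             # transactions per minute (tpmC)

price_performance = total_system_cost / throughput
print(round(price_performance))  # → 550 dollars per tpmC
```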
  • by carlos_benj ( 140796 ) on Monday August 14, 2000 @11:59AM (#855968) Journal
    I am quite surprised that the Postgres results are almost 3 times that of proprietary databases. That seems kind of fishy - surely the big proprietary databases aren't THAT slow.

    Someone who was very familiar with Postgres and not very familiar with Oracle, Informix or whatever else might easily obtain that sort of result. A misconfigured database can creep along at a snail's pace.

  • Don't be too sure -- if I were benchmarketing I'd pick leaders like Sybase, IBM UDB, Ingres....

    OTOH, Oracle can be slow if you don't configure it right. Which is one good reason to prohibit benchmarks :-)

  • by jallen02 ( 124384 ) on Monday August 14, 2000 @03:58PM (#855974) Homepage Journal
    This is very very true.

    The native interface to Oracle is *much* quicker. Having used native drivers to Oracle on ColdFusion machines, I can definitely say it works quicker than ODBC.

    At my work we write scalable applications, and in the near future I would like to sit down with a copy of Oracle, tune the hell out of it, and spend a week optimizing the databases (PostgreSQL, Oracle, and MS SQL 7.0), because benchmarking just one system like a database is not really telling. Stick it on one of my cluster of ColdFusion machines and let's see if the DB can keep up with the rest of my application. THEN I will be impressed, not until then.

    I read the TPC benchmarks and I even know how they work, but they still seem useless to me, because we can't afford some $70K/year Oracle DBA to come and constantly tune our databases for us. If I can't learn enough to get it reasonably well configured in a certain time frame, I will go with something that *CAN* handle the load. And if only Oracle can (which I don't believe), then we would be forced to find a solution that performs. But I have a copy of DB2 as well, and you guys would be very surprised at the little places support for databases comes from.

    I have read and installed ColdFusion on Linux, and in the documentation and release notes Allaire has instructions on how to set up both MySQL *AND* PostgreSQL via ODBC. So it's not a native driver, but it is support, and I find it rather impressive that a big company that is more or less leading the way for commercial web application development platforms reaches out to support both MySQL and PostgreSQL.

    My opinion is, if you can't do much with the database, what good is it?

    Jeremy


    If you think education is expensive, try ignorance
  • Has anyone noticed that some open-source marketeers are beginning to really resemble their closed-source peers? This press release just reeks of FUD; how many times is the word "proprietary" snidely used? If it were "innovat*" this could have been written by a Micros~1 lackey - there aren't a lot of other differences.

    I'm as big an open-source and free software proponent as anyone I know, just as opposed to proprietary code, and I've even specifically used and recommended Postgres in the past. And still I think that adopting these methods is not the way, because they represent a lot of what was wrong with the proprietary model. The MBAs who put this little exercise together have no idea what they're selling or why - they just want to sell it now, through any means and at any cost to the truth. The real virtues of Postgres, open-source, or anything else get lost in the hype.

    I personally fell asleep halfway through reading that mess. I've only just now woken up, and dammit, I'm cranky.
  • We paid for the test. Xperts worked as a contractor to Great Bridge. Look, this isn't vendor FUD. This is an open source company sharing its research with the open source community. Feel free to ignore the results if you find them somehow tainted. Our hope is that by releasing our findings, others will try the same tests.

    Regards,
    Ned Lilly
    VP Hacker Relations
    Great Bridge
  • >> MySQL is a nice, simple fast database for smaller applications. But for larger stuff, there are better RDBMS' out there.

    Which probably explains why one of the biggest goals in the slashcode [slashcode.com] is database independence.

    With 200,000+ users, slashdot is not exactly a small application anymore.

  • The MySQL online manual [mysql.com] is very thorough. Print it, and it would be better than the books that are out there.

    It was nice of you to give a URL to documentation, and then suggest that we print it; however, you give the URL of the "by chapter" HTML documentation, which would take a hell of a lot of clicking to print! On that subject, I've been searching high & low on MySQL's site for some time now, looking for a PDF version of the manual that I can download and/or print; there are hints several places on their site that such a document exists, but I can't find it. Can anyone help?

    Thanks,
  • I'm no lawyer, but I would have to wonder if there isn't a constitutionally protected right to free speech that prevents this sort of "license" from having any force.

    You could always publish a benchmark and make yours the test case.

  • PostgreSQL in the past was indeed clunky and slow. However, it has improved vastly in the past two years, and the PG developers have worked very hard to correct problems inherited from the Berkeley days. Unfortunately PostgreSQL still carries that old reputation, but it is a much, much better product now and no longer deserves it.
  • Ummm, maybe you could read my post above: the price includes hardware, thus free(beer) databases should have an edge?
    ---
  • by Dredd13 ( 14750 ) <dredd@megacity.org> on Monday August 14, 2000 @12:05PM (#855999) Homepage
    I found this the most interesting quote:

    The two industry leaders cannot be mentioned by name because their restrictive licensing agreements prohibit anyone who buys their closed source products from publishing their company names in benchmark testing results without the companies' prior approval.

    Apparently, if you succumb to the MS, or Oracle virus (or whomever it was they tested), you're not allowed to talk about your experiences comparing them to other products. I wonder exactly how legal that clause is in the license....

    D

  • Your rules are full of shit. Here's why:
    • If you really care about your data and have $: Oracle. Typical DOT.COM thinking. DB2 scales way further than Oracle, especially on dedicated hardware and dedicated clusters. If you care about your data you will not put it on Oracle's primary platform (slowarez) in the first place. Neither on the secondary - NT.
    • If you really hate your data and don't need transactions: MySql.
    • If you like your data but don't have the green for Oracle: Postgres. Oracle or Informix for a small installation cost peanuts. Check the Oracle price list. But the primary cost of a database has long since shifted to the support contract and the DBA/duhvelopers' salaries. And the expenses you will run up on Postgres are going to be fairly similar.
    • The reasons for choosing a database different than Oracle are:
      • If you do not want to be confined by ancient constraints like 2K for a varchar.
      • If you want to use all ANSI SQL features as well as the additions from the ODBC 3.x spec, like "replace into", and not have to define triggers and stored procedures for the most elementary stuff. These come in handy in all applications that happily write over their old data. Session state engines and the like come to mind.
      • If you want proper database-level support for all ODBC and ANSI SQL types, especially support for logical ops on integers and proper Unix-style timestamps along with full date/time support. Try to do a bitwise AND on integers in Oracle within a select statement, for example. The latter are actually essential for network applications. Here MySQL rules. Period.
    • There are reasons to chose Oracle of course
      • Existing apps, Oracle Financials comes to mind
      • Support contracts and abundance of "educated" personnel. Though the "educated" personnel thinks in categories of using "sqlImport" and other stuff instead of Perl two-liners for data import, and takes half a year for stuff that takes 15 minutes, and does not want to f... learn (there are a few exceptions of course). But it is still some personnel. To use and to hold. And if you stay strictly within the limits of applications and methods where Larry's vision has put you, this personnel is worth its money.
      But the reasons you quoted are complete shit...
  • It was added a while back.

    No it wasn't. MySQL per se does NOT do real transactions. However, MySQL-the-company have partnered to produce MaxSQL [mysql.com], which apparently does.
  • We built a data analysis system with Postgres. With about a million rows of data, we ran into major performance snags in which some complex queries that took only minutes to run on SQL Server took hours to run on Postgres. We examined the EXPLAIN plans to see how the queries were being optimized, and saw that Postgres was choosing an inefficient execution plan. We tried many alternative algorithms including the use of views to assure controlled execution. Unfortunately, Postgres provides no explicit way to control query optimization, and with great disappointment, we eventually gave up.

    We like Postgres. But it couldn't get us home when we needed it to.

    bart
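For what it's worth, the debugging loop described above (inspect the plan, change something, inspect again) can be sketched with any database that exposes its plans. Here's an illustration using SQLite from Python (the table and index names are invented), showing the reported plan change from a full table scan to an index search once an index exists; the exact plan wording varies by SQLite version.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor_id INTEGER, reading REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes each step.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT reading FROM measurements WHERE sensor_id = 7"
before = plan(query)   # full table scan ("SCAN ...")
conn.execute("CREATE INDEX idx_sensor ON measurements(sensor_id)")
after = plan(query)    # index lookup ("SEARCH ... USING INDEX idx_sensor ...")

print(before)
print(after)
```

The frustration in the parent comment is that Postgres (at the time) offered no explicit lever to force the second plan when the optimizer kept choosing the first.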

  • I'm really sick and tired of the flaming directed at MySQL for no other reason than that it is successful.

    The proponents of Postgresql have completely lost any credibility with me. Their unending whining about MySQL's limitations are topped only by how they duck the touchy issue of Postgres performance.

    Today's article is supposed to address that. So Great Bridge plants a deliberately biased "benchmark" (oops, mortal sin, I used the B word) which, unsurprisingly, puts the competitors at a disadvantage by artificially forcing down their performance via ODBC drivers so they will match Postgres' slug-slow performance.

    And what we have is a PRESS RELEASE about a BENCHMARK. Can it get any lower than that? Shame on Postgres and shame on Great Bridge. When MySQL does comparisons, they publish entire tables showing the features for each database in the test, they show the run results, and they make the source code available.

    They don't whine about their competition not being "SQL compatible" (a debatable term in any event). They don't lie about how the competition "isn't relational." They don't sneer about how the competition is "just a front end to the file system." By the way -- many are now arguing that it is better to run Oracle, DB2 and other systems that have a native disk mode through the file system anyway -- see the very informative Slashdot discussion on this a few weeks ago.

    No, Monty and the MySQL folks just keep doing their thing, improving an already good product and sticking to a clearly charted development course. They have responded to the market and GPLed their code and are adding other features. When they get to subselects I'll probably be using it.

    That's right, I'm not even a MySQL user these days. I still use R:Base, everyone's favorite database to piss on in the early 1990s, just like you losers are pissing on MySQL now. Only problem with R:Base (aside from it not being open source, I'm still trying to get their attention on that) is that it has virtually no marketing. But it is rock-solid, as ANSI92 SQL compatible as anyone, supports all the features of Postgres and then some, and runs like a bat out of hell.

    If the Postgres crowd will stop flaming, whining and lying, I will start taking Postgres seriously again.

    -------

  • The MySQL site lists "Alternative Formats" to the HTML documentation, but the link is not active yet.

    My suggestion for you would be to download a program like WebCopier (you can find it at Tucows) so you can download the whole directory at once, no clicking required. I'm not sure what the Linux equivalent would be, but there's undoubtedly something on freshmeat. Then do a global search and replace from the mysql site to your own site or directory. Besides, wouldn't you prefer HTML to PDF? :)

  • by Wee ( 17189 ) on Monday August 14, 2000 @12:07PM (#856020)
    I don't know if it's a good book, but there's a book called PostgreSQL [fatbrain.com] by Jeff Perkins coming out in October. Fatbrain didn't have a description, but Amazon did:
    PostgreSQL is the perfect book for you if you use PostgreSQL at work and on your Web sites wherever you expose data on the Web using Linux and Apache. It covers the new features of PostgreSQL as well as the PostgreSQL processor, which defines all necessary objects in a database, to get acquainted with SQL and to test ideas and verify joins and queries. Database developers for corporate and Web applications will find this book useful. It is geared toward intermediate to advanced developers who have designed and administered databases, but not PostgreSQL. The accompanying CD includes PostgreSQL, plus sample databases and modules.

    If you just want to use it (and not admin it), O'Reilly's Programming the Perl DBI [oreilly.com] has some info on accessing a PostgreSQL DB (hint: it's not that different from any other DB when seen through DBI). Oh yeah, MySQL & mSQL [oreilly.com], also from O'Reilly has a little bit about it (but not very much at all). I guess readmes, man pages and HOW-TOs [linuxdoc.org] are your friends for the next couple months.

    If you're really curious, throw it on a test machine and (if possible) "port" some apps to use Postgres instead of MySQL or whatever. You probably won't reach any real conclusion (or do nearly enough work to justify moving to another DB for a production environment), but the effort will very likely get you very familiar with how it works, how to set it up, how to admin it, its performance, its quirks, etc. That's both a good and a bad thing, BTW... :-)

    -B

  • They ran the tests through ODBC, meaning that the performance of the driver (or lack thereof) becomes a huge bottleneck. All this benchmark tells us is that Postgres has a well-optimized ODBC driver. It says very little about the underlying performance of the RDBMSs in question.
  • It looks like they got those DB/2 results on a system with 128 Xeon CPUs and a system price tag of just $14,232,696! I wonder how well DB/2 would do on the same hardware as the Postgres tests. Major difference in hardware here. :) The press release about Postgres says:
    Xperts ran the benchmark tests on Compaq Proliant ML350 servers with 512 mb of RAM and two 18.2 Gb hard disks, equipped with Intel Pentium III processors and Red Hat Linux 6.1 and Windows NT operating systems.
  • The article/press release/marketing FUD does not lend itself well to peer review, which is as important to technical journalism as it is in scientific circles.

    Here are the questions that came to my mind:

    1. Were the DBs administered by a competent DBA? Oracle 8i can scream or crawl, depending on the amount of tuning you do. So can most other database products. An RDBMS is not a bloody spreadsheet; it requires a *ton* of tuning to perform optimally.
    2. What version(s) of the "proprietary" databases were they using? They never mentioned that they were using the latest versions, which could mean that they were comparing the latest PostgreSQL to Oracle 6.
    3. What server hardware/network hardware/client hardware were the benchmarkers using? "Same computers" could mean that they were benchmarking PostgreSQL on a GB switched ethernet, and the rest of the RDBMSs on a $10 10Mbps hub. Not exactly an apples-to-apples comparison.

    Of course, they may not want to reveal this information for fear of peer review. *sigh*.


    --
  • if you read my comments, i was QUOTING what everyone usually says knobbus.

    sig:

  • I'm a big fan of pgsql, but I doubt that these tests really help the cause. In relative terms they're pretty good. In absolute terms, they suck.

    Quad Xeon machines are doing around 25,000 transactions per minute on the real tpc tests (here [tpc.org]) so for a 2-cpu machine to do 300 per minute is not terribly impressive. I think it's probably the trivial hardware that was holding those test back, though, rather than postgres per se.

    With only two disks the tests were almost certainly disk-bound, which would explain the striking similarity in the TPC-C results for all three vendors. I doubt any of the database systems really got a chance to hit their stride.

    So the bottom line is that we still don't know what postgres can do given reasonable HW, by which I mean at least 4 CPU's, 2GB memory, and 16 disks.

    Hello, Great Bridge?

  • Hmm... I may have to take another look at Postgres. I've been using MySql in the name of speed and because, for what I'm doing these days, I don't really need Postgres' more advanced features. I had heard that Postgres was slow as hell and a serious resource hog, but I'll have to do some testing of my own. Is there a Postgres admin here who'd like to tell us what kind of resources Postgres demands? After all, the horror stories I've heard might have been coming from people trying to run it on a 386 with 6 megs of RAM.

    --
  • It's one thing to claim to be enterprise-ready as a database product. It is quite another to be one.

    Before I get started, I should hasten to mention that I work on developing DB2 UDB and therefore anything I write is biased and should be viewed as such :-)

    Enterprise-ready is one of those phrases which gets bounced around a lot. But what does it mean for relational databases? In my opinion, it at least includes the following:

    • Scales well to multiple processors - not just 2 or 4, but 32, 64 and up.
    • Scales well to multiple machines doing the processing (MPP) - look for performance to increase as a close-to-linear function of the number of nodes
    • Is able to cope with 300GB+ databases - modern data requirements are only going up, and TB databases are now common.
    • Is able to keep transaction times down to reasonable levels for thousands of users.
    • Has online backup facilities so the database can be backed up without downtime.
    • Has proven data integrity
    • Has proven uptime - i.e. can look for >99.9% uptime.
    • Supports triggers, stored procedures and every access method you can think of (JDBC, ODBC, DRDA)
    • Full RDBMS support.
    • SQL '99 core compatibility.

    At this point, I don't know what the score is for PostgreSQL on the above. Any expert care to comment?

    Cheers,

    Toby Haynes

  • by denshi ( 173594 ) <toddg@math.utexas.edu> on Monday August 14, 2000 @05:29PM (#856040) Homepage Journal
    Sadly, the docs are not free. Now, I'm not really whining about shelling out the $40 or whatever for the 'interbase handbook' or whatnot; my complaints are these:

    Timeliness. Programs CHANGE! Open projects change QUICKLY! You must have open docs, or at least docs not committed to paper, to revise them.

    Sheer quantity of docs! Ever used Oracle? Oracle8: The Complete Handbook is not all you need! You inevitably end up buying the whole goddamn bookshelf! RDBMSs are complex beasts! With that in mind, closed docs means that such a project can't be called 'free' after all.

    I guess it's 'Raymond Open' not 'Stallman Free'.
  • Bullshit! If that were "fair" then Consumer Reports would have nothing to do. Buying competing products and pitting them against each other under identical circumstances is a time-honored tradition. You're even allowed to make TV commercials about it (to wit, the "Pepsi Challenge" ads are nice examples, as are any which have statements like "FooWidget outperforms BarWidget 2-to-1 on {whatever}", which happen all the time). It'd be VERY hard to enforce that clause of the license, methinks. D
  • As you can see, PostgreSQL IS NO LONGER the klunky and slow product that it used to be years ago. It has improved VASTLY in the last couple of years and its developers have worked very hard to correct problems inherited from the Berkeley code.

    It's a pity that users new to database concepts and database-backed websites are misled to believe that MySQL is a robust database product. It's NOT! It wasn't designed or written to be.

    Without

    • Transactions (recently partially added in MySQL)
    • Procedural languages (PG supports at least 3: PL/PGSQL, PL/Tcl, PL/Perl)
    • Subqueries (e.g.: SELECT foo FROM bar WHERE foo IN (SELECT foo FROM bar2))
    • Multi-Version Concurrency Control (absolutely essential for websites - readers don't wait for writers and writers don't wait for readers)
    • Referential Integrity
    • Much of the SQL92 standard
    MySQL should only be used in applications that do not require any security, or where the integrity of the data is not important (why would you want a database of inconsistent data?)

    The features that I mentioned above are crucial to any database product. Without them you will be spending much more time and effort trying to implement in the application level, things that should be handled by the backend.

    Look at the Sourceforge bug reports page and you'll notice that most of the problems they face are due to their poor choice: MySQL. I am not saying that MySQL does not have its place... it does, but it's not in the enterprise and it's not in important data.

    Quit whining about the ODBC deal. Stick to the facts: without these features MySQL is very limited.
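    To make the point concrete, here's the kind of thing subqueries and real transactions buy you, sketched with Python and SQLite purely as a stand-in backend (table and values are made up, not from the benchmark):

```python
import sqlite3

# In-memory database; schema and data are purely illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
cur.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(1, 100), (2, 50), (3, 200)])
conn.commit()

# A subquery of the kind MySQL 3.22 could not run:
cur.execute(
    "SELECT id FROM accounts "
    "WHERE balance > (SELECT AVG(balance) FROM accounts)"
)
rich = [row[0] for row in cur.fetchall()]  # ids with above-average balance

# A transaction rollback: the bogus debit below is undone, not committed.
cur.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
conn.rollback()
cur.execute("SELECT balance FROM accounts WHERE id = 1")
balance = cur.fetchone()[0]  # still 100
```

    Without backend support for either feature, both the filtering and the undo logic end up reimplemented in application code, which is exactly the extra work described above.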

  • People who scream "FUD" without reading the article really annoy me. Mind the all-caps, please, but I feel it is necessary so that this actually gets through some of the skulls out there.

    "THE TWO INDUSTRY LEADERS CANNOT BE MENTIONED BY NAME BECAUSE THEIR RESTRICTIVE LICENSING AGREEMENTS PROHIBIT ANYONE WHO BUYS THEIR CLOSED SOURCE PRODUCTS FROM PUBLISHING THEIR COMPANY NAMES IN BENCHMARK TESTING RESULTS WITHOUT THE COMPANIES' PRIOR APPROVAL."
  • by The Man ( 684 ) on Monday August 14, 2000 @12:34PM (#856050) Homepage
    However, that sort of licensing term is unlikely to stand up in court. It'd be nice if someone with at least half a testicle would stand up and tell the world that the terms are bullshit and most likely illegal.

    I benchmarked Oracle and Microsoft-SQL against one another for box weight (that is, how heavy the software, packaging, and associated manuals are) and found that the differences are scale-dependent. Overall I found that Oracle was heavier.

    Go ahead, guys, sue me. Good luck; you'll need it.

  • by java.bean ( 66111 ) on Monday August 14, 2000 @12:34PM (#856052) Homepage

    There's no such thing as free speech. Haven't you noticed that the First Amendment to the Constitution reads:

    Congress shall make no law [...] abridging the freedom of speech,
    except in those cases where it is deemed to harm corporate profits, [...]
    Libel, slander, copyright, trademark, or patent violations, licensing agreements, saying, posting, or printing anything that someone with more money and more lawyers than you doesn't like... Free speech is a dream. --jb
  • Everyone seems to be missing the fact that it is a press release by a company providing support/etc for Postgres. Now, I don't know about you, but this raises the "benchmarketing" alarms for me.

    Not to say that this isn't true. However, as I browsed the release, I noticed things like "1-100 client connections", which tell me that there is a lot of maneuvering room to pick the best values.

    Jason Pollock
  • According to those benchmarks, postgres is very impressive. I'd like to learn more about it.

    I liked the New Riders book on MySQL. Is there a similar book from any publisher for postgres?

    Or maybe some good web sites?
    Torrey Hoffman (Azog)
  • Bartwol had it right -- it was a combination of memory leaks and having to take the db offline to Vacuum it.

    I have yet to see a database that doesn't require a large amount of "rigamarole": some of it nightly, some of it weekly, and maybe even some of it annually. When you're dealing with something as complex as a modern RDBMS, maintenance is a given. A good DBA will automate most of it, with notification of exceptions, of course

    Well, if you were running Oracle, once the sucker is running it never comes down -- not for backup, schema changes, moving tables to different disks -- nothing. It takes some expertise, to be sure, but it all makes sense.

    You're right that there's a lot of work in a production database -- capacity planning, performance analysis and tuning. A lot of it's kind of fun. On the other hand, things that I consider rigamarole are when the admin has to do stuff that should be the system's jobs: freeing memory pointers and reusing disk space.

    It sounds like the memory leaks have been fixed, and what remains is the vacuum process. For applications that don't need to run 7x24, this is not so bad.
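    For what it's worth, the vacuum step is easy to hand to cron; a sketch, assuming PostgreSQL's bundled vacuumdb wrapper and a database called mydb (the name is illustrative):

```shell
# /etc/crontab entry: vacuum nightly at 3:00 AM and refresh
# optimizer statistics, run as the postgres user.
0 3 * * *  postgres  vacuumdb --analyze mydb
```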

  • by Doviende ( 13523 ) on Monday August 14, 2000 @11:39AM (#856059) Homepage
    i didn't see who the leading proprietary databases were that they competed against....anyone else find some details?

    also, i'm all for postgres, but doesn't it seem funny that their business is based on postgres solutions, and now they come out with this "independent benchmark" claiming that postgres is the best?

    This could be another attempt at "benchmarketing" ;)

    -Doviende

    "The value of a man resides in what he gives,
    and not in what he is capable of receiving."

  • This reads more like an ad than a benchmark. Then again, it wouldn't be tough to beat MySQL. Maybe the two 'leading closed source' databases were MS Access and FoxPro.....

  • by Kiro ( 220724 ) on Monday August 14, 2000 @11:40AM (#856072)
    Interbase fits right into a Linux environment.
    Borland has just recently released its source code [borland.com] and so what we have now is an open-source, royalty-free, Borland-quality database to use and abuse.

    Download links are:
    Client and server source code [conxion.com]
    Server Linux binaries [conxion.com]

    From personal experience, Interbase is perfect for a tight-budget situation where you need to serve a medium-size userbase.

    --
    Kiro

  • Er... I really can't comment, other than to say that it's not IBM, Informix, or Sybase ...

    Why can't you comment? Is it because of restrictive EULAs that won't let you disclose the results of benchmarks?
  • There's another missing feature that one expects from a real database: stored procedures. Yes, postgreSQL has user-defined functions, but those are intended only to return a single value. Yes, they can return an ``aggregate'', so you can effectively return a single row, but if you need to return a table, forget it.

    There are two edges to the stored procedure sword: no two databases appear to implement them in the same way, so if you contemplate changing databases, it is more ``portable'' and ``modular'' to package such processing in your portable and object-oriented application language libraries, thus obviating the need for stored procedures. On the other hand, if you're more likely to change your language than your database, then putting your logic into your database makes a lot of sense.

    The fundamental problem is that, as a standard, SQL isn't. A real standard would cover the whole system, including protocol for connection, C and/or language independent API, authentication, authorization, mechanisms for extensions (e.g., stored procedures), mandatory data types, and more. Put it in a series of RFC's, and then we'd have a real competition. There is no motivation for the big commercial database companies to do any of this; once the open source market begins to dominate, however, there may be some progress. Look for real standards for databases in about 2020.
  • ...against DB2, Oracle, or SQL Server.

    Probably the 2 "commercial databases" were 8 year old copies of Informix and Paradox or something similar...
  • This isn't surprising - everything I have ever read/seen/experienced about MySQL says that it's a pretty fast and simple database for smaller applications, but isn't very scalable (both in terms of features and performance).

    I am quite surprised that the Postgres results are almost 3 times that of proprietary databases. That seems kind of fishy - surely the big proprietary databases aren't THAT slow.

    But either way, it confirms what lots of people already knew: MySQL is a nice, simple fast database for smaller applications. But for larger stuff, there are better RDBMS' out there.
  • They didn't do the test because MySQL doesn't support the SQL standard. Is there something wrong with a benchmark biased against non-conformant programs?

    Most people who know much about MySQL know it doesn't have transactions and is best used as a read-mostly, write-some database. It's fine for that. It is not fine for big systems.

    --
  • by lal ( 29527 )
    If you're looking for some facts, check out the original press release [greatbridge.com]. Highlights:
    1. All tests used ODBC.
    2. Neither MySQL nor Interbase 6.0 was tested in TPC-C. MySQL doesn't have enough SQL92 conformance. Interbase 6.0's ODBC driver isn't ready yet. They tried to use Interbase 5.0 but couldn't get it to work.
    3. The test where all databases competed is the AS3AP test. A little more research [benchmarkresources.com] shows that this is a test with mixed updates and retrievals. MySQL 3.22 is known to have poor performance with a large number of mixed updates and retrievals. This may explain why the MySQL line peaked and then fell off for this test.
  • Well, considering that they couldn't mention Oracle by name even if it was involved in performance testing (because of its draconian no-reviews license agreement) I don't see how you can rule it out.

    Do any other RDBMS' license agreements have such clauses?

    Informix does.

  • Postgres doesn't support transaction logging, replication, and other goodies that the big databases support. That could be partially responsible.

    --
  • Let me preface my remarks by stating that I am a huge PostgreSQL fan. I personally believe that the added features that you get with PostgreSQL are very important, and so I am not very interested in MySQL.

    That being said the benchmarks in question definitely play to PostgreSQL's strengths and MySQL's weaknesses. The AS3AP (which is the test where PostgreSQL was pitted against MySQL) involves transactions. PostgreSQL has some very sophisticated support for transactions (MVCC) and MySQL has their kludge interface to Sleepycat's DBM. That alone would explain the numbers.

    Which only goes to show what everyone has been saying forever. If you don't need transactions or subselects then MySQL will make you very happy. If you do, well then, you might want to try another product.

  • Ahh, it hasn't taken long for the MySQL weenies to crawl out of the woodwork.

    They compared the bleeding edge postgres (7.0)

    PostgreSQL 7.0 is the current production release of PostgreSQL. Try getting your facts right.


    --
    My name is Sue,
    How do you do?
    Now you gonna die!
  • As others have said, putting a SQL front end on Sleepycat's Berkeley DB hardly makes MySQL a transactional database.

    Furthermore, MySQL's braindead locking scheme really chokes it when it comes down to massive concurrent access with both readers and writers.

    For fuck's sake, I'm tired of hearing MySQL apologists push it as the ultimate solution...

    One little problem is that Oracle/PostgreSQL/Interbase are _not_ MySQL's competitors! I'd say its closest competing database is libgdbm or libdb, or maybe grep+perl or something like that, in which case I would agree.

    MySQL is a little out of its league when you compare it to those "big" databases.

    And if you fuckers moderate me down for putting forth computer science truth I will haunt you for the rest of your lives ;-)

    heh

    ok, take it easy, dont burst a vein.
  • i didn't see who the leading proprietary databases were that they competed against....anyone else find some details?

    The other 2 DBs can't be named due to their draconian licensing terms. From the article:

    Postgres consistently matched the performance of the two leading proprietary database applications. The two industry leaders cannot be mentioned by name because their restrictive licensing agreements prohibit anyone who buys their closed source products from publishing their company names in benchmark testing results without the companies' prior approval.

    i'm all for postgres, but doesn't it seem funny that their business is based on postgres solutions, and now they come out with this "independent benchmark" claiming that postgres is the best?

    I'm not saying this benchmark wasn't biased (it may well have been, and I'm sure many in the MySQL community would like to believe that), but this benchmark was NOT performed by Great Bridge (the company you refer to). From the article:

    The tests were conducted by Xperts Inc. of Richmond, Virginia, an independent technology solutions company, using Quest Software's Benchmark Factory application.

  • Comparing a bleeding edge product (postgres 7.0)

    7.0 is the stable release of Postgres, not the "bleeding edge" version. You get that from CVS.

    Experimental Transaction support

    --
    My name is Sue,
    How do you do?
    Now you gonna die!
  • Timeliness. Programs CHANGE! Open projects change QUICKLY! You must have open docs, or at least docs not committed to paper, to revise them.
    Programs change, docs don't. Other than a few projects like Apache and PostgreSQL, most documentation lags so badly that the 'free' documentation is useless. It's unencumbered, but the developers are too busy to update the documentation and no outside parties step up to the plate.

    Writing documentation isn't sexy enough for most people yet those same people aren't competent enough to contribute code... so they (myself included) contribute nothing.

    In a lot of cases I'd rather have non-free but up-to-date documentation.

  • by trippd6 ( 20793 ) on Monday August 14, 2000 @11:41AM (#856103) Homepage
    The reason MySQL was slower is that they used the ODBC drivers. The MySQL ODBC drivers are known to be significantly slower than the native drivers. Although I agree that ODBC is probably the best way to test a large number of databases, I would like to see the results of all the databases tested using native drivers.

    -Tripp
  • by Pfhreakaz0id ( 82141 ) on Tuesday August 15, 2000 @03:29AM (#856104)
    Was it not www.tpc.org [tpc.org] that ran these tests? In that case, I don't put much stock in them. A look at the top ten results by price/performance [tpc.org] or even the complete results by database vendor [tpc.org] shows no mention of Postgres. I'll believe it when I see these results ratified by the TPC.
    ---
  • by EvlG ( 24576 ) on Monday August 14, 2000 @06:42PM (#856105)
    Something I think many people, benchmarkers included, forget is that the drivers are important too. What I mean is, if I am going to use a piece of hardware/RDBMS/etc... then what drivers I can use with it are going to be an integral part of my overall experience with the product.

    If one DBMS/3D Card has better drivers, even though it is "slower in theory", then that means that my overall experience will be a better one than the "theoretically faster, but with crappier drivers" product.

    What does this mean? Trying to equalize products on drivers is often an exercise in finding which product has the most tuned driver. In the case of Postgres, it appears their ODBC driver is tuned much better than the others. However, very few people I know use ODBC drivers for MySQL, and not many use them for Oracle either. They all use the native drivers. Thus, this benchmark doesn't mean anything to them, because a non-real-world situation was benchmarked.

    I wish people would perform real-world benchmarks: i.e., run what people would actually run. That's one thing that I really like about the gamer-oriented hardware review sites. They post a bunch of meaningless BusinessMark2000 and AppMark2k scores, but they also go in and show you how fast the actual games will play on the hardware. That is CRUCIAL to my purchasing decisions. RDBMS vendors should benchmark one database with its best-performing driver vs. another database with its best-performing driver. Then we could really get an inkling of an idea as to how the thing will really perform in the field.

    Testing with the same drivers only looks fair; in reality, as in this Postgres benchmark, it was likely the deciding factor to making Postgres "trounce" the competition.
  • by dfallon ( 19751 ) on Monday August 14, 2000 @11:42AM (#856106)
    A few comments... The most noticeable and glaring issue is that the "independent study" was (surprise, surprise) commissioned by Great Bridge, and Great Bridge's reason for existence is to sell support and services for Postgres. Not the strongest indicator of impartiality. The entire press release is designed to sell Postgres, not to provide a fair comparison.

    Issue two. They compared the bleeding-edge Postgres (7.0) with the old-as-heck MySQL (3.22) - they're now up to revision *22* of the development series for MySQL - that's a pretty huge amount of changes. I would have been much more impressed with this if they had run the comparison between 3.23.22 and 7.0. As with everything, folks, don't believe benchmarks, especially ones in press releases. Believe real-world tests. I've used both, and I'm using MySQL 3.23.22 for my site.
  • by Mandomania ( 151423 ) <mondo@mando.org> on Monday August 14, 2000 @11:44AM (#856115) Homepage
    If you take a look here [postgresql.org] you'll see there's one in the works. However, the author has been kind enough to post the book here [postgresql.org] :) -- Mando
  • by carlos_benj ( 140796 ) on Monday August 14, 2000 @11:45AM (#856116) Journal
    Why do you suppose the "2 leading commercial databases" were never named? It would be interesting to know what they were.

    The article points out that the companies prohibit publishing benchmark results when you buy their product.

  • I actually had read enough of the AS3AP test to assume that you ran the multi-user tests alongside the single user tests. Other than test 2 of the multi-user tests (Information retrieval) PostgreSQL should have killed MySQL. The second that you start combining writes with those reads MySQL's performance decreases dramatically.

    Which is basically why I use PostgreSQL. I have found that I can even get PostgreSQL nearly up to MySQL's pure read performance simply by tuning the DB, indexing it properly, and vacuuming. Just the addition of subselects is worth the upgrade. To say nothing of stored procedures, triggers, or rules. Used correctly these mechanisms not only speed development, but they allow you to access your data quickly and easily. For example, complex queries can be made extremely fast simply by coding in the necessary stored procedures.

    Combine PostgreSQL's many strengths with a top-notch development team and the most useful mailing lists I have ever encountered and you have a winning project.
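    The tuning steps mentioned above mostly boil down to a couple of statements; a sketch (the table and column names are hypothetical, not from the benchmark):

```sql
-- Hypothetical schema; index the column your complex query filters on.
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- Reclaim dead space and refresh planner statistics so the index is used.
VACUUM ANALYZE orders;
```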

  • by jhoffmann ( 42839 ) on Monday August 14, 2000 @11:45AM (#856120) Homepage
    everybody wants to know what they are. one has to be MS SQL server... why else would they even have mentioned NT server? every other database will run on linux. the other is probably oracle, just given ned lilly's discussion of them on the postgres mailing list, although i wouldn't be surprised if it was something else. here's a reference to a message from ned:

    http://www.postgresql.org/mhonarc/pgsql-general/2000-06/msg00390.html [postgresql.org]

  • I have yet to see a database that doesn't require a large amount of "rigamarole": some of it nightly, some of it weekly, and maybe even some of it annually. When you're dealing with something as complex as a modern RDBMS, maintenance is a given. A good DBA will automate most of it, with notification of exceptions, of course.

    I'd be surprised if your installation (you don't say what you chose) doesn't have a lot of maintenance associated with it, even if cron's doing all of it for you.

    --
  • OK, this is great news, but performance isn't everything. I didn't go with Postgres for one of my projects last year for a couple of reasons.

    The first and biggest reason is that some people who ran it said that it wasn't as stable as the commercial databases. There was some rigamarole you had to do every week or so, like rebooting the daemon or running some utility or another to keep the system from losing its mind. Sorry I can't recall better, but this was a year or so ago. In any case, once I set a database server up and running I don't want to have to do anything with it unless something changes in its environment. Does anybody remember this?

    Also, this is really more of a nit, but Postgres also lacked some important SQL constructs like outer joins and foreign keys. OK, you can get around this, but the solutions are ugly and this also makes porting stuff a pain. Last time I checked, foreign keys had been done, but while outer joins were on the list of features to add, no work had started.

    It would be nice if it supported all ANSI SQL intermediate language constructs. This would greatly facilitate porting to Postgres.

  • by sjames ( 1099 ) on Monday August 14, 2000 @12:59PM (#856127) Homepage Journal

    It was added a while back. And still, MySQL (when using MyISAM) is a lot faster than competing databases. Sorry, I've done my independent benchmarks in the "real world".

    Too bad the transactions only work with Berkeley DB tables (not MyISAM).

    I tried MySQL again last week to see if I could certify it for use on a failover cluster (logging and transactions are VERY important for that!). I didn't even get to the part about simulating a node failure before it flunked the test (which postgreSQL passed). I found that sometimes it would silently ignore BEGIN and ROLLBACK. That may be a Beta issue, so I'll look again when it's released. If the version in use doesn't have the support for Berkeley tables compiled in, it will silently create a MyISAM table instead. Then, it will silently ignore any transaction commands (BEGIN, COMMIT, ROLLBACK).

    It'd be one thing if it simply failed with errors returned, but silent failure to use transactions is the stuff nightmares are made of! I'd actually rather have it core dump than do that.
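    A minimal version of that certification test can be scripted; here's a sketch using SQLite via Python's DB-API purely as a stand-in (against MySQL you'd point the same logic at its own driver; the probe table name is made up):

```python
import sqlite3

def rollback_really_works(conn):
    """Insert a marker row, roll back, and verify the row is gone.

    Guards against backends that silently ignore BEGIN/ROLLBACK,
    as described above for MyISAM tables."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS _txn_probe (x INTEGER)")
    conn.commit()
    cur.execute("INSERT INTO _txn_probe VALUES (1)")
    conn.rollback()
    cur.execute("SELECT COUNT(*) FROM _txn_probe")
    # On a backend with real transactions the rollback leaves zero rows.
    return cur.fetchone()[0] == 0

conn = sqlite3.connect(":memory:")
ok = rollback_really_works(conn)
```

    A backend that quietly commits the insert fails this check immediately, which beats finding out during a node failover.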

  • It's a rather interesting insight into how badly IBM has done its database marketing that whenever your 'quasi-technical man in the street' lists databases, no one seems to remember DB2. Charles Miller
    --
  • Any tests that you might have done prior to 6.5.3 are ridiculously outdated at this point. And the difference between 6.5.3 and 7.0.2 is quite noticeable.

    PostgreSQL has come a long way since the versions that you are talking about.

  • by Jason Earl ( 1894 ) on Monday August 14, 2000 @01:09PM (#856133) Homepage Journal

    There is a PDF for the upcoming Addison-Wesley book, by none other than PostgreSQL's Bruce Momjian, available here [postgresql.org].

  • Actually, it depends. If you're talking about raw speed then it's often hard to beat MySQL since it lacks most of the features that slow down databases. MySQL has a reputation for being fast, but it doesn't have a reputation for being a replacement for a true DBMS.
  • MySQL didn't take part in the TPC-C part of the test, because MySQL can't handle TPC-C. It doesn't implement enough of SQL2 for that.
    In my professional opinion, MySQL needs to drop the My part and adhere better to the SQL part. At least that would make my life easier. :)
  • There was a comment on the article at the bottom that gives a link to the original story, which has some pictures. You can find it Here [greatbridge.com]
  • The "Real World" uses stored procedures and transactions


    Transactions, yes.


    Stored procedures - sometimes yes, sometimes no. There are advantages (increased speed) and disadvantages (decreased flexibility, portability) to using stored procedures. Some shops always use them, some never use them, and the rest mix & match as they see fit. Point being, there definitely are "Real World", well-written db systems that don't heavily use stored procedures.

  • by nedlilly ( 121152 ) on Monday August 14, 2000 @07:23PM (#856147) Homepage
    Er... I really can't comment, other than to say that it's not IBM, Informix, or Sybase ...

    Ned Lilly
    VP Hacker Relations
    Great Bridge
  • There seems to be a lot of infighting between Borland and some people who were promised a spin-off company but were snubbed when Borland changed its mind. A good number of Borland employees quit and are trying to form a new company which they refer to as "newco". Apparently this newco is looking for $500K (you read that right) in commitments before they will actually start anything. Until then they have retracted the ODBC driver and the online developer documentation they have been working on.

    Right now the source is on a couple of CVS trees, and some projects have been formed, but Borland still owns the test suite, the name, and a few crucial pieces like the "official documents".

    To me this smells like two commercial entities fighting for control over an open source project, and it's looking like the rest of the user base is being used as a weapon. On the one hand you have Borland saying "you can have this source but you can't have these other pieces", and on the other you have newco saying "we are going to hold on to the stuff we developed till we get paid". Either way, it doesn't sound much like an open source project to me.

    Maybe a user-driven project will eventually take off, maybe some uninterested third party will write an open source ODBC driver, maybe user-driven documentation will spring forth like the PHP documentation, but that situation does not exist yet.

    This open source thing was flubbed badly by all parties involved. It makes them look petty, and frankly it's embarrassing to watch. IB is a nice database; I hope these people can get their acts together and/or the user community can take charge, but I would guess that will take some time.

    It's apparent

    A Dick and a Bush .. You know somebody's gonna get screwed.
