Refactoring SQL Applications
stoolpigeon writes "My past as a DBA probably makes me a bit biased, but when a database application isn't performing as well as one would like, the reaction I've seen most often focuses on the database side of things. The search for a solution usually centers around tuning db parameters, building (or removing) indexes and, if the budget allows, throwing hardware at the problem. In their new work, Refactoring SQL Applications, Faroult and L'Hermite bring a much wider range of options to the table. There is a lot in this little book for the developer charged with fixing an existing application, and a lot of good information that could save one from making a number of headache-inducing mistakes on a new application." Keep reading for the rest of JR's review.
The book is divided into eight chapters; the first two deal with how to approach a problematic application in general. In the preface the authors say, "This book tries to take a realistic and honest view of the improvement of applications with a strong SQL component, and to define a rational framework for tactical maneuvers." I found this to be true throughout the entire book and was impressed by how well the examples, suggestions and problems echoed my real-life experience. This book is first and foremost practical. There is really almost nothing in the book that does not come immediately to bear upon the problem at hand. I've seen others do a lot less with many more pages.

Refactoring SQL Applications
author: Stephane Faroult with Pascal L'Hermite
pages: 293
publisher: O'Reilly Media, Inc.
rating: 9/10
reviewer: JR Peck
ISBN: 978-0-596-51497-6
summary: Good for a developer charged with fixing an existing application.
The examples and benchmarks are compared across three popular relational database management systems: MySQL, Oracle RDBMS and Microsoft SQL Server. I thought that this brought up a couple of interesting issues that are not directly addressed in the book. The first is that the authors are talking about how to improve performance, not comparing platforms, but the numbers are there and may be of some interest to people who would like to compare them. The second is that I've met a number of people over the years who get quite animated insisting that a good DBA does not need to know any particular product, just the fundamentals. I think Faroult and L'Hermite put this idea to rest, though unintentionally. In order to discuss how best to understand what exactly is happening and how best to remedy issues, they show that it is necessary to have an understanding of platform-specific issues and tools. This is true on two levels. The first is that the location and use of the built-in tools differ for each platform. The second is that what works for one platform does not necessarily work for another.
For example, Chapter Two, "Sanity Checks," contains a section on parsing and bind variables. The authors compare performance when queries are hard coded, when a new prepared statement is created on each iteration (firm coded), and when one prepared statement is prepared once and its parameter value changed on each iteration of a loop (soft coded). On Oracle and SQL Server, performance was poorest with hard coding, better with firm coding and best with soft coding. MySQL did best with soft coding as well, but actually took a performance hit moving from hard coding to firm coding, which had to do with differences in how the MySQL server caches statements. The authors took the time to rewrite their code from Java to C to ensure that the issue was not related to the language or driver. This is not to say that one can ignore RDBMS and SQL fundamentals, but rather that getting top performance requires knowledge of platform-specific issues. This also comes out again when dealing with optimizers.
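The same three styles can be sketched at the SQL level; the book's benchmarks drive this from Java and C, so the following MySQL-flavored snippet (with a hypothetical orders table) is only meant to show the difference between hard coding a literal and binding a parameter:

-- hard coded: a brand new statement string, parsed for every value
SELECT total FROM orders WHERE order_id = 42;

-- soft coded: prepare once, then execute many times with a bound parameter
PREPARE get_order FROM 'SELECT total FROM orders WHERE order_id = ?';
SET @id = 42;
EXECUTE get_order USING @id;
SET @id = 43;
EXECUTE get_order USING @id;
DEALLOCATE PREPARE get_order;

Firm coding amounts to issuing the PREPARE/DEALLOCATE pair inside the loop itself, which is where MySQL took its hit.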
With that in mind, the authors recommend that readers have a solid understanding of SQL and some programming language. Most examples are SQL, and code is given in Java and PHP. There are also examples illustrating SQL extensions, showing procedures, functions, etc. written for all three RDBMS products covered. The authors stick primarily to standard SQL but make note of, and at times show examples of, how things will look in each of the databases. This information is current and reflects the most recent versions of each product.
The fourth chapter, "Testing Framework," is incredibly useful. The authors cover generating test data and then checking the correctness of outcomes through comparison. This is really useful information for anyone working to improve an application, or writing one for the first time. I think it is also a large part of why this book could appeal to new and experienced developers alike, whether they are working on existing or brand new applications. There is a good chance that only the most experienced developers will find nothing new here, or at least no new way to approach a problem. New developers can learn quite a bit and avoid some bad habits and assumptions without having to gain that information the hard way. And the tools for generating random data and large volumes of data, and for comparing results, will provide excellent opportunities for learning and real-world application.
The next three chapters cover dealing with specific types of issues and how to improve performance. The last chapter then quickly describes a scenario of how the authors step into real-world situations and start to attack a problem. This is followed by two appendices. The first contains scripts and samples; the second covers tools that are available to help find and resolve issues. Some of the authors' tools use SQLite, which is discussed briefly in the chapter on creating test data, as some of the tools depend upon it.
I think that it has been a while since I've read a book that could have such a rapid return on investment. There are many suggestions and insights that should enable anyone to squeeze better performance out of just about any database application. While the focus is on the application side, there is plenty that requires understanding and work on the database side as well. There is discussion of the parameters and hardware I mentioned at the start of this review. But rather than being the only options, they are one part of a much larger, systematic approach.
The authors relate that refactoring for this type of application often comes into play when something that used to work does not work any more. This can lead to an environment of high pressure and emotion. The desire for a rapid resolution can lead to casting about in the dark for a quick fix, or to a feeling that cost is no longer as significant since a fix must be had now. The authors argue, and I agree, that this is exactly when a rational, disciplined process of tracking down and fixing issues is the most valuable. The issue, of course, is that someone in a position to do something must have the ability to take that approach. This book will get one well on the way to being in that place. Of course it can't make a brand new developer or DBA an expert. Much like a degree, it can give them some fundamental tools that will allow them to take full advantage of experience as it comes rather than just crashing and burning.
If I could, I'd have any developer on a database-centric application read this, and DBAs as well. There is a lot here for both sides to learn about just how much they depend upon and impact one another. This may be an idealistic dream, especially for larger shops where the relationship between those two groups is often adversarial, but I think such an approach could only make life better for everyone involved. For anyone looking to enter this world on either side of the DBA/developer equation, this may make a nice addition to their education. For the individual wearing both hats, it could be a life saver. In this small book they will learn many things to look out for, as well as gain exposure to some of the similarities and differences among what are arguably the top three relational database management systems right now.
You can purchase Refactoring SQL Applications from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Should have included PostgreSQL and DB2 (Score:5, Insightful)
This study would have carried more weight if it had included PostgreSQL and IBM's DB2. These two databases do more serious work than MySQL although many believe MySQL is more widely deployed.
Re:Should have included PostgreSQL and DB2 (Score:5, Interesting)
I have to agree. Maybe I'm unfairly biased and it's just past performance (both relayed by others and experienced by myself), but I don't trust MySQL for anything more complex than a backend for a simple website. If I want a reliable open source database for a serious project, I'd go for PostgreSQL in a heartbeat.
I actually was looking at KnowledgeTree recently as a possible solution for a document management system for our organization (we have a clunky old system and some others here are pushing SharePoint as a replacement . . .), but its use of MySQL pretty much stopped that dead in its tracks. I know they'd prefer MS SQL Server as an option here. I could *probably* talk with them about it if it supported PgSQL. But MySQL isn't even an option to discuss for something this important.
Re:Should have included PostgreSQL and DB2 (Score:4, Insightful)
You forgot the other often-repeated/never-researched traditional Slashdot claims seen in every MySQL comment section, such as that MySQL doesn't do transactions and doesn't do replication, both of which are necessary for each and every database install ever done, past, present or future... (Of course it has had those features for about half a decade, maybe more, certainly since around the 4.0 range, but that never slows down the folks who repeat those claims.)
Then there are the recurring claims that MySQL is useless because it doesn't have some bizarre feature that you might personally think is useful, therefore any database without it is useless for everyone doing anything, like perhaps direct file importation of COBOL ISAM punch cards, or an internal database representation for complex four-dimensional vectors. You know, the stuff everyone uses.
Then there are the posts explaining how a failing hard drive on an old Gateway 2000 vaporized the filesystem and/or bad RAM caused endless kernel lockups, and the MySQL software was running on that bad hardware, and correlation always implies causation, so MySQL must be bad too.
Finally, I expect several posts about how they found an obscure bug in the beta 3.23 version back around eight years ago, and therefore they'll never use it again, because that is the only software that has ever had a bug.
Re:Should have included PostgreSQL and DB2 (Score:4, Interesting)
Well... I have been using Postgresql since back WHEN MySQL didn't do transactions.... I still don't trust MySQL's transactions or the new strict mode. At the same time, I have watched PostgreSQL do an absolutely terrific job of running horrendously written queries optimally. Here are two criticisms I have about using MySQL for real application work, especially when you are distributing that application (and thus have little control over how users set up their db's):
1) MySQL transactions are built into the table engines, and by default (last I checked, and meaning you don't install innodb, etc), the tables will not be transactional. This means that if you are building an inhouse app, you can trust it more than you can if you are distributing your software. In short, if you are distributing software you can't guarantee that it is running on a system with transactions without a great deal of headache........ The same goes for referential integrity enforcement.
2) Strict mode can be turned off by any application. This means that the more recent data integrity checks cannot be relied upon. This is an issue on both inhouse and distributed software because it adds quite a bit of overhead to the QA process internally, and can add support headaches in software for distribution.
MySQL is a good db for single-app databases, where data integrity is not a tremendous issue or where you are deploying a separate MySQL instance on a different port. It is quite a bit worse than PostgreSQL for anything else.
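To make the first point concrete, here is the kind of sanity check an installer script might run, using a throwaway probe table; this is only a sketch under those assumptions, not anything from the book:

SHOW VARIABLES LIKE 'sql_mode';     -- is any strict mode set at all?
SHOW ENGINES;                       -- is InnoDB present and enabled?
CREATE TABLE install_probe (id INT PRIMARY KEY) ENGINE=InnoDB;
SHOW WARNINGS;                      -- a "using storage engine MyISAM" warning means no transactions here
DROP TABLE install_probe;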
Re: (Score:2, Interesting)
I write applications that use MySQL that get installed on servers at the client's premises. I'm also the one doing the installation and MySQL config.
Responding to your points:
Re: (Score:3, Interesting)
If the client were to insist on handling the MySQL part, and screwed it up, it would cease to be my problem. Or rather, I would point at the installation and tell them where they fucked up;
Ok, so your point is that this is fine as long as you install MySQL, make sure that Innodb, etc. is installed, etc. Fine. I don't want that responsibility.
About turning off strict mode: if your applications are turning off strict mode, then don't be surprised if you break data integrity. If your clients are writing apps t
also, please reread this: (Score:3, Interesting)
MySQL is a good db for single-app databases, where data integrity is not a tremendous issue or where you are deploying a separate MySQL instance on a different port. It is quite a bit worse than PostgreSQL for anything else.
From your description you are using MySQL for a single-app database where you run a dedicated instance of MySQL for your app. That is not the usage case I was describing, which is a central RDBMS serving out the same data to a myriad of different applications. If you are trying to go
Re: (Score:3, Interesting)
1) MySQL transactions are built into the table engines, and by default (last I checked, and meaning you don't install innodb, etc), the tables will not be transactional. This means that if you are building an inhouse app, you can trust it more than you can if you are distributing your software. In short, if you are distributing software you can't guarantee that it is running on a system with transactions without a great deal of headache........ The same goes for referential integrity enforcement.
It's easy e
Re: (Score:3, Interesting)
On the whole, this is probably a good thing. If the application is under your control, you can use whichever mode you want. If you're relying on somebody else's application, forcing it to use strict mode when it wasn't written for this environment could introduce subtle bugs. Now, if you were to argue that the _existence_ of these different modes of operation was an issue, then I'd probably agree. But given the existence of the modes (and that's unfortunately a necessity for backwards compatibility reasons)
Re:Should have included PostgreSQL and DB2 (Score:4, Interesting)
I'm not holding anything against it in that regard. The simple fact is that I've had two fairly low-traffic MySQL databases become corrupted beyond the point of being usable within the last 3 years. The hardware wasn't at fault here (nor was it old or outdated). Now luckily, this was for something that, while important, wasn't "OMG somebody's head's gonna roll!" critical (namely, it was the quarantine database for amavisd-new on a mail filter, and then later an internal message/call tracking system that we'd written).
For stuff like that, where you can stand to lose the data, or at worst roll to a backup, MySQL has its uses. However, our document management system, for example, contains tons of documents that we must legally keep archived and available (government institution - we have to have it available for FOIA requests). We also have, for instance, land appraisal software keeping databases of property taxing information that we need to bill at the end of the year (with about $50 million annually riding on that - if we don't get those bills out, our whole budget shuts down). I just don't trust that type of thing to MySQL. Not to mention that the "nobody ever got fired for buying Microsoft" mentality does kick in. If the database fails and I have to restore from backup, then if it's MS SQL Server or Oracle the bosses will usually not fault me (as long as I have good backups in place, which I do). If something that critical fails and I used MySQL on the project, I very well might be looking for a new job.
Re: (Score:3, Interesting)
I'm not holding anything against it in that regard. The simple fact is that I've had two fairly low-traffic MySQL databases become corrupted beyond the point of being usable within the last 3 years. The hardware wasn't at fault here (nor was it old or outdated).
I'm not sure what you're doing wrong here, but I think many of us have been running a lot more MySQL databases than that and never experienced corruption. Myself, I have been maintaining on average about 20 MySQL instances spread across 3 different ser
Re: (Score:2)
Seriously, if you believe MySQL to be safe you have no business with database applications, google it - heck, just read the linked sites from sibling posts.
Transactions are only supported by specific engines and even when you think you are running the right engine MySQL might surprise you (usually when you need a rollback the most). Read up on it, your data is being corrupted!
Re: (Score:2)
While I mostly agree with your points, I definitely don't understand why MySQL still creates tables without foreign key support if you don't add the silly "engine=innodb" keyword. Please don't reply with "backward compatibility for broken applications/schemas"...
...
Server version: 5.0.67-0ubuntu6 (Ubuntu)

mysql> create table testtable ( xx integer);
Query OK, 0 rows affected (0.00 sec)

mysql> show create table testtable;
... CREATE TABLE testtable ( ...
) ENGINE=MyISAM DEFAULT CHARSET=latin1 |
Re: (Score:2)
Oddly enough, I am also looking at KnowledgeTree. Very inexpensive and well put together system. What exactly is your problem with mysql in this instance?
I'd also be interested in hearing more about your view of KnowledgeTree as a whole. I was very impressed with its Office integration and overall ease of use compared with more expensive products.
Re: (Score:2)
I've had a few MySQL databases become corrupted in production systems. I've not had any corruption in either MS SQL Server or PostgreSQL databases that have been in use longer and are used much more heavily. More or less just a case of "once bitten twice shy".
As to KnowledgeTree specifically, I didn't use it extensively, but it did look promising. I did have some minor issues defining permissions on certain items, but that was probably just a matter of learning curve. The only downside I'd state was tha
Re: (Score:2)
Yeah, the speed seems to be the only thing I found lacking. I attributed that to the workstation I have this installed on as well. If it turns out that it runs better on a server, we will likely begin using it.
Do you have another system that you are looking at?
Re: (Score:2, Funny)
Exactly, MySQL is nothing but a toy database.
You're right! I wanted to catalog all my LEGO sets and G.I.Joes and it was just useful enough.
Re:Should have included PostgreSQL and DB2 (Score:4, Insightful)
Exactly, MySQL is nothing but a toy database.
This is the problem with most slashdotters. Most of them put up unsupported comments. What I would like you to do is to support your claims by pointing us to websites that have made the "mistake" of first running MySQL and later discovering the "light" in adopting PostgreSQL or otherwise.
Alternatively, you could list websites that use MySQL; those websites can then be branded as "toy websites" by extension.
Re:Should have included PostgreSQL and DB2 (Score:4, Insightful)
As an aside, some of the toy websites that use MySQL include Flickr, Facebook, Wikipedia, Google, Nokia and YouTube.
Re: (Score:2)
Confusing volume with data integrity (Score:5, Insightful)
The typical argument goes something like: 'MySQL suxorz - nobody uses it for serios work' followed by: 'Yeah? well explain that to =HIGH VOLUME SITE=!'
Such responses show a misunderstanding of what serious work is being discussed.
MySQL does a fabulous job with simple, high-volume transactions, exactly the type seen by Yahoogle/Flicker/Blogsites. They need to store simple data (e.g. text) and be able to retrieve it quickly, and for these uses, MySQL is probably a better bet than Postgres or DB2.
But 'serios work' means things like strong, ACID-compliant transactions, row-level locking, strong integrity of field types, and a query scheduler that holds its own when you combine inner, outer, nested subqueries mashing together a dozen or more tables with millions/billions of records/combinations.
Postgres will do this, MySQL won't. MySQL isn't bad because of this, it's just a tool not well suited to this specific job. I use MySQL for website CMS, I use Postgres for financial applications.
Does your dishwasher suck because it does a piss-poor job cleaning your socks? Use the right tool for the job.
Re: (Score:3, Insightful)
PS: Your company is pissing away tens of thousands of dollars on Oracle, when you could use PostgreSQL for free!
And no, I haven't read your requirements, but I'd be intrigued to find out what needs Oracle answers that PostgreSQL can't!
Re:Confusing volume with data integrity (Score:4, Funny)
And no, I haven't read your requirements, but I'd be intrigued to find out what needs Oracle answers that PostgreSQL can't!
See, I have this budget that I need to use up, or I lose the budget, and then my pay grade goes down, and I don't get to keep my secretary and this office with the nice window...
Or, I have this Oracle DBA, and I can't convince him to learn any other platform, because he sez it's bad for his career, and he's my brother in law...
Other than that, I like PostgreSQL real well, too. MS SQL Server is a pretty good low cost solution for a lot of smaller uses, too, if your company insists on spending money.
Re: (Score:3, Informative)
Even non-fuzzy full-text searches on Postgresql are a pain. Yes, they do work great, but the syntax is an abomination.
I loathe Oracle as much as the next guy, but even MySQL does a better job at fuzzy string matching! Really.
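For anyone who hasn't seen it, a minimal PostgreSQL 8.3-style full-text query (table and column names hypothetical) looks roughly like this, which gives a flavor of the syntax being complained about:

SELECT id, title
FROM documents
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'refactor & sql');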
Re:Should have included PostgreSQL and DB2 (Score:5, Informative)
GP is right, MySQL is a toy database, advanced toy but still a toy database.
The absolutely most important thing for a database is data integrity, the ability to trust in your system - when it says "Yeah I saved that for you", it should take catastrophic events to lose it again.
MySQL treats data in a best-effort way: if what you asked it to do doesn't fly with the current config, it reverts to something that looks right enough and goes with that.
Consider a database setup: an admin installs MySQL with the defaults, creates some tables, runs it for a while, decides he needs more (transaction) log space, adjusts the settings and restarts MySQL. It starts, everything is peachy. Transactions are running and being committed, he adds more tables, and then suddenly the shit hits the fan. He does a rollback, MySQL says OK, but lo and behold, the data is still there...
So what went wrong? When he changed the transaction log size, MySQL realized during startup that the actual log file size didn't match the configured size. MySQL can't expand this file on the fly, so InnoDB is disabled and MySQL reverts to MyISAM (I am not kidding, this is what MySQL will do). Any subsequent calls to begin and commit a transaction will be accepted with an OK. Any tables created afterward will be accepted; even with explicit engine syntax, MySQL will just issue "Query OK, 1 warning".
Now the warning will tell you that the InnoDB engine wasn't available, so MySQL chose MyISAM instead - however, most people aren't aware of this behavior, especially since most programming language drivers don't surface these warnings.
A database should at no point _ever!_ say "OK" to a request for something that can't be handled. If I say begin transaction and something isn't right, I want my database to shout at the top of its binary lungs that something is wrong and my data isn't safe.
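If I remember right, sql_mode does include a NO_ENGINE_SUBSTITUTION setting that turns the silent substitution into a hard error; a rough sketch with a hypothetical table:

SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';
CREATE TABLE payments (id INT PRIMARY KEY, amount DECIMAL(10,2)) ENGINE=InnoDB;
-- with the mode set, this now fails outright if InnoDB is unavailable;
-- without it, the only clue is the warning sitting in SHOW WARNINGS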
Re:Should have included PostgreSQL and DB2 (Score:5, Insightful)
It isn't quite that simple, but I suppose one of my earlier (and later abandoned) projects qualifies.
I set up HERMES (a CRM suite written in PHP4) originally on MySQL and eventually discovered that the lack of transactions, etc. was a serious problem (this was back in 1999). I tried to move it over to PostgreSQL and discovered that PostgreSQL was really hard to administer (this was back in 1999). I ended up doing all my prototyping on MySQL, then converting the schemas to PostgreSQL using mysql2pgsql.pl, because this was the only way I could get the data protections I needed (back in 1999).
Now, both MySQL and PostgreSQL have come a long way in the nearly-a-decade since then. MySQL has added transactions (for some table types not installed by default), foreign keys (for some table types not installed by default), strict mode (which can be circumvented at the app level), and the planner has gotten much better. On the other hand, nearly every one of my issues with PostgreSQL has been resolved too. 8.3 has some really impressive new features from a developer perspective, and 8.4 will have even more. I haven't had to do prototyping on MySQL since PostgreSQL 7.3 came out.
I still stand by the statement that "compared to PostgreSQL, MySQL is a toy," and I would expect the gap between them to continue to widen. However, what was limited to light content-management dbs in 1999 (MySQL) has become better able to handle a wider range of single-app dbs. MySQL is still no reasonable choice for an enterprise-wide database management solution, especially where critical data is involved, but there are an increasing number of special cases where it is an option, in particular when compared to Firebird's embedded version, SQLite, and stuff like Sybase's SQL Anywhere. Comparing MySQL to MS SQL, though, only comes out favorably for MySQL where MS SQL is quite a bit more than is needed. PostgreSQL, OTOH, can in most cases compare favorably to Oracle, DB2, and MS SQL.
So the other half of the statement needs to be "but there are some cool things you can do with a toy db...."
Re:Should have included PostgreSQL and DB2 (Score:5, Insightful)
What I would like you to do is to support your claims by pointing us to websites that have made the "mistake" of first running MySQL and later discovering the "light" in adopting PostgreSQL or otherwise.
It's a toy database because when things aren't set up properly, they don't fail. Instead, they succeed silently and corrupt data (see using the wrong file format for your tablespace). Also, the developers are a treat - "we don't need transactions, do integrity checks in the app", followed by "we now have transactions, aren't we cool". Do they have triggers yet? Meanwhile, I have postgres, which works just fine.
Re: (Score:2)
Or alternatively, you could find sights detailing the problems with MySQL. I got bitten by one, where if a table was defined with any timestamp, any row update would automatically update the timestamp whether asked for or not. They thought it a feature; I thought it a disaster that took hours to track down. Then I found that it has no validation, taking dates such as 2005-01-42.
I have no idea whether it still has these particular failings, but by the time I wised up and started looking for web sites deta
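Both behaviors were easy to reproduce on the MySQL versions of that era; a small sketch (table name hypothetical), though newer releases may behave differently depending on sql_mode:

CREATE TABLE notes (
    body    VARCHAR(200),
    touched TIMESTAMP    -- the first TIMESTAMP column silently gets ON UPDATE CURRENT_TIMESTAMP behavior
);
UPDATE notes SET body = 'edited';    -- touched changes even though it was never mentioned
INSERT INTO notes (body, touched) VALUES ('x', '2005-01-42');
-- outside strict mode the bogus date is stored as a zero date, with only a warning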
Re: (Score:2)
Well, for a start, this bug [mysql.com] doesn't exactly inspire confidence.
Even less inspiring was this quote [blogspot.com] from the former founder that his "main reason for leaving was that I am not satisfied with the way the MySQL server has been developed, as can be seen on my previous blog post. In particular I would have like to see the server development to be moved to a true open development environment that would encourage outside participation and without any need of differentiation on the source code. Sun has been consider
Re: (Score:2)
They don't focus on MySQL - and I don't think the GP (or whatever it was) said they did - but just so it's clear, they do everything evenly between the three. But no, they do not do this for DB2 or PostgreSQL. I don't know how or why they chose those 3, but that is what they cover. Though I have to imagine much of this will, as in The Art of SQL, carry over to any RDBMS.
Re: (Score:2)
Wow. 35 million! That's heaps!
Is that the sum of rows in all tables? # of members? Transactions? User audit entries?
Re: (Score:2)
This study would have carried more weight if it had included PostgreSQL and IBM's DB2. These two databases do more serious work than MySQL although many believe MySQL is more widely deployed.
"Study"? This is a book review.
Thanks for getting the "WHAT ABOUT POSTGRES" comment that must accompany every Slashdot story submission that mentions MySQL out of the way early, though.
Re: (Score:2)
Suboptimal SQL procedures can be slow on any system.
Once I wrote a stored procedure as a first draft (quick and easy to code). It took 45 minutes to run...
The DBA optimized it using server tools and brought it down to about 30 minutes.
Then I went back, checked for all the bottlenecks and fixed them from longest to shortest. It was able to run in 20 seconds. Yeah, it took more lines and was by no means an elegant SQL call. But for a 13,500% speed improvement, let's put elegance to the wayside.
Re: (Score:2)
MySQL has always worked fine for me ... anecdotal I know, but a lot of coders I've met tend to have the mentality "I can throw whatever query I want at the system, and if it doesn't work, it's a DBA problem".
Thankfully I am both the programmer and DBA for our system, so I don't have to worry about a.n.other dumbass making a query that joins 27 tables with combinations of LEFT, RIGHT and INNER joins that end up running a WHERE clause of 1 million billion gazillion fufillion shabady-u-illion ... yen ... sorry
Re: (Score:2)
Dude, he goes for the most used and easily understood, not what is on the way out.
He wants to stay current with today's business model. I have yet to hear of a webfarm making available either of the dbs you mentioned... the 2 main ones available from any ISPs like GoDaddy etc. are MySQL and SQL Server from M$.
Problem is not the SQL writers..... (Score:5, Insightful)
But with management.
When I spent a few years as a DBA, it was common to be told not to work on a project any more as soon as it produced usable data. That means as soon as you have a working prototype you are required to drop it and start the next project. Many times, after you get a working prototype, you would then go back and refine it so that it's faster and uses fewer resources.
Management is to blame. Unrealistic deadlines for DBAs, and if you are honest with them and report that you have data, they think it's good to go. I actually got written up once for taking one of the old procedures we had and rewriting it so that it worked much faster, and the resource hog it was was reduced to the point that others could use the DB while it ran. I was told I was wasting time.
Re:Problem is not the SQL writers..... (Score:4, Interesting)
Agreed COMPLETELY.
I work as a DBA as well, and the moment the prototype produces reliable data, it's immediately off to the next project. The only time I ever get to go back and tweak code is if some variable that was not thought of in the original design, or a bug, forces me back into the code.
I've got some code out there that I know beyond a shadow of a doubt is horribly inefficient... but I'm not given the time and opportunity to correct that.
Re: (Score:2)
Oh, please. If there are technical or financial consequences, then they are capable of being expressed in a spreadsheet that an MBA can understand. The weak link here is communication skills. Which, sad to say, are generally worth more in the marketplace than being able to solve partial differential equations, because they are rarer.
Re: (Score:2)
compare SQL to Code (Score:3)
btw, how come tech books don't come on tape/cd?
Re:compare SQL to Code (Score:5, Informative)
On a REAL database, like Oracle, the query optimizer will factor common expressions, eliminate unused branches, and in general execute your SQL in completely different manner than what you write.
Doing things in a "relational calculus" way, where you specify what to be done (i.e., with SQL) is superior to doing things in a "relational algebra" way (individual statements correlated by procedure code).
I've written some queries that were a dozen pages long for an individual statement, mostly because I use a Python-like style where the indentation specifies the structure, so you can string together monstrous subexpressions and not get confused. The DBA was like "you're not running that on MY box," but it ran super fast because of the query optimizer.
That's what I mean when I say MySQL is a toy compared to DB2, Oracle, or SQL Server. The query optimizer.
Re:compare SQL to Code (Score:4, Informative)
Chapter 5 - "Statement Refactoring" includes, according to the author, "...how to analyze SQL statements so as to turn the optimizer into your friend, not your foe." It's solid and probably points people towards writing things that work just as you describe.
Re: (Score:2)
btw, how come tech books don't come on tape/cd?
Only on Slashdot do you find someone who wants to listen to Natalie Portman talk SQL.
Joking aside, I doubt I'd find tech books on tape all that useful. Without diagrams, code examples, etc., you lose quite a bit of the value, IMO.
Oh if only... (Score:4, Funny)
Only on Slashdot do you find someone who wants to listen to Natalie Portman talk SQL.
SELECT * FROM Memes WHERE Reference LIKE '%Portman%' AND Reference LIKE '%naked%' AND Reference LIKE '%petrified%' ORDER BY SlashdotCommentScore, HotGrits;
27,154,947 Rows Returned.
Re: (Score:2)
Because you used '%string%', you forced the query to do a full table scan, i.e. it couldn't use any indexes, and then you sorted 27 million rows, forcing a filesort rather than an in-memory sort.
Remind me never to offer you a job as a DBA.
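Joke table aside, the index-friendly rewrite would look something like this (purely hypothetical, since Memes and its columns only exist in the post above):

CREATE FULLTEXT INDEX idx_memes_reference ON Memes (Reference);

SELECT * FROM Memes
WHERE MATCH(Reference) AGAINST('+Portman +naked +petrified' IN BOOLEAN MODE)
ORDER BY SlashdotCommentScore
LIMIT 100;    -- and a LIMIT keeps the sort from shuffling all 27 million rows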
Re: (Score:2)
I think the emphasis here is on writing the best SQL so you can write the best code. Removing unneeded iteration on either side can be a huge benefit. Repeated calls to a database can be expensive - in numerous ways - so I think they aim the reader towards a state where more work is done with fewer trips.
I think that it is also safe to say that many of the tools they give for testing performance would be very useful in nailing down just where the issue is. It's not an issue of finding what works b
Re: (Score:2)
If you are competent you can accomplish 99-100% of your business logic in the database.
Re: (Score:2)
The problem is that 'in code' the modules driving the business rules are far away from the data driving the rules, requiring you to spend an inordinate amount of time ferrying the data back and forth from the db to the code modules.
The real problem is that 'everything is a nail' and most developers do not have a solid handle on data design and database coding concepts.
Re: (Score:3, Insightful)
That's possibly a VERY bad idea. Even with small queries it's possible to create huge intermediate result tables and loading all that data into your application will make it crash. And if that doesn't happen, breaking a complex SQL statement into separate parts robs the SQL query optimizer of useful information. Your code limits the choices for an optimum evaluation plan, but how close is your code to the optimum plan that can be achieved?
Having said that, the optimizers can't work magic. I sometimes split
Performance Tuning is Not Refactoring (Score:5, Insightful)
I have the misfortune of working with a database that is primarily a couple of tables with key-value pairs (not a traditional database model).
There is only one column that can be indexed, and it has to be done with a full text index.
Every once in a while, there is a discussion about moving this mess to something more traditional. I was excited to read the review on this book, but as I read through the review, it seemed like this was more of a "performance tuning guide".
Re-factoring a database is a lot more involved - changing tables, stored procedures, maybe even the underlying database.
The term Database Application is fuzzy and poorly defined. Is it the front end? The stored procedures? The database tables? I would consider a database application to be any part of the code that stores, retrieves, removes or modifies data stored in a database, and the entities that have been defined to store that data.
Using that definition, this book is about tuning, not refactoring.
Re: (Score:2)
It discusses how to change client code - which is definitely not database tuning. There are database tuning techniques involved, but really it is much more than that. I tried to express that in the review but maybe I didn't do as well as I would have liked.
Here is how Wikipedia defines refactoring, "Code refactoring is the process of changing a computer program's internal structure without modifying its external functional behavior or existing functionality. This is usually done to improve externa
Re: (Score:2)
Fair enough. Tuning is a fuzzy term as well. Tuning can imply the things you've mentioned (indexes, parameters, hardware, etc.), but it can also imply hints, views, etc., which could technically be called refactoring.
You're right - it's hard to summarize a book - I'll take a flip through it the next time I'm at a bookstore that carries it.
Thanks for the reply.
Re: (Score:2)
Re-factoring a database is a lot more involved - changing tables, stored procedures, maybe even the underlying database.
This particular case is easy: identify one form of data storage (for instance, a customer record) in the tables, build a schema, change the clients, and migrate the data. If your codebase is murky, you will need a history of the queries that were run to identify all the users of this data. Follow this process for 1-3 items at a time until you empty the old table.
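As a sketch of the "build a schema and migrate" step (key-value table and column names hypothetical), one pass might look like:

CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    first_name  VARCHAR(50),
    last_name   VARCHAR(50),
    city        VARCHAR(50)
);

INSERT INTO customers (customer_id, first_name, last_name, city)
SELECT kv.entity_id,
       MAX(CASE WHEN kv.attr = 'first_name' THEN kv.val END),
       MAX(CASE WHEN kv.attr = 'last_name'  THEN kv.val END),
       MAX(CASE WHEN kv.attr = 'city'       THEN kv.val END)
FROM key_value_store kv
WHERE kv.entity_type = 'customer'
GROUP BY kv.entity_id;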
If you have a dev environment, pull the table apart and identify all the uses of the table, make a schema that supports them, and migrate your code to use the D
Re: (Score:2)
Then you may want to "try out" this book:
http://books.slashdot.org/article.pl?sid=06/06/07/1458232 [slashdot.org]
"Incidentally", it was written by... Stéphane Faroult. I've read it a few times, and used its lessons (there are no other words for it, really) to prove by figures that the redesign of the data model that I suggested could improve the performance by a factor of 10.
Before reading that book, I knew that the data model was broken, but couldn't explain why. This book told me why. We use Oracle, but the le
Re: (Score:2)
Ah, that nasty anti-pattern. Definitely not a good model.
If you need to store data that you can't define at design time, you're still better off creating an actual table, or set of tables. Yes it can make the rest of the application that references these values more complex, but it pays off in the long run.
Another nasty anti-pattern we have to deal with is the compound value stored in a single column. The worst example I've seen involves storing XML strings to define the state of an object. The only relia
Re: (Score:3, Informative)
Let me explain what I meant by "doesn't work any more". For example, a query that originally took 30 seconds now takes 3 hours. It still 'works' from a functional perspective, but from a business perspective it may have become completely useless. Refactoring can make it work again. I should have been clearer about what I meant there.
Re: (Score:2)
Very good comment. What's funny is that when I hear the term "refactoring" it usually means, "Let's clean up the code so that it's more compatible with new features we need to add".
If the code is cleaned up, then new features added, then it really is refactoring.
But I would suspect that new features are added while the "refactoring" is going on, and thus it's not really refactoring.
Server performance is important, but... (Score:4, Insightful)
I've found that the biggest issues with SQL applications (writing rich clients) are not in performance tuning of the server/SQL but in dealing with ORM issues, where to draw the line between how much work the client does vs. how much the server does, reconciling changes made in memory with data in tables, concurrency, database architecture designed to cope in advance with poorly thought-out requirements you're given, etc. I'd hope that a book on refactoring SQL *applications* would touch on these issues.
Shoot the developers (Score:2, Interesting)
PostgreSQL, DB2 or whatever: when application performance is poor, 99% of the time it is poor design from our (developers') side.
Developers without a good understanding of relational databases and SQL often produce problems that cannot be solved by indexes, or by throwing transistors at them.
It is so nice to see a "custom" made map implemented in the database using temporary tables instead of using the language's built-in map functionality :-)
Re: (Score:2, Informative)
I would cautiously agree that developers are less educated on SQL than they should be. The trouble seems to be in the different mind-sets required to solve the application problems. SQL is a declarative language, and it operates on your set of data as described. The host language, for want of a better term, is C or Java or some other iterative language, and it operates on each individual member of the set, stepwise. If you primarily think "stepwise" or algorithmically, you're already framing your
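A tiny illustration of the set-based mind-set (hypothetical tables, MySQL-style multi-table UPDATE): the whole batch is one statement rather than a fetch-and-update loop in the host language.

UPDATE accounts a
JOIN pending_adjustments adj ON adj.account_id = a.account_id
SET a.balance = a.balance + adj.amount;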
Re: (Score:2)
Custom made map implementation? You mean a view?
This is why I like our new company architecture model. Essentially the application developers only see a view. The view should closely match the "screen" of the application. The view is designed, and optimized by architects, and then, of course, all these views are available in our DBMS, managed by our DBA's. Never should developers make complex queries; they just query the view. Although some definitely have the touch, the majority routinely do th
Re: (Score:2)
Ahhh, no I'm not joking
A java front-end requires customer vital data: DOB, first/last name, city. This
Re: (Score:2, Insightful)
How do you debug, log, source control, deploy?
I found triggers to be a pain in the ass on the problems above
(Unless you just have one big customer and nothing else)
Re: (Score:2)
That is exactly how we do it. It also grants us a very distinct separation between the two, meaning in a few years we can roll out a brand spanking new UI and never have to touch the DB code.
1) Debugging/logging through Oracle is one of the largest pains in the arse I've ever undertaken. I don't even bother to debug code through Toad anymore; it's primarily just trace statements that, when "debug" mode is turned on, trace pretty much every single operation, much like Java stack tracing.
2) We manage
Re: (Score:3, Interesting)
You throw transistors at your developers? ;-)
Actually I agree with you. One of the big wins on the LedgerSMB project was the approach of moving all the main queries into user-defined functions (and then using them as named queries). One of the nice things about this approach is that DBA types (like myself) can address performance when it crops up rather than dealing with the ABSOLUTELY horrendous approaches found in the SQL-Ledger codebase..... (Assembling queries as little bits of text strings, throwing
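For those who haven't seen the style, the named-query idea looks roughly like this in PostgreSQL (function and table names hypothetical); the application then just calls the function instead of assembling SQL strings:

CREATE OR REPLACE FUNCTION customer__get(integer)
RETURNS SETOF customers AS $$
    SELECT * FROM customers WHERE customer_id = $1;
$$ LANGUAGE sql STABLE;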
Re: (Score:2)
You throw transistors at your developers? ;-)
Yeah. I started with BC10s and 2N7000s, but it wasn't helping much. I found to get results, it has to be one of these [webpublications.com.au].
Re: (Score:2)
Yes totally agree.
As a developer I used to have a very poor understanding of basic things like indexing columns and optimizing SQL queries.
A significant fraction of the workload was wrongly put on the application's shoulders. Or worse, multiple queries were launched instead of just one. Man, how wrong I was...
I remember a case where a query used to take "minutes"; with the proper optimizations it became a mere second or so.
Now I've got some sort of basic rules I try to respect.
Re: (Score:2)
Developers without good understanding of Relational Databases and SQL often produce problems that cannot be solved by indexes, or throwing transistors at them.
It is so nice to see a "custom" made map implemented in the database using temporary tables instead of using the language's built-in map functionality :-)
Sorting arrays using the database gets extra points (no kidding, I have seen this!)
Dude, that isn't a lack of understanding regarding SQL. That's complete, utter incompetence.
Re: (Score:2)
Everyone's always down on bubble sort. It's a shame- bubble sorting is easy to write and it's actually the best sorting method to use when your list has ten elements or less.
Plus by the time it has a million elements you'll probably be on your next job.
Re: (Score:2)
When you are sorting 10 items or less, it's kind of academic what you use. The choice between 10ms and 11ms is an easy one to make.
But thinking that "you'll probably be on your next job by the time it has a million elements" isn't a terribly good work ethic. What if the requirements explode while you are still on that job ... you're gonna look like a twat then, after your 10ms becomes 10 minutes and the boss is leaning on you for a fix.
Anyway, swap sort FTW ... also easy to code, and way more efficient for
Use views (Score:3, Insightful)
I think it's usually best to have views (whether with rows that are the result of code, or with a pure 'select' definition, or materialized ones) define what your application 'sees', so that you can always change the underlying data structure. That way refactoring becomes a bit easier.
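A minimal sketch of what I mean, with hypothetical tables; the application only ever queries the view, so the tables behind it can be split or renamed later:

CREATE VIEW customer_overview AS
SELECT c.customer_id, c.name, a.city, a.postal_code
FROM customers c
JOIN addresses a ON a.customer_id = c.customer_id;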
Re: (Score:2)
I do something similar by using a 'facade' pattern to assemble several modules' output into one stored procedure.
Re: (Score:2)
I have not found this to be true. It sounds good in theory, but a view cannot be optimized in some cases and, depending on the architecture, can also have conversion problems that simply do not exist otherwise.
If you are working at the problem strictly from the point of view of an application, oftentimes your perspective is correct, but if you're looking at raw performance issues, and outright bugs, sometimes views can have a significant performance impact. Also, views often only grab subsets of data, and whe
Re: (Score:2)
I know what you're saying; that's why I explicitly included views that are the result of code (can be done in postgres and oracle) and materialized views (oracle). Especially materialized views can make you forget about performance issues; the data is just available as you want it to be as if it were a proper table, indexed and all, and if your original data has been abstracted and normalized correctly, you don't have the headache of pushing hacks into your original data purely for performance reasons eith
Re: (Score:3, Interesting)
I have to agree. In DB theory we learned that you should normalize your data for a good database design. However, materialized views can give HUGE performance gains by eliminating multistep joins between tables. You can't build custom indexes for queries that have these joins when the index condition is not in adjacent tables, and you always have to deal with large intermediate results.
If the app is read-only and performance is critical, the best strategy is to use materialized views built from norma
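An Oracle-flavored sketch of the idea (table names hypothetical), refreshed on demand from the normalized tables so reads never pay for the join:

CREATE MATERIALIZED VIEW order_totals_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT o.customer_id, SUM(oi.qty * oi.unit_price) AS total_spent
FROM orders o
JOIN order_items oi ON oi.order_id = o.order_id
GROUP BY o.customer_id;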
Re: (Score:2)
Normalize ... how I hate that word, as so many people who are obsessed with normalization tend to lack basic common sense, and can't see beyond next week in terms of flexibility and expandability of the system.
I remember one fix I had to do was on a simple address table. The original guy had implemented ZIP codes as an integer, because "everyone knows that US ZIP codes are numeric, and 5 digits maximum, right ?". Hey, that's such an efficient use of space, it must be good ?
Then we had to expand our list of
Re: (Score:2)
1) That's nothing to do with normalization.
2) So you had to pour the data from one column over into a temporary varchar one, delete the original column and then rename the temporary one to the original name? Poor you. Look how they make you work.
Re: (Score:2)
1) Yes, I KNOW it has nothing to do with normalization, but too many people equate normalization with "the most efficient use of a suitable datatype for the data it will contain", without thinking that maybe next week, the data might change.
2) Yes, and with a bit of forethought, the column probably would have been varchar in the first place. See point 1) for clarification. Oh, and try doing that on InnoDB on a live system, and come back to me in 5 hours when it's finished rebuilding the indexes, and your bo
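For what it's worth, the conversion being described is roughly this (MySQL-flavored, names hypothetical), and on a big InnoDB table each ALTER really is a full rebuild:

ALTER TABLE addresses ADD COLUMN zip_text VARCHAR(10);
UPDATE addresses SET zip_text = LPAD(zip_code, 5, '0');    -- restore the leading zeros the INT column lost
ALTER TABLE addresses DROP COLUMN zip_code;
ALTER TABLE addresses CHANGE zip_text zip_code VARCHAR(10);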
Use views as transition states... (Score:2)
I think it's usually best to have views (whether with rows that are the result of code, or with a pure 'select' definition, or materialized ones) define what your application 'sees', so that you can always change the underlying datastructure.
As another poster noted, I don't see much benefit from that over just changing what the app sees. At least not long term.
I agree with Joel On Software Joel, in that you should set up views as you stated to present the best view possible for the application - then chang
Anything by Stephane Faroult... (Score:2)
...is probably a good read. He has a lively writing style; it kind of reminds me of Bertrand Meyer's "Object Oriented Software Construction". Anyhow, I've got both this book and Faroult's "The Art of SQL"... both are excellent.
SQL Tuning is also pretty good (Score:2)
This, of course, only works if the rest of the database setup is more or less ok ;-)
Re: (Score:2)
Sometimes problems are remarkably difficult to address, especially in PL/(Pg)?sql environments.
Let me give you an example.
I was doing a rewrite of a bulk-processing portion of an application and we were running into performance issues. Eventually we ended up talking to some REAL PostgreSQL experts, reviewed the affected functions, etc. The function had a few warning issues because we had to pass really large (5000x5) text arrays into it.
At one point we all discovered that we could create
Re: (Score:2)
Just to note the expert that was consulted actually gave us the correct answer at first but then we all went down the wrong path. :-)
Much better ways to do complex sql (Score:3, Informative)
I have developed a design pattern using in memory data table objects that can satisfy the most complex requirements without using any dynamic code.
It also allows the queries and business logic to be broken into discreet chunks that are easily optimized and debugged.
Re: (Score:2)
And let me guess. This margin is too narrow to contain a description of your magic design pattern?
Re: (Score:2)
Dude, he said it was discreet.
Bad Applications (Score:2)
Recently I was asked to improve the performance of a MySQL-based PHP web application. After turning on query caching and tuning the settings, I was left looking at the queries involved. It turned out the application was really the problem, not the queries. Just loading the main page involved several hundred queries. For example, settings were saved in a table. Instead of loading all of the settings with a single query, it grabbed them one at a time. It wasn't like they had a few hundred variables
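Roughly, the fix was to replace the per-setting round trips with one query and cache the result in the application (table name hypothetical):

-- what the app was doing, a few hundred times per page:
--   SELECT value FROM settings WHERE name = 'site_title';
--   SELECT value FROM settings WHERE name = 'theme';
--   ...
-- one round trip instead:
SELECT name, value FROM settings;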
Here's an easy solution... (Score:2)
I've found that the easiest way to make applications run great is to give the developers systems that are at least 2 generations older than what will be used in production (with the latest software, patches, drivers, etc)...
Then, hold them to making the application perform as you want it to, on that hardware. They don't get paid (their final lump amount) until said application performs as you'd like on the 2 gen old hardware.
Then, when you migrate to the production hardware, it's quite a bit faster, and do
I Only Know Oracle (Score:5, Informative)
1) You have to establish a general rule of thumb for each production db whereby any one sql that consumes more than x% of the db resources needs to be tuned. The value of x varies from db to db. If it cannot be tuned below x% then it needs to be refactored.
2) Learn to use stored outlines. If you can get them to work they will save your ass and make you look like a total hero.
3) Never turn your back on the optimizer. Really. Even for simple queries, even with the deepest stats.
4) Bind variables are a necessity for high-repetition SQL. Bind variables are something you might want to avoid for report queries where the optimal plan depends on the user input values. This is because a statement's plan is cached the first time it is parsed, and if you use bind variables then the first plan you get is the plan you will always get, so long as the SQL remains in the shared pool. (There's a small sketch of this after the list.)
(You can sometimes work around this issue by turning off bind variable peeking, but consider doing it on a per-session basis instead of changing it system-wide. Scary!)
5) Nowadays a 32GB SGA is no big thing. Get yourselves a ton o' RAM and set up a keep pool in the buffer cache to pin your most important reporting indexes and tables. Partition your big reporting tables and maintain a sliding window of the most recent partition(s) in the keep pool.
6) No sorting to-disk. Ever. If you cannot let the session have the PGA it needs to sort the query in memory then the SQL needs to be "refactored".
7) Once you have eliminated most of the physical disk reads it then becomes all about the buffer gets (BG's). When disk reads are low the high-logical-BG queries immediately become the new top SQL. This is because logical BG's are all CPU and your db is now cpu-bound, which is right where you want it. So from this point it's back to item #1 and we prune and tune (thanks KG!)
I could go on all day. Perhaps I should write a book?
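Here is the small sketch of point 4, SQL*Plus style with hypothetical names; the literal form hard-parses a new cursor for every value, while the bind form is parsed once and reused from the shared pool:

VARIABLE cust_id NUMBER
EXEC :cust_id := 42

SELECT order_id, total
FROM   orders
WHERE  customer_id = :cust_id;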
PostgreSQL and the book (Score:2)
I don't know. Are the ideas worth testing on PostgreSQL? I am willing to bet they are.
Re: (Score:2)
see http://www.sqlonrails.org/ [sqlonrails.org] for why you are wrong. SOR is a great app framework!
Actually, there are some things that SQL is very, very good at, but you are right that people often do the wrong sorts of things in it. My favorite approach is to use SQL stored procs to do named queries and then put the processing of the results of those queries into the middleware or thick-client level.