Refactoring SQL Applications

stoolpigeon writes "My past as a DBA probably makes me a bit biased, but the reaction I've seen most often when a database application isn't performing as well as desired is to focus on the database side of things. The search for a solution usually centers on tuning db parameters, building (or removing) indexes and, if the budget allows, throwing hardware at the problem. In their new work, Refactoring SQL Applications, Faroult and L'Hermite bring a much wider range of options to the table. There is a lot in this little book for the developer charged with fixing an existing application, and a lot of good information that could save one from making a number of headache-inducing mistakes on a new application." Keep reading for the rest of JR's review.
Refactoring SQL Applications
author: Stephane Faroult with Pascal L'Hermite
pages: 293
publisher: O'Reilly Media, Inc.
rating: 9/10
reviewer: JR Peck
ISBN: 978-0-596-51497-6
summary: Good for a developer charged with fixing an existing application.
The book is divided into eight chapters; the first two deal with how to approach a problematic application in general. In the preface the authors say, "This book tries to take a realistic and honest view of the improvement of applications with a strong SQL component, and to define a rational framework for tactical maneuvers." I found this to be true throughout the entire book and was impressed by how well the examples, suggestions and problems echoed my real-life experience. This book is first and foremost practical. There is really almost nothing in the book that does not come immediately to bear upon the problem at hand. I've seen others do a lot less with many more pages.

The examples and benchmarks are compared across three popular relational database management systems: MySQL, Oracle RDBMS and Microsoft SQL Server. I thought this brought up a couple of interesting issues that are not directly addressed in the book. First, the authors are talking about how to improve performance, not comparing platforms, but the numbers are there and may be of some interest to people who would like to compare them. Secondly, I've met a number of people over the years who get quite animated about insisting that a good DBA does not need to know any particular product, just the fundamentals. I think Faroult and L'Hermite put this idea to rest, though unintentionally. In order to discuss how to understand what exactly is happening and how best to remedy issues, they show that it is necessary to have an understanding of platform-specific issues and tools. This is true on two levels. The first is that each platform's built-in tools live in different places and are used differently. The second is that what works for one platform does not necessarily work for another.

For example, Chapter Two, "Sanity Checks," contains a section on parsing and bind variables. The authors compare performance when queries are hard coded, when a new prepared statement is created on each iteration (firm coded), and when a single prepared statement is reused with the parameter value changed on each iteration of a loop (soft coded). On Oracle and SQL Server, performance was poorest with hard coding, better with firm coding and best with soft coding. MySQL did best with soft coding as well, but actually took a performance hit moving from hard coded to firm coded. This had to do with differences in how the MySQL server caches statements. The authors took the time to rewrite their code from Java to C in order to ensure that the behavior was not an artifact of the language or the driver. This is not to say that one can ignore RDBMS and SQL fundamentals, but rather that top performance requires knowledge of platform-specific issues. This comes out again when dealing with optimizers.
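To make the three variants concrete, here is a minimal sketch of hard versus soft coding in MySQL's own SQL syntax (the book's examples are in Java and PHP; the orders table and customer_id column below are hypothetical):

    -- Hard coded: the literal is part of the statement text, so every
    -- new value means a brand-new statement to parse.
    SELECT order_id, amount FROM orders WHERE customer_id = 42;

    -- Soft coded: parse once, execute many times with a bind variable.
    PREPARE get_orders FROM 'SELECT order_id, amount FROM orders WHERE customer_id = ?';
    SET @cust = 42;
    EXECUTE get_orders USING @cust;
    SET @cust = 43;
    EXECUTE get_orders USING @cust;
    DEALLOCATE PREPARE get_orders;

    -- "Firm coded" corresponds to running the PREPARE/DEALLOCATE pair
    -- inside the loop, paying the parse cost on every iteration.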

With that in mind, the authors recommend that readers have a solid understanding of SQL and some programming language. Most examples are SQL, and code is given in Java and PHP. There are also examples illustrating SQL extensions, showing procedures, functions, etc. written for all three RDBMS products covered. The authors stick primarily to standard SQL but do note, and at times show, how things will look in each of the other databases. This information is current and reflects the most recent versions of each product.

The fourth chapter, "Testing Framework," is incredibly useful. The authors cover generating test data and then checking the correctness of outcomes through comparison. This is really useful information for anyone working to improve an application, or writing one for the first time. I think it is also a large part of why this book could appeal to new and experienced developers alike, whether they are working on existing or brand new applications. Only the most experienced developer is likely to find nothing new here, or at least no new way to approach a problem. New developers can learn quite a bit and avoid some bad habits and assumptions without having to gain that information the hard way. The tools for generating random data in large quantities and for comparing results will provide excellent opportunities for learning and real-world application.
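As a taste of the kind of technique that chapter covers, here is one common way to generate volume test data and to fingerprint a result set so a refactored query can be checked against the original. This is a sketch in MySQL syntax, not the authors' actual scripts; the numbers helper table and the customers schema are hypothetical:

    -- Assumes a pre-filled helper table numbers(n) holding 1..100000.
    INSERT INTO customers (id, name, signup_date)
    SELECT n,
           CONCAT('customer_', n),
           DATE_SUB(CURRENT_DATE, INTERVAL FLOOR(RAND() * 1000) DAY)
    FROM numbers;

    -- Fingerprint the result: the same row count and checksum before and
    -- after a rewrite is strong evidence the two queries return the same data.
    SELECT COUNT(*) AS row_count,
           SUM(CRC32(CONCAT_WS('|', id, name, signup_date))) AS checksum
    FROM customers;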

The next three chapters cover specific types of issues and how to improve performance. The last chapter then quickly describes a scenario of how the authors step into a real-world situation and start to attack a problem. This is followed by two appendices: the first contains scripts and samples, the second covers tools available to help find and resolve issues. Some of the authors' tools depend on SQLite, which is discussed briefly in the chapter on creating test data.

I think it has been a while since I've read a book that could have such a rapid return on investment. There are many suggestions and insights that should enable anyone to squeeze better performance out of just about any database application. While the focus is on the application side, there is plenty that requires understanding and work on the database side as well. There is discussion of the parameters and hardware I mentioned at the start of this review, but rather than being the only options, they are one part of a much larger, systematic approach.

The authors relate that refactoring for this type of application often comes into play when something that used to work does not work any more. This can lead to an environment of high pressure and emotion. The desire for a rapid resolution can lead to casting about in the dark for a quick fix, or to a feeling that cost is no longer as significant since a fix must be had now. The authors argue, and I agree, that this is exactly when a rational, disciplined process of tracking down and fixing issues is the most valuable. The issue, of course, is that someone in a position to do something must have the ability to take that approach. This book will get one well on the way to being in that place. Of course it can't turn a brand new developer or DBA into an expert. Much like a degree, it can give them fundamental tools that will allow them to take full advantage of experience as it comes, rather than just crashing and burning.

If I could, I'd have any developer on a database-centric application read this, and DBAs as well. There is a lot here for both sides to learn about just how much they depend upon and impact one another. This may be an idealistic dream, especially for larger shops where the relationship between those two groups is often adversarial, but I think such an approach could only make life better for everyone involved. For anyone looking to enter this world on either side of the DBA/developer equation, this may make a nice addition to their education. For the individual wearing both hats, it could be a lifesaver. In this small book they will learn many things to look out for, as well as gain exposure to some of the similarities and differences in what are arguably the top three relational database management systems right now.

You can purchase Refactoring SQL Applications from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


Comments Filter:
  • by bogaboga ( 793279 ) on Wednesday March 11, 2009 @01:11PM (#27154387)

    This study would have carried more weight if it had included PostgreSQL and IBM's DB2. These two databases do more serious work than MySQL although many believe MySQL is more widely deployed.

    • by MBGMorden ( 803437 ) on Wednesday March 11, 2009 @01:22PM (#27154571)

      I have to agree. Maybe I'm unfairly biased and it's just past performance (both relayed by others and experienced by myself), but I don't trust MySQL for anything more complex than a backend for a simple website. If I want a reliable open source database for a serious project, I'd go for PostgreSQL in a heartbeat.

      I actually was looking at KnowledgeTree recently as a possible solution for a document management system for our organization (we have a clunky old system and some others here are pushing SharePoint as a replacement . . .), but its use of MySQL pretty much stopped that dead in its tracks. I know they'd prefer MS SQL Server as an option here. I could *probably* talk with them if it supported PgSQL. But MySQL isn't even an option to discuss for something this important.

      • by vlm ( 69642 ) on Wednesday March 11, 2009 @02:10PM (#27155363)

        You forgot the other often-repeated/never-researched traditional Slashdot claims seen in every MySQL comment section, such as that MySQL doesn't do transactions and doesn't do replication, both of which are necessary for each and every database install ever done, past, present or future... (Of course it has had those features for about half a decade, maybe more, certainly since around the 4.0 range, but that never slows down the folks who repeat those claims.)

        Then there are the recurring claims that MySQL is useless because it doesn't have some bizarre feature that you might personally think is useful, therefore any database without it is useless for everyone doing anything, like perhaps direct file importation of COBOL ISAM punch cards, or an internal database representation for complex four-dimensional vectors. You know, the stuff everyone uses.

        Then there are the posts explaining how a failing hard drive on an old gateway 2000 vaporized the filesystem and/or bad ram caused endless kernel lockups, and the mysql software was running on that bad hardware, and correlation always implies causation, so mysql must be bad too.

        Finally I expect several posts about how they found an obscure bug in the beta 3.23 version back around eight years ago, and therefore they'll never use it again, because that is the only software that has ever had a bug.

        • Well... I have been using PostgreSQL since back when MySQL didn't do transactions... I still don't trust MySQL's transactions or the new strict mode. At the same time, I have watched PostgreSQL do an absolutely terrific job of running horrendously written queries optimally. Here are two criticisms I have about using MySQL for real application work, especially when you are distributing that application (and thus have little control over how users set up their dbs):

          1) MySQL transactions are built into the table engines, and by default (last I checked, and meaning you don't install InnoDB, etc.), the tables will not be transactional. This means that if you are building an in-house app, you can trust it more than you can if you are distributing your software. In short, if you are distributing software you can't guarantee that it is running on a system with transactions without a great deal of headache. The same goes for referential integrity enforcement.

          2) Strict mode can be turned off by any application. This means that the more recent data integrity checks cannot be relied upon. This is an issue for both in-house and distributed software, because it adds quite a bit of overhead to the QA process internally, and can add support headaches in software for distribution.

          MySQL is a good db for single-app databases, where data integrity is not a tremendous issue or where you are deploying a separate MySQL instance on a different port. It is quite a bit worse than PostgreSQL for anything else.

          • Re: (Score:2, Interesting)

            by Leolo ( 568145 )

            I write applications that use MySQL that get installed on servers at the client's premises. I'm also the one doing the installation and MySQL config.

            Responding to your points:

            1. If the client were to insist on handling the MySQL part, and screwed it up, it would cease to be my problem. Or rather, I would point at the installation and tell them where they fucked up;
            2. About turning off strict mode. If your applications are turning off strict mode, then don't be surprised if you break data integrity. I
            • Re: (Score:3, Interesting)

              by einhverfr ( 238914 )

              If the client were to insist on handling the MySQL part, and screwed it up, it would cease to be my problem. Or rather, I would point at the installation and tell them where they fucked up;

              Ok, so your point is that this is fine as long as you install MySQL, make sure that Innodb, etc. is installed, etc. Fine. I don't want that responsibility.

              About turning off strict mode. If your applications are turning off strict mode, then don't be surprised if you break data integrity. If your clients are writing apps t

            • MySQL is a good db for single-app databases, where data integrity is not a tremendous issue or where you are deploying a separate MySQL instance on a different port. It is quite a bit worse than PostgreSQL for anything else.

              From your description you are using MySQL for a single-app database where you run a dedicated instance of MySQL for your app. That is not the usage case I was describing, which is a central RDBMS serving out the same data to a myriad of different applications. If you are trying to go

          • Re: (Score:3, Interesting)

            by julesh ( 229690 )

            1) MySQL transactions are built into the table engines, and by default (last I checked, and meaning you don't install innodb, etc), the tables will not be transactional. This means that if you are building an inhouse app, you can trust it more than you can if you are distributing your software. In short, if you are distributing software you can't guarantee that it is running on a system with transactions without a great deal of headache........ The same goes for referential integrity enforcement.

            It's easy e

            • Re: (Score:3, Interesting)

              by einhverfr ( 238914 )

              On the whole, this is probably a good thing. If the application is under your control, you can use whichever mode you want. If you're relying on somebody else's application, forcing it to use strict mode when it wasn't written for this environment could introduce subtle bugs. Now, if you were to argue that the _existence_ of these different modes of operation was an issue, then I'd probably agree. But given the existence of the modes (and that's unfortunately a necessity for backwards compatibility reasons)

        • by MBGMorden ( 803437 ) on Wednesday March 11, 2009 @02:37PM (#27155783)

          I'm not holding anything against it in that regard. The simple fact is that I've had two fairly low-traffic MySQL databases become corrupted beyond the point of being usable within the last 3 years. The hardware wasn't at fault here (nor was it old or outdated). Now luckily, this was for something that, while important, wasn't "OMG somebody's head's gonna roll!" critical (namely, the quarantine database for amavisd-new on a mail filter, and later an internal message/call tracking system that we'd written).

          For stuff like that, where you can stand to lose the data or, at worst, roll to a backup, MySQL has its uses. However, our document management system, for example, contains tons of documents that we must legally keep archived and available (government institution - we have to have it available for FOIA requests). We also have, for instance, land appraisal software keeping databases of property taxing information that we need to bill at the end of the year (with about $50 million annually riding on that - if we don't get those bills out, our whole budget shuts down). I just don't trust that type of thing to MySQL. Not to mention that the "nobody ever got fired for buying Microsoft" mentality does kick in. If the database fails and I have to restore from backup, then if it's MS SQL Server or Oracle the bosses will usually not fault me (as long as I have good backups in place, which I do). If something that critical fails and I used MySQL on the project, I very well might be looking for a new job.

          • Re: (Score:3, Interesting)

            by julesh ( 229690 )

            I'm not holding anything against it in that regard. The simple fact is that I've had two fairly low-traffic MySQL databases become corrupted beyond the point of being usable within the last 3 years. The hardware wasn't at fault here (nor was it old or outdated).

            I'm not sure what you're doing wrong here, but I think many of us have been running a lot more MySQL databases than that and never experienced corruption. Myself, I have been maintaining on average about 20 MySQL instances spread across 3 different ser

        • by Splab ( 574204 )

          Seriously, if you believe MySQL to be safe, you have no business with database applications. Google it - heck, just read the linked sites from sibling posts.

          Transactions are only supported by specific engines and even when you think you are running the right engine MySQL might surprise you (usually when you need a rollback the most). Read up on it, your data is being corrupted!

        • While I mostly agree with your points, I definitely don't understand why MySQL still creates tables without foreign key support if you don't add the silly "engine=innodb" keyword. Please don't reply with "backward compatibility for broken applications/schemas"...
          ...
          Server version: 5.0.67-0ubuntu6 (Ubuntu)

          mysql> create table testtable ( xx integer);
          Query OK, 0 rows affected (0.00 sec)

          mysql> show create table testtable;
          ... CREATE TABLE testtable ( ... ) ENGINE=MyISAM DEFAULT CHARSET=latin1 |

      • by Tiro ( 19535 )
        I have the same qualms about deploying Wordpress, which requires MySQL. Not to mention that the MySQL commercial license costs $600.
      • Oddly enough, I am also looking at KnowledgeTree. Very inexpensive and well put together system. What exactly is your problem with mysql in this instance?

        I'd also be interested in hearing more about your view of KnowledgeTree as a whole. I was very impressed with its Office integration and overall ease of use compared with more expensive products.

        • I've had a few MySQL databases become corrupted in production systems. I've not had any corruption in either MS SQL Server or PostgreSQL databases that have been in use longer and are used much more heavily. More or less just a case of "once bitten twice shy".

          As to KnowledgeTree specifically, I didn't use it extensively, but it did look promising. I did have some minor issues defining permissions on certain items, but that was probably just a matter of learning curve. The only downside I'd state was tha

          • Yeah, the speed seems to be the only thing I found lacking. I attributed that to the workstation I have this installed on as well. If it turns out that it runs better on a server, we will likely begin using it.

            Do you have another system that you are looking at?

    • by hondo77 ( 324058 )
      The book is not a study, it is trying to teach refactoring concepts with the idea that you can take and apply them to any SQL project. Surely you can figure out how to apply an Oracle or MySQL example to PostgreSQL, yes?
    • This study would have carried more weight if it had included PostgreSQL and IBM's DB2. These two databases do more serious work than MySQL although many believe MySQL is more widely deployed.

      "Study"? This is a book review.

      Thanks for getting the "WHAT ABOUT POSTGRES" comment that must accompany every Slashdot story submission that mentions MySQL out of the way early, though.

    • Suboptimal SQL procedures can be slow on any system.

      Once I wrote a stored procedure as a first draft (quick and easy to code). It took 45 minutes to run...

      The DBA optimized it using server tools and brought it down to about 30 minutes.

      Then I went back, checked for all the bottlenecks, and fixed them from longest to shortest. It was able to run in 20 seconds. Yes, it took more lines and was by no means an elegant SQL call, but for a 13,500% speed improvement, let's set elegance aside.

      • MySQL has always worked fine for me ... anecdotal I know, but a lot of coders I've met tend to have the mentality "I can throw whatever query I want at the system, and if it doesn't work, it's a DBA problem".

        Thankfully I am both the programmer and DBA for our system, so I don't have to worry about a.n.other dumbass making a query that joins 27 tables with combinations of LEFT, RIGHT and INNER joins that end up running a WHERE clause of 1 million billion gazillion fufillion shabady-u-illion ... yen ... sorry

    • Dude, he goes for the most used and easily understood, not what is on the way out.
      He wants to stay current with today's business model. I have yet to hear of a webfarm making available either of the dbs you mentioned... the two main ones available from ISPs like GoDaddy etc. are MySQL and SQL Server from M$.

  • by Lumpy ( 12016 ) on Wednesday March 11, 2009 @01:16PM (#27154471) Homepage

    But with management.

    When I spent a few years as a DBA, it was common to be told not to work on a project any more as soon as it produced usable data. That means as soon as you have a working prototype you are required to drop it and start the next project, even though many times, after you get a working prototype, you would go back and refine it so that it's faster and uses fewer resources.

    Management is to blame: unrealistic deadlines for DBAs, and if you are honest with them and report that you have data, they think it's good to go. I actually got written up once for taking one of our old procedures and rewriting it so that it worked much faster, and the resource hog it was was reduced to the point that others could use the DB while it ran. I was told I was wasting time.

    • by Samalie ( 1016193 ) on Wednesday March 11, 2009 @01:26PM (#27154663)

      Agreed COMPLETELY.

      I work as a DBA as well, and the moment the prototype produces reliable data, it's immediately off to the next project. The only time I ever get to go back and tweak code is if some variable that was not thought of in the original design, or a bug, forces me back into the code.

      I've got some code out there that I know beyond a shadow of a doubt is horribly inefficient... but I'm not given the time and opportunity to correct it.

    • by CodeBuster ( 516420 ) on Wednesday March 11, 2009 @02:43PM (#27155897)
      This experience speaks to a more general issue that I have with non-technical MBA types who tend to reduce everything to a dollars and cents issue without fully appreciating or even being able to fully appreciate either the technical OR the financial consequences of their decisions. They assume that their MBA piece-of-paper mail-order diploma makes them oh-so-much smarter than anyone else who doesn't have one, when in fact the smartest people tend to study mathematics, physics, engineering, other hard science, or even philosophy while the intellectual light-weights study social science and get their MBA. If anyone is actually a waste of time and resources then it is the middle management social climbers who produce a lot of hot air using the latest "management techniques" that they read about in a trade magazine on an airline flight or heard about at a conference held in a cheap hotel ballroom.
      • Oh, please. If there are technical or financial consequences, then they are capable of being expressed in a spreadsheet that an MBA can understand. The weak link here is communication skills. Which, sad to say, are generally worth more in the marketplace than being able to solve partial differential equations, because they are rarer.

         

    • by afidel ( 530433 )
      DBA time is expensive, but not nearly as expensive as Oracle licenses, so our DBA gets plenty of time to analyze and tune SQL where he can; but we are mostly a COTS house, we do minimal in-house development.
  • by trybywrench ( 584843 ) on Wednesday March 11, 2009 @01:20PM (#27154541)
    I'd like to see some work done on the balancing act of how much to do in code and how much to do in SQL. My coworker can put SQL statements together that, if printed on an 8.5x11, would fill the whole sheet if not run over. Me, on the other hand, I tend to break up huge SQL statements into a set of smaller ones and then use code to do some of the work that could possibly have been done in SQL. I don't have the time to find out what works best on my own, but I do have the time to read about it.

    btw, how come tech books don't come on tape/cd?
    • by Saint Stephen ( 19450 ) on Wednesday March 11, 2009 @01:42PM (#27154945) Homepage Journal

      On a REAL database, like Oracle, the query optimizer will factor common expressions, eliminate unused branches, and in general execute your SQL in a completely different manner from what you wrote.

      Doing things in a "relational calculus" way, where you specify what is to be done (i.e., with SQL), is superior to doing things in a "relational algebra" way (individual statements correlated by procedural code).

      I've written some queries that were a dozen pages long for an individual statement, mostly because I use a Python-like style where the indentation specifies the structure, and thus you can string together monstrous subexpressions and not get confused. The DBA was like "you're not running that on MY box," but it ran super fast because of the query optimizer.

      That's what I mean when I say MySql is a Toy, compared to DB/2, Oracle, or SQL Server. The query optimizer.
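      For instance (hypothetical schema, purely to illustrate the contrast): the single declarative statement below leaves join order, access paths and aggregation strategy entirely up to the optimizer, whereas fetching customers in application code and issuing one orders query per row nails all of those decisions down in advance.

      -- one statement: the optimizer picks the plan
      SELECT c.name, SUM(o.amount) AS total
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
      WHERE o.order_date >= DATE '2009-01-01'
      GROUP BY c.name;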

    • by OG ( 15008 )

      btw, how come tech books don't come on tape/cd?

      Only on Slashdot do you find someone who wants to listen to Natalie Portman talk SQL.

      Joking aside, I doubt I'd find tech books on tape all that useful. Without diagrams, code examples, etc., you lose quite a bit of the value, IMO.

      • by RulerOf ( 975607 ) on Wednesday March 11, 2009 @02:07PM (#27155325)

        Only on Slashdot do you find someone who wants to listen to Natalie Portman talk SQL.

        SELECT * FROM Memes WHERE Reference LIKE '%Portman%' AND Reference LIKE '%naked%' AND Reference LIKE '%petrified%' ORDER BY SlashdotCommentScore, HotGrits;

        27,154,947 Rows Returned.

          Because you used '%string%', you forced the query to do a full table scan, i.e. it couldn't use any indexes, and then you sorted 27 million rows, forcing a filesort rather than an in-memory sort.

          Remind me never to offer you a job as a DBA.

    • I think the emphasis here is on writing the best SQL so you can write the best code. Removing unneeded iteration on either side can be a huge benefit. Repeated calls to a database can be expensive - in numerous ways - so I think they aim the reader towards a state where more work is done with fewer trips.

      I think that it is also safe to say that many of the tools they give for testing performance would be very useful in nailing down just where the issue is. It's not an issue of finding what works b

    • If you are competent you can accomplish 99-100% of your business logic in the database.

    • Re: (Score:3, Insightful)

      by he-sk ( 103163 )

      That's possibly a VERY bad idea. Even with small queries it's possible to create huge intermediate result tables, and loading all that data into your application will make it crash. And if that doesn't happen, breaking a complex SQL statement into separate parts robs the SQL query optimizer of useful information. Your code limits the choices for an optimal evaluation plan, and how close is your code to the optimal plan that could have been achieved?

      Having said that, the optimizers can't work magic. I sometimes split

  • by puppetman ( 131489 ) on Wednesday March 11, 2009 @01:23PM (#27154589) Homepage

    I have the misfortune of working with a database that is primarily a couple of tables with key-value pairs (not a traditional database model).

    There is only one column that can be indexed, and it has to be done with a full text index.

    Every once in a while, there is a discussion about moving this mess to something more traditional. I was excited to see the review of this book, but as I read through it, this seemed to be more of a "performance tuning guide."

    Re-factoring a database is a lot more involved - changing tables, stored procedures, maybe even the underlying database.

    The term Database Application is fuzzy and poorly defined. Is it the front end? The stored procedures? The database tables? I would consider a database application to be any part of the code that stores, retrieves, removes or modifies data stored in a database, and the entities that have been defined to store that data.

    Using that definition, this book is about tuning, not refactoring.

    • It discusses how to change client code - which is definitely not database tuning. There are database tuning techniques involved, but really it is much more than that. I tried to express that in the review but maybe I didn't do as well as I would have liked.

      Here is how Wikipedia defines refactoring, "Code refactoring is the process of changing a computer program's internal structure without modifying its external functional behavior or existing functionality. This is usually done to improve externa

    • Re-factoring a database is a lot more involved - changing tables, stored procedures, maybe even the underlying database.

      This particular case is easy: identify one form of data storage (for instance, a customer record) in the tables, build a schema, change the clients, and migrate the data. If your codebase is murky, you will need a history of the queries used in order to identify all the users of this data. Follow this process for 1-3 items at a time until you empty the old table.

      If you have a dev environment, pull the table apart and identify all the uses of the table, make a schema that supports them, and migrate your code to use the D

    • Then you may want to "try out" this book:

      http://books.slashdot.org/article.pl?sid=06/06/07/1458232 [slashdot.org]

      "Incidentally", it was written by... Stéphane Faroult. I've read it a few times, and used its lessons (there are no other words for it, really) to prove by figures that the redesign of the data model that I suggested could improve the performance by a factor of 10.

      Before reading that book, I knew that the data model was broken, but couldn't explain why. This book told me why. We use Oracle, but the le

    • Ah, that nasty anti-pattern. Definitely not a good model.

      If you need to store data that you can't define at design time, you're still better off creating an actual table, or set of tables. Yes it can make the rest of the application that references these values more complex, but it pays off in the long run.

      Another nasty anti-pattern we have to deal with is the compound value stored in a single column. The worst example I've seen involves storing XML strings to define the state of an object. The only relia

  • by Ukab the Great ( 87152 ) on Wednesday March 11, 2009 @01:25PM (#27154655)

    I've found that the biggest issues with SQL applications (writing rich clients) are not in performance tuning of the server/SQL but in dealing with ORM issues, where to draw the line between how much work the client does vs. how much the server does, reconciling changes made in memory with data in tables, concurrency, database architecture designed to cope in advance with poorly thought-out requirements you're given, etc. I'd hope that a book on refactoring SQL *applications* would touch on these issues.

  • Shoot the developers (Score:2, Interesting)

    by kafros ( 896657 )
    I am a developer, and my experience has shown that if you use one of Oracle, SQL Server, PostgreSQL or DB2 and application performance is poor, 99% of the time it is poor design from our (developers') side.

    Developers without a good understanding of relational databases and SQL often produce problems that cannot be solved by indexes, or by throwing transistors at them.

    It is so nice to see a "custom" made map implemented in the database using temporary tables instead of using the language's built-in map function
    • Re: (Score:2, Informative)

      I would cautiously agree that developers are less educated on SQL than they should be. The trouble seems to be in the different mind-sets required to solve the application problems. SQL is a declarative language, and it operates on your set of data as described. The host language, for want of a better term, is C or Java or some other iterative language, and it operates on each individual member of the set, stepwise. If you primarily think "stepwise" or algorithmically, you're already framing your


    • Custom made map implementation? You mean a view? :)

      This is why I like our new company architecture model. Essentially the application developers only see a view. The view should closely match the "screen" of the application. The view is designed and optimized by architects, and then, of course, all these views are available in our DBMS, managed by our DBAs. Never should developers make complex queries; they just query the view. Although some definitely have the touch, the majority routinely do th
    • Re: (Score:3, Interesting)

      by einhverfr ( 238914 )

      You throw transistors at your developers? ;-)

      Actually I agree with you. One of the big wins on the LedgerSMB project was the approach of moving all the main queries into user-defined functions (and then using them as named queries). One of the nice things about this approach is that dba-types (like myself) can address performance when it crops up, rather than dealing with the ABSOLUTELY horrendous approaches found in the SQL-Ledger codebase..... (Assembling queries as little bits of text strings, throwing

      • by julesh ( 229690 )

        You throw transistors at your developers? ;-)

        Yeah. I started with BC10s and 2N7000s, but it wasn't helping much. I found to get results, it has to be one of these [webpublications.com.au].

    • Yes, totally agree.

      As a developer I used to have a very poor understanding of basic things like indexing columns and optimizing SQL queries.

      A significant fraction of the workload was wrongly put on the application's shoulders. Or worse, multiple queries were launched instead of just one. Man, how wrong I was...

      I remember a case where a query used to take "minutes"; with the proper optimizations it became a mere second or so.

      Now I've got some sort of basic rules I try to respect.

      • If you have to parse query results
    • Developers without a good understanding of relational databases and SQL often produce problems that cannot be solved by indexes, or by throwing transistors at them.

      It is so nice to see a "custom" made map implemented in the database using temporary tables instead of using the language's built-in map functionality :-) sorting arrays using the database gets extra points (no kidding, I have seen this!),

      Dude, that isn't a lack of understanding regarding SQL. That's complete, utter incompetence.

  • Use views (Score:3, Insightful)

    by bytesex ( 112972 ) on Wednesday March 11, 2009 @01:27PM (#27154697) Homepage

    I think it's usually best to have views (whether with rows that are the result of code, or with a pure 'select' definition, or materialized ones) define what your application 'sees', so that you can always change the underlying data structure. That way refactoring becomes a bit easier.
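    A minimal sketch of the idea (table and column names are hypothetical): the application queries only v_customer, so the base tables underneath can later be split, renamed or denormalized without touching client code.

    CREATE VIEW v_customer AS
    SELECT c.customer_id,
           c.name,
           a.city
    FROM customer c
    JOIN address a ON a.customer_id = c.customer_id;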

    • I do something similar by using a 'facade' pattern to assemble several modules' output into one stored procedure.

    • I have not found this to be true. It sounds good in theory, but a view cannot be optimized in some cases and, depending on the architecture, can also have conversion problems that simply do not exist otherwise.

      If you are working at the problem strictly from the point of view of an application, oftentimes your perspective is correct, but if you're looking at raw performance issues, and outright bugs, sometimes views can have a significant performance impact. Also, views often only grab subsets of data, and whe

      • by bytesex ( 112972 )

        I know what you're saying; that's why I explicitly included views that are the result of code (can be done in Postgres and Oracle) and materialized views (Oracle). Materialized views especially can make you forget about performance issues; the data is just available as you want it to be, as if it were a proper table, indexed and all, and if your original data has been abstracted and normalized correctly, you don't have the headache of pushing hacks into your original data purely for performance reasons eith
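        In Oracle terms, something like the following (hypothetical tables; a complete refresh on demand has the fewest restrictions, while a fast refresh on commit is possible too but additionally requires materialized view logs on the base tables):

        -- precompute a multi-join aggregate once, then query it like a table
        CREATE MATERIALIZED VIEW mv_sales_by_region
        BUILD IMMEDIATE
        REFRESH COMPLETE ON DEMAND
        AS
        SELECT r.region_name, SUM(s.amount) AS total_sales
        FROM sales s
        JOIN stores t ON t.store_id = s.store_id
        JOIN regions r ON r.region_id = t.region_id
        GROUP BY r.region_name;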

        • Re: (Score:3, Interesting)

          by he-sk ( 103163 )

          I have to agree. In DB theory we learned that you should normalize your data for a good database design. However, materialized views can give HUGE performance gains by eliminating multistep joins between tables. You can't build custom indexes for queries that have these joins when the index condition is not in adjacent tables, and you always have to deal with large intermediate results.

          If the app is read-only and performance is critical, the best strategy is to use materialized views built from norma

          • Normalize ... how I hate that word, as so many people who are obsessed with normalization tend to lack basic common sense, and can't see beyond next week in terms of flexibility and expandability of the system.

            I remember one fix I had to do was on a simple address table. The original guy had implemented ZIP codes as an integer, because "everyone knows that US ZIP codes are numeric, and 5 digits maximum, right?". Hey, that's such an efficient use of space, it must be good?

            Then we had to expand our list of

            • by bytesex ( 112972 )

              1) That has nothing to do with normalization.
              2) So you had to pour the data from one column over into a temporary varchar one, delete the original column and then rename the temporary one to the original one? Poor you. Look how they make you work.

              • 1) Yes, I KNOW it has nothing to do with normalization, but too many people equate normalization with "the most efficient use of a suitable datatype for the data it will contain", without thinking that maybe next week, the data might change.

                2) Yes, and with a bit of forethought, the column probably would have been varchar in the first place. See point 1) for clarification. Oh, and try doing that on InnoDB on a live system, and come back to me in 5 hours when it's finished rebuilding the indexes, and your bo

    • Re: (Score:3, Informative)

      by bjourne ( 1034822 )
      That's silly. When you change the data model you must change the views too. Then you could just as well have changed how the application uses the database instead, and avoided a whole layer of indirection. Plus, views are read-only, so the client application still needs direct access to the tables to update data. Views are useful and very underappreciated, but not in the way you suggest.
      • Views are updatable in MS SQL Server, and probably Oracle and DB2. There are some restrictions, of course (no aggregates, for example, are allowed in the view). We use this feature all the time to hide schema changes from older code that we don't want to mess with.
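        A sketch of the trick (hypothetical rename): after a column is renamed in the base table, a view restores the old shape so legacy code keeps working, and because it is a single-table view with no aggregates it stays updatable in SQL Server.

        -- base table column renamed cust_name -> name; legacy code still sees cust_name
        CREATE VIEW customers_legacy AS
        SELECT customer_id, name AS cust_name
        FROM customers;

        UPDATE customers_legacy SET cust_name = 'Acme Corp' WHERE customer_id = 1;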
    • I think it's usually best to have views (whether with rows that are the result of code, or with a pure 'select' definition, or materialized ones) define what your application 'sees', so that you can always change the underlying datastructure.

      As another poster noted, I don't see much benefit from that over just changing what the app sees. At least not long term.

      I agree with Joel of Joel on Software, in that you should set up views as you stated to present the best view possible for the application - then chang

  • ...is probably a good read. He has a lively writing style; it kind of reminds me of Bertrand Meyer's "Object Oriented Software Construction". Anyhow, I've got both this book and Faroult's "The Art of SQL"... both are excellent.

  • In cases where the problem is query performance, I've had pretty good results with the techniques in "SQL Tuning" by Dan Tow.

    This, of course, only works if the rest of the database setup is more or less ok ;-)

    • Sometimes problems are remarkably difficult to address, especially in PL/(Pg)?sql environments.

      Let me give you an example.

      I was doing a rewrite of a bulk-processing portion of an application and we were running into performance issues. Eventually we ended up talking to some REAL PostgreSQL experts, reviewed the affected functions, etc. The function raised a few warning flags because we had to pass really large (5000x5) text arrays into it.

      At one point we all discovered that we could create

      • Just to note, the expert who was consulted actually gave us the correct answer at first, but then we all went down the wrong path. :-)

  • by avandesande ( 143899 ) on Wednesday March 11, 2009 @01:58PM (#27155181) Journal

    I have developed a design pattern using in-memory data table objects that can satisfy the most complex requirements without using any dynamic code.
    It also allows the queries and business logic to be broken into discrete chunks that are easily optimized and debugged.

  • Recently I was asked to improve the performance of a MySQL-based PHP web application. After turning on query caching and tuning the settings, I was left looking at the queries involved. It turned out the application was really the problem, not the queries. Just loading the main page involved several hundred queries. For example, settings were saved in a table; instead of loading all of the settings with a single query, it grabbed them one at a time. It wasn't like they had a few hundred variables
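    The fix is the classic one-round-trip rewrite; a sketch against a hypothetical settings table:

    -- what the application was doing: one round trip per setting
    SELECT value FROM settings WHERE name = 'site_title';
    SELECT value FROM settings WHERE name = 'theme';
    -- ...repeated a few hundred times per page load

    -- one query, one round trip; cache the name/value map in the application
    SELECT name, value FROM settings;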

  • I've found that the easiest way to make applications run great is to give the developers systems that are at least 2 generations older than what will be used in production (with the latest software, patches, drivers, etc)...

    Then, hold them to making the application perform as you want it to, on that hardware. They don't get paid (their final lump amount) until said application performs as you'd like on the 2 gen old hardware.

    Then, when you migrate to the production hardware, it's quite a bit faster, and do

  • I Only Know Oracle (Score:5, Informative)

    by bloobamator ( 939353 ) on Wednesday March 11, 2009 @09:59PM (#27161495)
    I only know Oracle but I've known it since version 5.0. Intimately. I haven't read the book but I read the review. Here are a few tips I've learned over the decades that you might find useful, just in case they aren't covered in the book:

    1) You have to establish a general rule of thumb for each production db whereby any one SQL statement that consumes more than x% of the db resources needs to be tuned. The value of x varies from db to db. If it cannot be tuned below x% then it needs to be refactored.
    2) Learn to use stored outlines. If you can get them to work they will save your ass and make you look like a total hero.
    3) Never turn your back on the optimizer. Really. Even for simple queries, even with the deepest stats.
    4) Bind variables are a necessity for high-repetition SQL. They are something you might want to avoid for reporting queries whose optimal plans depend on the user's input values. This is because a statement's plan is cached along with it the first time it is parsed, and if you use bind variables then the first plan you get is the plan you will always get, so long as the SQL remains in the shared pool.
    (You can sometimes work around this by turning off bind variable peeking, but consider doing it on a per-session basis instead of changing it system-wide. Scary! See the sketch after this list.)
    5) Nowadays a 32GB SGA is no big thing. Get yourselves a ton o' RAM and set up a keep pool in the buffer cache to pin your most important reporting indexes and tables. Partition your big reporting tables and maintain a sliding window of the most recent partition(s) in the keep pool.
    6) No sorting to-disk. Ever. If you cannot let the session have the PGA it needs to sort the query in memory then the SQL needs to be "refactored".
    7) Once you have eliminated most of the physical disk reads, it then becomes all about the buffer gets (BGs). When disk reads are low, the high-logical-BG queries immediately become the new top SQL. This is because logical BGs are all CPU, and your db is now CPU-bound, which is right where you want it. So from this point it's back to item #1 and we prune and tune (thanks KG!).
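    For what it's worth, here is what two of those knobs look like in Oracle syntax (the table name is hypothetical, and the bind-peeking switch is a hidden underscore parameter, so treat it as a last resort and test it first):

    -- item 4's workaround: disable bind variable peeking for this session only
    ALTER SESSION SET "_optim_peek_user_binds" = FALSE;

    -- item 5: pin an important reporting table in the keep pool
    ALTER TABLE sales_history STORAGE (BUFFER_POOL KEEP);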

    I could go on all day. Perhaps I should write a book?
