
Astonishing Speedup In Solving Linear SDD Systems

eldavojohn writes "A new paper (PDF) out of Carnegie Mellon University shows how to solve symmetric diagonally dominant linear systems much faster than before. The technique employs graph theory, randomized algorithms, and linear algebra to achieve an astonishing advantage over current methods to solve such systems. From the article: 'The result is a significant decrease in computer run times. The Gaussian elimination algorithm runs in time proportional to s^3, where s is the size of the SDD system as measured by the number of terms in the system, even when s is not much bigger than the number of variables. The new algorithm, by comparison, has a run time of s*[log(s)]^2. That means, if s = 1 million, that the new algorithm run time would be about a billion times faster than Gaussian elimination.' Developers out there who maintain matrix packages and linear algebra tools might want to take a peek at the paper. Anyone who has modeled real-world systems will be able to tell you that speedups in linear algebra algorithms have a weighty effect on efficiency in simulations — especially as one approaches the theoretical limits of optimality. This research is currently being presented at the IEEE Symposium on Foundations of Computer Science."
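
As a rough back-of-the-envelope check on that "billion times faster" figure (a sketch in Python; it assumes base-2 logarithms and ignores constant factors, which the quoted claim does not specify):

    import math

    s = 1_000_000
    gaussian = s ** 3                         # operation-count scale for dense Gaussian elimination
    new = s * math.log2(s) ** 2               # the claimed near-linear bound, constants ignored
    print(f"speedup ~ {gaussian / new:.1e}")  # ~2.5e+09, i.e. roughly a billion
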
  • Grain of salt (Score:5, Informative)

    by l00sr ( 266426 ) on Friday October 22, 2010 @10:16AM (#33984928)

    Though the result sounds genuinely interesting, I'm not sure the comparison to Gaussian elimination is fair. The summary gives the O(s^3) asymptotic run time of Gaussian elimination for dense matrices, but there are much better alternatives for a sparse matrix, which is what the paper applies to. For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5). This result, however, seems to have the potential to be more widely applicable while producing a better asymptotic bound. So, sounds like great work, but not quite as amazing as the hype in the summary.
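
    For a sense of what sparse direct solvers already buy you (an illustrative SciPy sketch, not anything from the paper), a 2-D grid Laplacian is a classic SDD matrix with exactly the planar sparsity pattern mentioned above:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # 2-D grid Laplacian: SDD, with planar-graph sparsity
        k = 300                                    # k x k grid, so n = 90,000 unknowns
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
        A = (sp.kron(sp.eye(k), T) + sp.kron(T, sp.eye(k))).tocsc()
        b = np.ones(k * k)
        x = spla.spsolve(A, b)                     # sparse direct solve, nowhere near dense O(n^3) cost
        print(np.linalg.norm(A @ x - b))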

  • by EdZ ( 755139 ) on Friday October 22, 2010 @10:27AM (#33985034)

    Maybe their method is applicable to a wider variety of problems.

    From TFA, that's exactly what this is.

  • by ponraul ( 1233704 ) on Friday October 22, 2010 @10:29AM (#33985068)
    Comparing the solver's complexity to Gaussian elimination isn't useful. No one in their right mind uses direct solvers on large linear systems.

    Anyway, if the system is symmetric and diagonally dominant, SOR is a standard choice among iterative solvers.
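
    For readers who haven't met it, SOR is only a few lines; here is a minimal dense-matrix sketch in Python/NumPy (illustrative only, not the paper's method):

        import numpy as np

        def sor(A, b, omega=1.5, tol=1e-8, max_iter=10_000):
            """Successive over-relaxation for Ax = b (A assumed SDD; dense storage for clarity)."""
            n = len(b)
            x = np.zeros(n)
            for _ in range(max_iter):
                for i in range(n):
                    # updated entries x[:i] are used immediately (Gauss-Seidel sweep), then relaxed
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
                    break
            return x

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(sor(A, b), np.linalg.solve(A, b))
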
  • by mcg1969 ( 237263 ) on Friday October 22, 2010 @10:35AM (#33985094)

    I agree with a previous poster who said it is unfair to compare this algorithm to Gaussian Elimination. Frankly, it seems to me that the poster has taken enough numerical analysis to know that GE is O(n^3), but is not familiar with the much wider body of research into efficient methods for solving structured linear systems.

    Symmetric diagonally dominant linear systems are among the best conditioned systems you could possibly construct, which means that it is a rare case that you would ever use GE. Instead, you would use an iterative method, such as conjugate gradients or a stationary method like the Chebyshev method discussed in the paper. While it does seem that they are proposing a novel way to construct an efficient iterative method, the truth is that once you've entered the realm of iterative methods, it's not likely there is going to be a single approach that works best in every instance. Even the paper's authors agree that this is a "conceptually simple and possibly practical iterative solver." They rightly recognize that what works well on paper may not always prove useful in practice. So I think we should take the authors' own advice and wait for further research into this approach.
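
    In SciPy terms, the kind of solver being discussed is essentially preconditioned conjugate gradients; a small sketch follows, with a plain Jacobi (diagonal) preconditioner standing in for the far more sophisticated preconditioners this research is about:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        n = 10_000
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SDD model problem
        b = np.ones(n)

        # Diagonal (Jacobi) preconditioner -- a crude stand-in for the graph-based
        # preconditioners that this line of research constructs.
        M = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())

        x, info = cg(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))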

  • Re:Grain of salt (Score:3, Informative)

    by blackcoot ( 124938 ) on Friday October 22, 2010 @10:47AM (#33985202)

    Not really.

    The method applies to the _general_ (i.e. dense) case of symmetric diagonally dominant systems (i.e. |A_i,i| >= sum over j != i of |A_i,j| for every row i) that show up all the time (Gram matrices, for example), not just the sparse case.
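
    Checking that property is a one-liner in NumPy (row-wise diagonal dominance; just an illustration):

        import numpy as np

        def is_diagonally_dominant(A):
            """|A[i,i]| >= sum of |A[i,j]| over j != i, for every row i."""
            diag = np.abs(np.diag(A))
            off = np.abs(A).sum(axis=1) - diag
            return bool(np.all(diag >= off))

        A = np.array([[ 4.0, -1.0, -2.0],
                      [-1.0,  3.0, -1.0],
                      [-2.0, -1.0,  5.0]])
        print(is_diagonally_dominant(A))   # True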

  • I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops casual references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

    You more than likely did study this in college. This involves linear algebra, specifically the inversion of matrices and/or solving linear systems. Ax=b, where A is an mxn matrix, and x and b are nx1 and mx1 vectors respectively. What we'd like to do is solve for x using x=A^-1 b, where A^-1 is an inverse matrix of some kind. But getting the inverse is a notoriously difficult problem.
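
    In practice nobody forms A^-1 explicitly; you factor A and solve instead. A tiny NumPy illustration (a square, comfortably well-conditioned A is assumed here just for the demo):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # strong diagonal keeps it well conditioned
        b = rng.standard_normal(n)

        x_solve = np.linalg.solve(A, b)    # factor-and-solve; never builds A^-1
        x_inv = np.linalg.inv(A) @ b       # explicit inverse: slower and less accurate in general
        print(np.allclose(x_solve, x_inv))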

    In fact, a large reason digital computers were invented at all was so that "large" matrices could be inverted. And by large, I mean 12x12 matrices (keep in mind this was the 1950s). Computer games, mp3 players, spreadsheets and the internet are all quite incidental byproducts of humanity's quest to invert ever larger matrices in less time. Though just about any halfway sophisticated piece of software will invert a matrix at some point.

    The reason for this is as follows: When human beings want to solve any difficult problem, they approximate it with a linear model and solve the associated linear system. Hence the ability to solve linear systems quickly and accurately is of fundamental importance. This goes double in the modern world as we increasingly rely on the ability of computers to handle ever larger linear systems--that is, to invert ever larger matrices.

    The biggest problem with matrices, say nxn matrices, is that the time taken for solving them by Gauss elimination goes up as O(n^3). So while your desktop could probably invert a 1000x1000 matrix in a few seconds, it would take a year to invert a million x million matrix, and around a billion years to invert a billion x billion matrix. (Not that it could store either in memory.) While Gauss elimination is a pretty poor algorithm efficiency-wise, these issues are unavoidable for general matrices.

    However, most matrices we actually want to invert in practice are "sparse" or "diagonally dominant". The first property means that most of the elements are zero. So in a billion x billion matrix, instead of storing 10^18 numbers, you'd only have to store a few billion. Diagonally dominant means that each row's diagonal entry dominates (in magnitude) the rest of its row. Both of these mean you can take a lot of--often heuristic--shortcuts in your inversion algorithms to cut down on time.
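
    To put numbers on the storage difference (an illustrative SciPy sketch using a tridiagonal SDD matrix with about three nonzeros per row):

        import scipy.sparse as sp

        n = 1_000_000
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

        stored = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
        print(f"sparse storage: about {stored / 1e6:.0f} MB")
        print(f"dense storage would be about {n * n * 8 / 1e12:.0f} TB")   # 8 bytes per entry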

    These researchers claim O(s*[log(s)]^2) complexity for such systems. I suspect there are already algorithms in the O(s^2*log(s)) range or thereabouts, but even so this is a nice improvement. I doubt it's as big a breakthrough as the summary makes out--there's some hype here--but if it holds up, you could loosely compare it to the kind of speedup the Fast Fourier Transform provided. Again, I doubt it's that large of an improvement.

    I'll finish by mentioning that this all falls under the umbrella of Linear Algebra, which is an absolutely essential skill in higher mathematics and computational mathematics. Any geek who prides themselves on their mathematical ability but doesn't know their linear algebra (linear systems, the four fundamental subspaces, eigenvectors/values, rotation/transformation matrices, etc.) shouldn't be holding their skills in such high regard. If you don't know your LA, or have forgotten it, one of the best places to start is to watch Gilbert Strang's online lecture series [mit.edu] provided by MIT. These lectures are indispensable to anyone working with linear systems. After that, those who need an in-depth treatment of matrix algorithms should turn to something like "Matrix Algorithms" by G. W. Stewart [amazon.com], which goes into more detail and covers the more canonical algorithms. A little knowledge of linear algebra goes a long way in mathematics.

    ...Or failing all that, you could just open up MATLAB... or Octave. I recommend the educational route first though.

  • by mdmkolbe ( 944892 ) on Friday October 22, 2010 @12:07PM (#33986168)

    It lets us compute more precise simulations faster. This helps everything from better weather predictions, to faster simulations for industrial design, to game physics engines.

    Basically if you can solve X=A*B or Ax=b more quickly then you can run simulations faster. The paper shows a way to solve Ax=b faster provided A is diagonally dominant (it usually is). Here A, B and b are given, X and x are unknowns. Upper case is matrix. Lower case is vector.

    In theory, simulations that took days (~10^15 cycles) would now take tens of milliseconds (~10^7 cycles, since s*[log(s)]^2 with s around 10^5 is a few times 10^7). In practice, we already have techniques that usually work on "most" inputs to these types of simulations. This paper presents a technique that works on "more" inputs to those systems. (But note that their runtime, s*[log(s)]^2, is expected time and not worst case time.) The potential impact is huge but the actual impact is yet to be seen.

    Disclaimer: I've published in this area before so I have some background understanding, but it's been a while and I've read only the abstract of the paper. Once I read the full paper I'll have a better idea of whether it lives up to the hype. I'm hopeful because this could be a very important result. I'm also cautious because they use an approximation algorithm (that is what the "epsilon" is about) and also use expected time rather than worst case time. These factors may limit the usefulness of the algorithm.
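
    The "provided A is diagonally dominant" part is what makes even the simplest iterative schemes safe: plain Jacobi iteration, for example, is guaranteed to converge when A is strictly diagonally dominant. A toy sketch (nothing to do with the paper's algorithm):

        import numpy as np

        def jacobi(A, b, tol=1e-10, max_iter=10_000):
            """Plain Jacobi iteration; converges whenever A is strictly diagonally dominant."""
            D = np.diag(A)
            R = A - np.diagflat(D)
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_new = (b - R @ x) / D
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        A = np.array([[10.0, 2.0, 1.0],
                      [ 1.0, 8.0, 2.0],
                      [ 2.0, 1.0, 9.0]])
        b = np.array([13.0, 11.0, 12.0])
        print(jacobi(A, b))   # ~[1, 1, 1], matching np.linalg.solve(A, b)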

  • Really cool (Score:3, Informative)

    by frank_adrian314159 ( 469671 ) on Friday October 22, 2010 @01:16PM (#33987264) Homepage

    The nice thing about this is that things like electrical circuit simulations, FEA matrices, etc. tend to be SDD. This may speed things up a bit.
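
    Concretely (a toy nodal-analysis sketch, not from TFA): the conductance matrix of a resistor network is a weighted graph Laplacian, which is exactly the kind of SDD matrix involved here. Ground one node and solve for the remaining voltages:

        import numpy as np

        # Toy resistor network: 4 nodes, edges given as (i, j, conductance)
        edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5), (1, 3, 1.0)]
        n = 4
        L = np.zeros((n, n))
        for i, j, g in edges:                # weighted graph Laplacian = nodal conductance matrix
            L[i, i] += g; L[j, j] += g
            L[i, j] -= g; L[j, i] -= g

        I = np.array([0.0, 1.0, 0.0, -1.0])  # 1 A injected at node 1, drawn out at node 3
        v = np.zeros(n)
        v[1:] = np.linalg.solve(L[1:, 1:], I[1:])  # ground node 0; the reduced system is SDD and SPD
        print(v)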

  • Re:Grain of salt (Score:1, Informative)

    by Anonymous Coward on Friday October 22, 2010 @02:11PM (#33988106)

    If you took even a casual look at the actual paper (i.e., the first-page summary), you would have seen that the paper is primarily concerned with new sparsification techniques.

    In other words, the whole point is that it can quickly and robustly sparsify a general class of SDD systems. The resulting sparse approximation is then easily solved as you note.
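
    The paper's sparsifiers are built far more carefully (low-stretch spanning trees plus sampling with spectral guarantees); the toy sketch below only shows the bare idea of approximating a graph Laplacian by keeping a random subset of edges and reweighting them. Uniform sampling like this is NOT the paper's scheme and carries no quality guarantee:

        import numpy as np

        def laplacian(n, edges):
            L = np.zeros((n, n))
            for i, j, w in edges:
                L[i, i] += w; L[j, j] += w
                L[i, j] -= w; L[j, i] -= w
            return L

        rng = np.random.default_rng(1)
        n = 50
        edges = [(i, j, 1.0) for i in range(n) for j in range(i + 1, n)]   # dense (complete) graph

        # Keep each edge with probability p and scale kept weights by 1/p, so the
        # sparsified Laplacian is an unbiased (but uncontrolled) estimate of the original.
        p = 0.2
        kept = [(i, j, w / p) for i, j, w in edges if rng.random() < p]

        L_full, L_sparse = laplacian(n, edges), laplacian(n, kept)
        err = np.linalg.norm(L_sparse - L_full) / np.linalg.norm(L_full)
        print(f"{len(edges)} -> {len(kept)} edges, relative error {err:.2f}")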

  • by Overunderrated ( 1518503 ) on Friday October 22, 2010 @02:33PM (#33988454)

    >"I'm also cautious because they use an approximation algorithm (that is what the " epilon" is about)"

    The iterative methods normally used for linear systems that this could possibly replace all have epsilon (error) terms as well. I do work on the applied numerical methods side, so it's going to take me a bit to parse through the graph-theory heavy stuff here, but it is potentially very exciting.

    Are there actually large-scale applications where Gaussian elimination is used, making this a valid comparison? It's my understanding that the roundoff error accumulated by the "exact" direct Gaussian method on systems of millions of unknowns can be even worse than that of the far, far faster sparse iterative solvers.

  • Re:Simplex (Score:3, Informative)

    by TerranFury ( 726743 ) on Friday October 22, 2010 @04:02PM (#33989746)
    No, this is just Ax=b, without inequality constraints.
