## Astonishing Speedup In Solving Linear SDD Systems

eldavojohn writes

*"A new paper (PDF) out of Carnegie Mellon University shows how to solve symmetric diagonally dominant linear systems much faster than before. The technique employs graph theory, randomized algorithms, and linear algebra to achieve an astonishing advantage over current methods to solve such systems. From the article: 'The result is a significant decrease in computer run times. The Gaussian elimination algorithm runs in time proportional to s^3, where s is the size of the SDD system as measured by the number of terms in the system, even when s is not much bigger the number of variables. The new algorithm, by comparison, has a run time of s*[log(s)]^2. That means, if s = 1 million, that the new algorithm run time would be about a billion times faster than Gaussian elimination.' Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper. Anyone who has modeled real-world systems will be able to tell you that speedups in linear algebra algorithms have a weighty effect on efficiency in simulations — especially as one approaches the theoretical limits of optimality. This research is currently being presented at the IEEE Symposium on Foundations of Computer Science."*
## May I be the first to say (Score:5, Funny)

Huh?

## Re: (Score:1)

## Re: (Score:2)

Huh??

## Re: (Score:2)

Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper.

Nevermind, I see now what ghostweed was talking about. Would help if he quoted that and made a new thread instead of replying to a random thread with seemingly random babble that makes Spamland [youtube.com] seem coherent.

## Re: (Score:3, Funny)

Tonight's the night I shall be talking about of flu the subject of word association football. This is a technique out a living much used in the practice makes perfect of psychoanalysister and brother and one that has occupied piper the majority rule of my attention squad by the right number one two three four the last five years to the memory. It is quite remarkable baker charlie how much the miller's son this so-called while you were out word association immigrants' problems influences the manner from heav

## Re: (Score:2)

New brooms sweep clean. Two wrongs don't make a right. A penny saved is a penny earned.

Interpretation: A flawed article will require a lot of effort to work with.

Loose lips sink ships.

Yogi Berra Variant: It's not over till it's over.

Common sense to a limit is Natural; But more of it is Genius.

The weak can never forgive. Forgiveness is the attribute of the strong. --Mahatma Gandhi

All frills and no knickers. As good as Greg. Theakston me, but Theakston my brother and you'll have God to answer to. First deserve th

## Re: (Score:2)

Frilled women without knickers ROCK!

## Re: (Score:2)

According to the email they are as good as Greg even!

## Re: (Score:2)

It's a fundamental paradigm shift in the basic efficiency of solving linear equations: a reduction from cubic time to nearly linear time.

## Re: (Score:2)

Let me spin it a lil'.

It could mean faster videogames, better porn clips decoding, and more responsive LOLCAT sites. Is OP interested, nao?

## Re:May I be the first to say (Score:5, Informative)

It lets us compute more precise simulations faster. This helps everything from better weather predictions, to faster simulations for industrial design, to game physics engines.

Basically if you can solve AX=B or Ax=b more quickly then you can run simulations faster. The paper shows a way to solve Ax=b faster provided A is diagonally dominant (it usually is). Here A, B, and b are given; X and x are unknowns. Upper case is a matrix, lower case is a vector.
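To make the Ax=b setup concrete, here is a minimal sketch (pure Python, toy data) of the classic Jacobi iteration, which is guaranteed to converge when A is strictly diagonally dominant. This is the old, slow way of exploiting diagonal dominance, not the paper's near-linear solver:

```python
# Jacobi iteration for Ax = b. Converges whenever A is strictly
# diagonally dominant; each sweep costs O(number of nonzeros).

def jacobi_solve(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = []
        for i in range(n):
            # Sum of off-diagonal contributions from the previous iterate.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

# Symmetric and strictly diagonally dominant: |4| > |-1| + |1|, etc.
A = [[4.0, -1.0, 1.0],
     [-1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [5.0, 10.0, 15.0]
x = jacobi_solve(A, b)
```

After 100 sweeps the residual Ax - b is negligible for a matrix this well conditioned.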

In theory simulations that took days (~10^15 cycles) will now take hundreds of microseconds (~10^5 cycles). In practice, we already have techniques that usually work on "most" inputs to these types of simulations. This paper presents a technique that works on "more" inputs to those systems. (But note that their runtime, n*[log(n)]^2, is expected time and not worst case time.) The potential impact is huge but the actual impact is yet to be seen.

Disclaimer: I've published in this area before so I have some background understanding, but it's been a while and I've read only the abstract of the paper. Once I read the full paper I'll have a better idea of whether it lives up to the hype. I'm hopeful because this could be a very important result. I'm also cautious because they use an approximation algorithm (that is what the "epsilon" is about) and also use expected time rather than worst case time. These factors may limit the usefulness of the algorithm.

## Re: (Score:3, Informative)

>"I'm also cautious because they use an approximation algorithm (that is what the " epilon" is about)"

The iterative methods normally used for linear systems that this could possibly replace all have epsilon (error) terms as well. I do work on the applied numerical methods side, so it's going to take me a bit to parse through the graph-theory heavy stuff here, but it is potentially very exciting.

Are there actually large-scale applications where Gaussian elimination is used that make this a valid

## Better (Score:2)

Better in the sense that the results for a given sized grid can be obtained more quickly. Hence, it is possible to dramatically increase the density of sampled points (a finer mesh grid), which yields more localized accuracy and hence potentially more accurate results. Generally speaking, the better the data that can be included, the more likely the result will make an accurate prediction given the boundary conditions of the model.

## Re: (Score:2, Interesting)

Or in limerick form, as published in a paper by Roe, LeVeque, and van Leer [freeshell.org],

So medium-term weather prediction

turns out to be merely a fiction.

It's just anyone's guess

if you ask how to dress

it will offer no useful restrictions.

## Wow? (Score:5, Funny)

Good gravy, now I understand why my non-tech wife just stares blankly at me when I'm describing my plain old IT job.

## Re: (Score:1, Funny)

She could also be autistic or wishing she were blind AND deaf.

## A Perfect Slashdot Article (Score:5, Insightful)

I'm serious. This is exactly what I want from Slashdot.

## Re: (Score:3, Insightful)

I was thinking something similar. I'm a fairly smart guy and this went way over my head. I love math and the problems it can solve when used creatively. I've just never had the energy to pursue it to a level these (and other) researchers have.

I feel myself chanting "I'm not worthy."

## Re:A Perfect Slashdot Article (Score:5, Insightful)

Mathematicians like it that way.

One of the math professors at Berkeley tells his students that math is messy - when you're working on a proof you start from one place, wander around for a while, then get to your destination.

Then you clean everything up and present it to the world as obvious.

## Re: (Score:2)

## Re: (Score:2)

Yes, but for mere mortals the intermediate steps are often critical to keep from getting lost on the way, at least given my level of ability. I for one still admit to having difficulty following proofs of the fundamental theorem of algebra, and I have the advantage of several hundred years of alternate proofs. Although the steps are clear enough, understanding the leaps between them can still be difficult.

## Re: (Score:2)

Sadly, weak math skills can keep one wandering for a long time. Conceptualizing the route from starting point to destination is often not easy, when one is not entirely sure what the destination is supposed to look like when you get there.

To the extent I understand the paper, it shows that certain error bounds can be described which, given various inequality relations, guarantee that the correct answer lies within an ever-more confined region of the solution space upon iteration, given the as

## Re: (Score:2)

## Re: (Score:2)

Interestingly enough, if you took college-level math, this was probably only one semester over your head, and it's not actually a particularly complicated topic. For example, if you took three semesters of calculus in total (including high-school calculus), what's being described here is in fact much easier to understand.

## Re: (Score:2)

## Re: (Score:2)

Which is exactly what we expect from Slashdot.

## Re: (Score:2)

Okay. Now you've confused me. Do you mean it drops causal references to advanced mathematics, in which case this article would have a direct impact on the advance of advanced mathematics itself, or did you mean to say it drops casual references to advanced mathematics, in which case it is assumed that the

## This is exactly what I want from Slashdot, too (Score:3, Insightful)

I mean the article "In Florida, a Cell Phone Network With No Need For a Spectrum License"...

What is that?! Not news for nerds... Possibly news for techies.

## Re:A Perfect Slashdot Article (Score:5, Insightful)

Why do you need to go to college? Here, watch Linear Algebra [mit.edu] and Algorithms [mit.edu].

Also there is no remotely advanced math dropped in TFA, though the actual paper of course is cutting edge research.

## Re:A Perfect Slashdot Article (Score:4, Funny)

Or Khan Academy [khanacademy.org]. Out of curiosity, I watched all of Khan's math topics that weren't over my head. Next, I'm on to Addition 2!

## Gilbert Strang is awesome. (Score:3, Interesting)

If I had had Gilbert Strang as an instructor for linear algebra instead of who I did have, maybe I would understand what the article is talking about. Having watched those videos, I repeatedly said to myself, "Oh, that's what we were doing!" Are covariance matrices SDD? If so, could this be used to speed up principal component analysis?

## covariance matrices are generally not SSD (Score:2)

## Re: (Score:2)

## Re: (Score:2)

I never met them in computer vision and image processing, for example.

On the other hand, I meet them all the time in statistical programming, which makes me think that as image processing algorithms get more probabilistically based, speedups like the one discussed here may be very useful to people working in your field. I know they will be in mine, and I'm looking forward to seeing practical implementations.

## Re: (Score:2)

Sadly, too true. In my experience, given the kind of covariance matrices that result from measures taken on fish, the condition numbers of such matrices almost enter a world of intractability, but not quite, so the choice of methods becomes quite important. The difficulty is learning enough of the underlying mathematics to understand precisely how to choose among various potential methods, and being able to generate enough data to obtain sufficiently stable covariance matrices to begi

## Re: (Score:2)

## Re:covariance matrices are generally not hardware (Score:2)

SSD - symmetric diagonally dominant

No, SSD is Solid State Drive, which is certainly a more common term to see on Slashdot than SDD. I had to read the summary a couple of times before I realized it was talking about mathematical arrays rather than persistent memory arrays. Then it made a lot more sense! :)

## Re: (Score:2)

To give some examples of places where this does matter... Anything that "feels" like heat flow or diffusion. In all of the above examples, you're inverting a matrix known, depending on the field, as either the graph Laplacian, or the admittanc
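The graph Laplacian mentioned here is SDD by construction, which is why so many network, heat-flow, and diffusion problems fall into the class this paper targets. A minimal sketch (pure Python) building the Laplacian of a small graph and checking the property:

```python
# Build the Laplacian of an undirected graph: degree on the diagonal,
# -1 for each edge. The result is always symmetric and diagonally dominant.

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def is_sdd(M):
    # Diagonal at least the sum of off-diagonal magnitudes, in every row.
    n = len(M)
    return all(M[i][i] >= sum(abs(M[i][j]) for j in range(n) if j != i)
               for i in range(n))

# A 4-node cycle graph: 0-1-2-3-0.
L = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Every row sums to zero, so the dominance holds with equality for a plain Laplacian.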

## Re: (Score:2)

SSD - symmetric diagonally dominant, meaning each diagonal element is at least the sum of the absolute values of all the other entries in the row. That is a very strong condition, which happens in very specific applications. I never met them in computer vision and image processing, for example.

I'm not familiar with using linear systems for simulations, but I would guess that any system that models inertia would end up SDD. Meaning a bias towards the existing state rather than any other cell's/particle's state. This is very different from imaging or audio, where the convolution for e.g. diffraction rings or echo cancellation isn't SDD.

## Re: (Score:2)

Are covariance matrices SDDs?

I think SDD is sufficient for positive definiteness, and therefore for a matrix to be a valid covariance matrix, but not necessary. Anyone got an answer on this?

## Re: (Score:2)

Really close.

Diagonally dominant ==> positive semidefinite

Strictly diagonally dominant ==> positive definite

For understanding these things, I really like Gershgorin's circles [wikipedia.org].

In terms of random variables... It's possible for a covariance matrix to be only positive semidefinite. E.g., consider a zero-mean random variable x, with E(x^2)=1, and another one y = cx, with c a constant. Then the covariance matrix is [[1 c][c c^2]], which is not full rank (the second column is just 'c' times the
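A quick numeric sketch of the Gershgorin-circle argument above (pure Python; the matrix is an arbitrary strictly diagonally dominant example, not from the paper). Every eigenvalue lies within distance R_i = sum of off-diagonal magnitudes of some diagonal entry, so if every interval [a_ii - R_i, a_ii + R_i] stays positive, the matrix is positive definite:

```python
# Gershgorin intervals for a symmetric matrix: each eigenvalue lies in
# at least one interval [a_ii - R_i, a_ii + R_i].

def gershgorin_intervals(A):
    n = len(A)
    out = []
    for i in range(n):
        r = sum(abs(A[i][j]) for j in range(n) if j != i)
        out.append((A[i][i] - r, A[i][i] + r))
    return out

# Strictly diagonally dominant, so every interval's lower end is positive.
A = [[4.0, -1.0, 1.0],
     [-1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
intervals = gershgorin_intervals(A)
```

All lower bounds here are positive, so all eigenvalues are positive: the matrix is PD.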

## Re: (Score:2)

Okay, thanks, that makes sense. (The covariance matrix of strictly linearly dependent r.v.s is kind of a degenerate case, though.) My main question is about implication in the other direction; I believe that DD ==> PSD and SDD ==> PD, but I'm guessing that the converse is not true, i.e. PSD =/=> DD and PD =/=> SDD.

## Re: (Score:2)

Yeah, the converse is not true in general.

E.g.,

    M = [  4 -2  5 ]
        [ -2  5 -5 ]
        [  5 -5  9 ]

It's PD but EVERY row violates the diagonal dominance property.

Basically, PD matrices are Gramians -- there exist vectors such that the elements of the matrix are the inner products of those vectors -- and, if you pick a bunch of vectors that are linearly-independent-but-not
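The counterexample above can be checked mechanically. A small sketch (pure Python) with a hand-rolled Cholesky factorization, which succeeds exactly when a symmetric matrix is positive definite:

```python
import math

def cholesky(A):
    # Returns the lower-triangular factor, or None if A is not PD.
    n = len(A)
    Lo = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(Lo[i][k] * Lo[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    return None  # a nonpositive pivot means not PD
                Lo[i][j] = math.sqrt(d)
            else:
                Lo[i][j] = (A[i][j] - s) / Lo[j][j]
    return Lo

M = [[4.0, -2.0, 5.0],
     [-2.0, 5.0, -5.0],
     [5.0, -5.0, 9.0]]

pd = cholesky(M) is not None
rows_violate_dd = [abs(M[i][i]) < sum(abs(M[i][j]) for j in range(3) if j != i)
                   for i in range(3)]
```

The factorization succeeds (so M is PD) even though the dominance check fails in every row.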

## Re: (Score:2)

Can't agree with you more. I've struggled to better understand linear algebra for quite some time and found Strang's clarity on the subject a tremendous help. The way he takes you from elementary ideas to the important concepts associated with the SVD is truly worth paying close attention to. It makes a lot of seemingly disjoint ideas come together in a meaningful way.

I only wish there were additional advanced lecture series to follow up where his two leave off.

## Re: (Score:2)

Let me summarize: the same math that drives Google's search (spectral graph theory) has been shown to be really useful in solving a very common class of linear equations, if you're willing to roll the dice (randomized algorithms).

## eldavojohn (Score:2)

## Re:A Perfect Slashdot Article (Score:5, Informative)

You more than likely did study this in college. This involves linear algebra, specifically the inversion of matrices and/or solving linear systems: Ax=b, where A is an mxn matrix, and x and b are nx1 and mx1 vectors respectively. What we'd like to do is solve for x using x=A^-1 b, where A^-1 is an inverse matrix of some kind. But getting the inverse is a notoriously difficult problem.

In fact, a large reason digital computers were invented at all was so that "large" matrices could be inverted. And by large, I mean 12x12 matrices (keep in mind this was the 1950s). Computer games, mp3 players, spreadsheets and the internet are all quite incidental byproducts of humanity's quest to invert ever larger matrices in less time. Though just about any halfway sophisticated piece of software will invert a matrix at some point.

The reason for this is as follows: When human beings want to solve any difficult problem, they approximate it with a linear model and solve the associated linear system. Hence the ability to solve linear systems quickly and accurately is of fundamental importance. This goes double in the modern world as we increasingly rely on the ability of computers to handle ever larger linear system--that is, to invert ever larger matrices.

The biggest problem with matrices, say nxn matrices, is that the time taken to solve them by Gaussian elimination goes up as O(n^3). So while your desktop could probably invert a 1000x1000 matrix in a few seconds, it would take a year to invert a million x million matrix, and around a million years to invert a billion x billion matrix. (Not that it could store either in memory.) While Gaussian elimination is a pretty poor algorithm efficiency-wise, these issues are unavoidable for general matrices.
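The O(n^3) cost comes directly from the three nested loops of the classic algorithm. A minimal sketch (pure Python, with partial pivoting) for illustration only; nobody would use this on large systems:

```python
def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution.
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):                       # n pivot columns ...
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):            # ... times n rows below ...
            m = A[i][k] / A[k][k]
            for j in range(k, n):            # ... times n columns = O(n^3)
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution is only O(n^2)
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # -> approx [0.8, 1.4]
```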

However, most matrices we actually want to invert in practice are "sparse" or "diagonally dominant". The first property means that most of the elements are zero, so in a billion x billion matrix, instead of storing 10^18 numbers, you'd only have to store a few billion. Diagonally dominant means that the largest entries are along the diagonal. Both of these mean you can take a lot of--often heuristic--shortcuts in your inversion algorithms to cut down on time.
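A minimal sketch of the sparse-storage idea: keep only the nonzero entries in a dictionary keyed by (row, col) instead of an n*n dense array. (Real packages use compressed formats like CSR; this dictionary-of-keys layout is just the simplest illustration.)

```python
class SparseMatrix:
    """Dictionary-of-keys sparse matrix: zeros are simply not stored."""

    def __init__(self, n):
        self.n = n
        self.data = {}  # (i, j) -> value

    def set(self, i, j, v):
        if v != 0.0:
            self.data[(i, j)] = v
        else:
            self.data.pop((i, j), None)

    def get(self, i, j):
        return self.data.get((i, j), 0.0)

    def nnz(self):
        return len(self.data)

# A diagonal n x n matrix stores n entries, not n^2.
n = 1000
M = SparseMatrix(n)
for i in range(n):
    M.set(i, i, 2.0)
```

Scaled up, a million x million diagonal-heavy matrix needs millions of stored entries rather than 10^12.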

These researchers claim O(s*log(s)^2) complexity for such systems. I suspect there are already a lot of O(s^2*log(s)) methods or the like, but even so this is a nice improvement. I doubt it's as big a breakthrough as claimed--I suspect it is hyped--but if it holds up, you could compare it to something like the Fast Fourier Transform speedup. Again, I doubt it's that large an improvement.

I'll finish by mentioning that this all falls under the umbrella of Linear Algebra, which is an absolutely essential skill in higher mathematics and computational mathematics. Any geek who prides themselves on their mathematical ability, but doesn't know their linear algebra (linear systems, the four fundamental subspaces, eigenvectors/values, rotation/transformation matrices, etc.), shouldn't be holding their skills in such high regard. If you don't know your LA, or have forgotten it, one of the best places to start is to watch Gilbert Strang's online lecture series [mit.edu] provided by MIT. These lectures are indispensable to anyone working with linear systems. After this, those who need in-depth analysis of matrix algorithms should turn to something like "Matrix Algorithms" by G. W. Stewart [amazon.com], which goes into more detail and shows the more canonical algorithms. A little knowledge of linear algebra goes a long way in mathematics.

...Or failing all that, you could just open up MATLAB... or Octave. I recommend the educational route first though.

## Re: (Score:2)

Then you would be better off visiting mathoverflow or stackoverflow. That's where the real wizards get the job done. Slashdot is directed more at the computer-social scene, which is important in some respects, but it's not the place to expect answers to technical questions, unless it's about the popularity of computer games and such.

## I think I have the fortitude (Score:2)

the real question is whether I have the intelligence to sufficiently understand it. I've been trying to better my understanding of linear algebra for a number of years now, but it doesn't come easily. It is at least comforting that I can begin to read research papers such as this and understand some of it.

Those who have more substantial knowledge and who make an effort to build ladders to help scale the wall of understanding, like Gilbert Strang, Arnold, Shilov, Gantmacher, and others, are really due seri

## Re: (Score:2)

I missed articles like these.

## Re: (Score:2)

It may also have an application in relational databases, where some searches (and joins) are in fact simple matrix equations.

This algorithm could possibly be implemented on the database server side to improve performance.

## Re: (Score:2)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics

I recommend you start visiting arXiv [arxiv.org] then.

Are you suggesting the OP, a self-described interested lay person, learns, or even merely follows, mathematical research by reading arXiv? If so, WTF!?

arXiv is a pre-print archive of original research articles, not exactly a welcoming place for a non-mathematician (or non-subject specialist; physics and computer science also use it). Even with an undergrad degree in mathematics, I find it a difficult (and/or useless) place to try to follow progress in the field, without the editorial assistance to filter the

## Re: (Score:2)

## Sounds like multigrid (Score:5, Interesting)

Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course, the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

## Re:Sounds like multigrid (Score:4, Interesting)

I had the same thought, but I guess multigrid is limited to specific matrices while the new algorithm seems to be more generic. Not sure though; my last contact with multigrid was 14 years ago (when algebraic multigrid was a hot topic in science).

## Re: (Score:3, Informative)

Maybe their method is applicable to a wider variety of problems.

From TFA, that's exactly what this is.

## Re: (Score:2)

Same thoughts here. I cannot understand why this is a huge leap practically speaking.

However, the new method might be of theoretical interest, since it uses techniques which are totally different from multigrid.

The fact that it applies only to SDD matrices is a bit disappointing though (multigrid can be applied to a wider range of problems).

## Re: (Score:3, Interesting)

Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course, the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

I think the wider variety of problems is exactly the major advantage. A symmetric diagonally dominant matrix is not necessarily sparse, but it does come up a lot. I know from experience that nodal-analysis based circuit simulators will end up like that most of the time, since each matrix cell represents a node connection, and when you have a circuit element between nodes i and j, it's obviously also between j and i (exceptions made when modeling things like transistors, because you need to handle the g

## Re: (Score:2)

how so?

## Re:Sounds like multigrid (Score:5, Funny)

I know an O(s) algorithm for solving linear systems, but it only works on diagonal matrices ...

## Grain of salt (Score:5, Informative)

Though the result sounds genuinely interesting, I'm not sure the comparison to Gaussian elimination is fair. The summary gives the O(s^3) asymptotic run time of Gaussian elimination for dense matrices, but there are much better alternatives for a sparse matrix, which is what the paper applies to. For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5). This result, however, seems to have the potential to be more widely applicable while producing a better asymptotic bound. So, sounds like great work, but not quite as amazing as the hype in the summary.

## Re: (Score:2)

Also the summary seems to have a fundamental misunderstanding of asymptotic analysis :)

## Re: (Score:3, Informative)

not really.

the method applies to the _general_ (i.e. dense) case of diagonally dominant (i.e. |A_i,i| >= sum over j != i of |A_i,j|, for all i) symmetric systems that show up all the time (Gramian matrices, for example), not just the sparse case.

## Re: (Score:2)

Touché.

## Re: (Score:2)

a couple thoughts:

1) approximate randomized algorithms are your friend, especially when we talk about things like (RAN|MAP|MLE|M)SAC in model fitting. For some classes of problems, randomized approximate algorithms are the *only* feasible attack we have in terms of time/space complexity.

2) your analysis fails to account for the hidden leading constant (otherwise there's no reason to consider quicksort when you have heapsort) and what happens when N gets to be large enough. the fact is that for big

## Re: (Score:3, Interesting)

This algorithm is supposed to run in O(n log(n)^2) time, which is actually better than O(n^1.5). Not a lot better mind you, but if you make n very large and don't have a lot of overheads, you should see improvements.

## Re: (Score:2)

But which is the new run-time? It's vastly different depending on order of operations:

(1000000*log(1000000))^2 -> (1000000*6)^2 -> 36,000,000,000,000 -> 36 million million

1000000*(log(1000000)^2) -> 1000000*(6^2) -> 1000000*36 -> 36 million
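For what it's worth, the paper's bound is the second reading, s*(log s)^2. A two-line check of both parenthesizations (base-10 log, matching the arithmetic above):

```python
import math

s = 1_000_000
wrong = (s * math.log10(s)) ** 2   # (s log s)^2: about 3.6e13
right = s * math.log10(s) ** 2     # s (log s)^2: about 3.6e7
```

Exponentiation binds tighter than multiplication in Python, so the second line already means s*((log s)^2).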

## Re: (Score:2)

My thoughts exactly...

Symmetric, (strictly) diagonally dominant matrices are great: non-singular, real spectrum, diagonalizable... In fact, purely from the fact that the eigenvalues can be bounded away from 0, many iterative methods will have provably fast O(n^2) convergence... beating the classic O(n^3) by a factor of n.

I'm not up to speed on the particular tricks used in the symmetric, DD regime, but certainly one would only "naively" try solving this using Gaussian elimination, due to the special

## WTF, Over? (Score:5, Funny)

Who knew that when I got up to check my dailies (incl. /.) that I'd end up with scrambled brains for breakfast.

## Re: (Score:2)

Mmmmm ..... brains!!!

## Take a "peak"? (Score:5, Funny)

Take a Mt. McKinley at the paper? Or a K2?

Spell checkers are not a replacement for actually reading what you've written.

## Re: (Score:2)

Maybe it was a pun?

## Re: (Score:1)

Or maybe spell checkers are not a replacement for the editor's work, as not all submitters in Slashdot have English as their first language and peek/peak aren't that hard to confuse (as then/than, farther/further, ...).

## 2,000 year-old problem for developers (Score:4, Funny)

I know what you mean. I remember Pythagoras, Euclid and Aristotle writing about sweating over their laptops while trying to smooth out their proofs and algorithms.

## Re: (Score:1, Funny)

Archimedes too.. he used to move the sand grains of his finite state machine with a stick to perform such calculations :)

## Why use a direct solver? (Score:3, Informative)

Anyway, if the system is symmetric and diagonally dominant, SOR is a standard choice among iterative solvers.
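For reference, a minimal SOR (successive over-relaxation) sketch in pure Python; omega = 1 reduces to Gauss-Seidel. This is the classical method being alluded to, not the paper's near-linear solver, and the matrix here is an arbitrary small example:

```python
def sor_solve(A, b, omega=1.2, iters=200):
    # SOR sweeps: update each component in place, over-relaxed by omega.
    # Converges for symmetric positive definite A when 0 < omega < 2.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

# Symmetric, diagonally dominant tridiagonal system; exact answer (1, 2, 3).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = sor_solve(A, b)
```

The choice omega = 1.2 is arbitrary here; tuning omega is what makes SOR faster than Gauss-Seidel on real problems.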

## Re: (Score:2)

Comparing the solver's complexity to Gaussian elimination isn't useful. No one in their right mind uses direct solvers on large linear systems.

It's useful because every other solver is compared to the same standard.

## Not worthy of the front page. (Score:5, Informative)

I agree with a previous poster who said it is unfair to compare this algorithm to Gaussian Elimination. Frankly, it seems to me that the poster has taken enough numerical analysis to know that GE is O(n^3), but is not familiar with the much wider body of research into efficient methods for solving structured linear systems.

Symmetric diagonally dominant linear systems are among the best conditioned systems you could possibly construct, which means it is a rare case that you would ever use GE. Instead, you would use an iterative method, such as conjugate gradients or a stationary method like the Chebyshev method discussed in the paper. While it does seem that they are proposing a novel way to construct an efficient iterative method, the truth is that once you've entered the realm of iterative methods it's not likely there is going to be a single approach that works best in every instance. Even the paper's authors agree that this is a "conceptually simple and possibly practical iterative solver." They rightly recognize that what works well on paper may not always prove useful in practice. So I think we should take the authors' own advice and wait for further research into this approach.

## Re: (Score:2)

In the case of things like Genetic Algorithms, the simplified problem is often the "One Max" problem and the hard problem is the degenerate "Trap" problem (where the next-best solutions are all maximally far away in state space.)

In the theoretical the

## Re: (Score:2)

Yes, that's a great algorithm, I use it too sometimes. Michael Saunders was one of my dissertation readers. Wickedly smart man, and more importantly, his software *works*.

## SSD (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Wait, it isn't? It said something about computer run times. Run time isn't a new name for fetch cycle?

## Does this mean anything for crypto? (Score:2)

Is this particular kind of funky advanced math used for anything in crypto or information security and does this new speedup help crack that?

## Re: (Score:2)

There are probably no implications for cryptography. Solving linear systems is a well-known polynomial problem. We can do it faster now for some well-structured matrices, but we already had quite fast algorithms for doing it. Cryptography usually does not rely on solving linear systems; they are way too easy to solve.

## Smart grid applications (Score:2)

This could be interesting for predictive control of the power grid based on a real time model. Current (no pun intended) technology can't do a real time simulation, or must make some crippling simplification assumptions, or is based on some proprietary algorithms (which may be junk, but we can't validate them). Problems arising from variability of wind energy sources or effects of faults on system stability could be handled much more easily.

Also, maybe someone can incorporate this into SPICE [wikipedia.org] so I can run s

## Applications (Score:3, Insightful)

Just to give people an idea of how useful this is, here are a couple examples of real-world problems that this may help with:

Aerodynamics simulations

Weather and ocean models

Electric and magnetic field models

Earthquake and geophysics models

It may also be useful for modeling wave processes (light, sound, etc) depending on how they're handled. As a general rule, any time you use a computer to solve problems involving electricity and magnetism or the pressures and stresses on solid or fluid materials, you're going to be encountering linear symmetric diagonally-dominant matrices.

In many applications, approximate solutions are faster and more useful than the exact solution described here, but it's still a big deal.

## The Classic Napoleon Dynamite Problem Solved! (Score:2)

Of course, if it can do that....

## Help? (Score:2)

I don't suppose one of you hardware/math geeks could explain to the hordes of lowly IT/storage/server guys what this actually means to us, pragmatically?

I can't even tell at which 'level' this speed-up would occur. Software support/implementation for SSDs? SSD controllers? What are the practical implications?

## Re: (Score:2)

SDD != SSD.

"SDD" stands for "symmetric diagonally dominant."

## Really cool (Score:3, Informative)

The nice thing about this is that things like electrical circuit simulations, FEA matrices, etc. tend to be SDD. This may speed things up a bit.

## Re: (Score:2)

## Re: (Score:2)

Right, sparse or loosely-coupled systems; that means these concepts can model data that is made up mostly of zeros.

It has to be sparse, because O (s * (ln (s))^2) doesn't even allow you to look at each of the s^2 elements of the matrix when s is large enough.

There must be some dependency on _how_ sparse the matrix is, or something like "O (s * (ln (s))^2) if the number of elements off the diagonal is at most O (s * ln (s))".

## Re: (Score:2)

Sparse symmetric systems are actually *very* common in the physical world. The numerical solution to any elliptic partial differential equation will generally boil down to the solution of a sparse symmetric matrix equation -- that includes the static Maxwell's equations for electricity and magnetism, pressure equations for fluid flow, and many others.

## Re: (Score:2)

If anyone gives a crap, the application that comes to mind is bitmap compression & maybe some steganography applications.

And FEA and electronic circuit simulation and molecular simulations (with and without solvation) and a whole lot o' analog crap that gets simulated every day. Very sparse matrix algebra. Although I still wonder how well the numerics hold up under stiffness, ill-conditioning, etc.

## Re: (Score:2)

Uh. Math?

1000000 ^ 3 = 1000000000000000000

1000000 * (ln(1000000))^2 ≈ 169000000 (natural log, ln(10^6) ≈ 13)

1000000000000000000 / 169000000 ≈ 5917159763

So it's more like 5.9 billion -- but "billion" is the right order of magnitude.
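The same arithmetic in code (natural log; the parent rounded ln(10^6) to 13, and the exact value lands near 5.2 billion, the same order of magnitude):

```python
import math

# Constant-free ratio of the two asymptotic bounds at s = 10^6.
s = 1_000_000
gauss = s ** 3                  # O(s^3) work
new = s * math.log(s) ** 2      # O(s (ln s)^2) work
ratio = gauss / new             # on the order of billions
```

Big-O constants are ignored here, of course, so this is only a back-of-the-envelope comparison.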

## Re: (Score:2)

Although I should qualify that -- we're talking big-O analysis, not small-o. Big-O only gets you complexity, not run-time. So you can't necessarily make direct comparisons like this. (The easy way to think about this is: you can design a system that performs an analysis in O(n) time, but if the run time of that system is actually 1 year per element, then it can be beat by a reasonably fast O(n^2) algorithm for almost every practical value of n.)

## Re: (Score:3, Informative)

## Re: (Score:2)

## Re: (Score:2)