
Astonishing Speedup In Solving Linear SDD Systems 157

eldavojohn writes "A new paper (PDF) out of Carnegie Mellon University shows how to solve symmetric diagonally dominant linear systems much faster than before. The technique employs graph theory, randomized algorithms, and linear algebra to achieve an astonishing advantage over current methods to solve such systems. From the article: 'The result is a significant decrease in computer run times. The Gaussian elimination algorithm runs in time proportional to s^3, where s is the size of the SDD system as measured by the number of terms in the system, even when s is not much bigger than the number of variables. The new algorithm, by comparison, has a run time of s*[log(s)]^2. That means, if s = 1 million, that the new algorithm run time would be about a billion times faster than Gaussian elimination.' Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper. Anyone who has modeled real-world systems will be able to tell you that speedups in linear algebra algorithms have a weighty effect on efficiency in simulations — especially as one approaches the theoretical limits of optimality. This research is currently being presented at the IEEE Symposium on Foundations of Computer Science."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday October 22, 2010 @10:10AM (#33984876)

    Huh?

    • take a peek = have a look; take a peak = summit Everest; fit of pique = be irritated. Oh, and sherbet is the dessert. Sure Bert is Sesame Street. Remember that.
      • Huh??

        • Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper.

          Never mind, I see now what ghostweed was talking about. It would help if he had quoted that and made a new thread instead of replying to a random thread with seemingly random babble that makes Spamland [youtube.com] seem coherent.

          • Re: (Score:3, Funny)

            by Gilmoure ( 18428 )

            Tonight's the night I shall be talking about of flu the subject of word association football. This is a technique out a living much used in the practice makes perfect of psychoanalysister and brother and one that has occupied piper the majority rule of my attention squad by the right number one two three four the last five years to the memory. It is quite remarkable baker charlie how much the miller's son this so-called while you were out word association immigrants' problems influences the manner from heav

            • New brooms sweep clear. Two wrongs don't make a right. A penny saved is a penny earned.
              Interpretation: A flawed article will require a lot of effort to work with Loose lips sink
              ships.
              Yogi Berra Variant: It's not over till it's over. Common sense to a limit is Natural; But
              more of it is Genius The weak can never forgive. Forgiveness is the attribute of the strong.
              --Mahatma Gandhi
              All frills and no knickers. As good as Greg. Theakston me, but Theakston my brother and
              you'll have God to answer to First deserve th

    • by Nadaka ( 224565 )

      It's a fundamental shift in the basic efficiency of solving these linear systems: a reduction from polynomial run time (cubic for Gaussian elimination) to nearly linear.

      • Let me spin it a lil'.
        It could mean faster videogames, better porn clips decoding, and more responsive LOLCAT sites. Is OP interested, nao?

    • by mdmkolbe ( 944892 ) on Friday October 22, 2010 @12:07PM (#33986168)

      It lets us compute more precise simulations faster. This helps everything from better weather predictions, to faster simulations for industrial design, to game physics engines.

      Basically if you can solve X=A*B or Ax=b more quickly then you can run simulations faster. The paper shows a way to solve Ax=b faster provided A is diagonally dominant (it usually is). Here A, B and b are given, X and x are unknowns. Upper case is matrix. Lower case is vector.

      In theory simulations that took days (~10^15 cycles) will now take hundreds of microseconds (~10^5 cycles). In practice, we already have techniques that usually work on "most" inputs to these types of simulations. This paper presents a technique that works on "more" inputs to those systems. (But note that their runtime, n*[log(n)]^2, is expected time and not worst case time.) The potential impact is huge but the actual impact is yet to be seen.

      Disclaimer: I've published in this area before so I have some background understanding, but it's been a while and I've read only the abstract of the paper. Once I read the full paper I'll have a better idea of whether it lives up to the hype. I'm hopeful because this could be a very important result. I'm also cautious because they use an approximation algorithm (that is what the "epsilon" is about) and also use expected time rather than worst case time. These factors may limit the usefulness of the algorithm.
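      To make the Ax = b setup above concrete, here is a minimal sketch (Python with NumPy/SciPy; the library, sizes, and names are chosen purely for illustration, and this is not the paper's algorithm). It builds a random sparse matrix, forces it to be symmetric diagonally dominant, then solves it both with a dense direct method and with conjugate gradients, the kind of iterative solver that SDD-specific techniques like the paper's are meant to accelerate.

        import numpy as np
        from scipy.sparse import random as sparse_random, diags
        from scipy.sparse.linalg import cg

        n = 2000
        B = sparse_random(n, n, density=0.002, random_state=0)
        A = B + B.T                                    # make it symmetric
        A = A + diags(abs(A).sum(axis=1).A1 + 1.0)     # add the row sums to the diagonal -> strictly SDD
        b = np.random.default_rng(0).standard_normal(n)

        x_direct = np.linalg.solve(A.toarray(), b)     # dense O(n^3) solve, for reference
        x_iter, info = cg(A.tocsr(), b)                # iterative solve that exploits sparsity and SDD structure
        print(info, np.linalg.norm(x_direct - x_iter)) # info 0 = converged; the difference is small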

      • Re: (Score:3, Informative)

        >"I'm also cautious because they use an approximation algorithm (that is what the "epsilon" is about)"

        The iterative methods normally used for linear systems that this could possibly replace all have epsilon (error) terms as well. I do work on the applied numerical methods side, so it's going to take me a bit to parse through the graph-theory heavy stuff here, but it is potentially very exciting.

        Are there actually large-scale applications where Gaussian elimination is used that make this a valid

  • Wow? (Score:5, Funny)

    by JargonScott ( 258797 ) on Friday October 22, 2010 @10:12AM (#33984896)

    Good gravy, now I understand why my non-tech wife just stares blankly at me when I'm describing my plain old IT job.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      She could also be autistic or wishing she were blind AND deaf.

  • by GrifterCC ( 673360 ) on Friday October 22, 2010 @10:12AM (#33984898)
    I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

    I'm serious. This is exactly what I want from Slashdot.
    • Re: (Score:3, Insightful)

      by jgagnon ( 1663075 )

      I was thinking something similar. I'm a fairly smart guy and this went way over my head. I love math and the problems it can solve when used creatively. I've just never had the energy to pursue it to a level these (and other) researchers have.

      I feel myself chanting "I'm not worthy."

      • by ceoyoyo ( 59147 ) on Friday October 22, 2010 @10:29AM (#33985064)

        Mathematicians like it that way.

        One of the math professors at Berkeley tells his students that math is messy - when you're working on a proof you start from one place, wander around for a while, then get to your destination.

        Then you clean everything up and present it to the world as obvious.

        • According to E. T. Bell, Gauss was particularly prone to condensing the proof so much that it was hard to understand. His motto was "few, but polished", amazing for so prolific a man.
          • Yes, but for mere mortals the intermediate steps are often critical to keep from getting lost on the way, at least given my level of abilities. I for one still admit to having difficulty following proofs of the fundamental theorem of algebra, and I've had the advantage of several hundred years of alternate proofs. Although the steps are clear enough, understanding the leaps between them can still be difficult.

        • Sadly, weak math skills can keep one wandering for a long time. Conceptualizing the route from starting point to destination is often not easy, when one is not entirely sure what the destination is supposed to look like when you get there.

          To the extent I understand the paper, it is that certain error bounds can be described that given various inequality relations specify that the correct answer can be shown to be within an ever-more confined region of solution space upon iteration given the as

        • by mldi ( 1598123 )
          So true. Contrary to popular belief, good mathematicians are the most creative people I know.
      • by Surt ( 22457 )

        Interestingly enough, if you took college-level math, this was probably only one semester over your head. And it's not actually a particularly complicated topic; for example, if you took three semesters of calculus in total (including high-school calculus), what's being described here is in fact much easier to understand.

      • I think it's just a matter of what you specialize in. To me, this story is really interesting, and the summary is perfectly understandable. But other times, I just scratch my head at stories that other people get excited about. I deal with solving sparse matrix systems every day, but I don't know the first thing about, say, the latest Mandriva release, or flash drivers. Or why I should care. Anyway, I'm off to read the paper now...
    • Yup. Then we see:

      Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper.

      Which is exactly what we expect from Slashdot.

    • by splutty ( 43475 )

      I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

      Okay. Now you've confused me. Do you mean it drops causal references to advanced mathematics, in which case this article would have a direct impact on the advance of advanced mathematics itself, or did you mean to say it drops casual references to advanced mathematics, in which case it is assumed that the

    • I mean the article "In Florida, a Cell Phone Network With No Need For a Spectrum License"...

      What is that?! Not news for nerds... Possibly news for techies.

    • by Peach Rings ( 1782482 ) on Friday October 22, 2010 @10:27AM (#33985038) Homepage

      Why do you need to go to college? Here, watch Linear Algebra [mit.edu] and Algorithms [mit.edu].

      Also there is no remotely advanced math dropped in TFA, though the actual paper of course is cutting edge research.

      • by jdgeorge ( 18767 ) on Friday October 22, 2010 @10:39AM (#33985118)

        Or Khan Academy [khanacademy.org]. Out of curiosity, I watched all of Khan's math topics that weren't over my head. Next, I'm on to Addition 2!

      • If I had had Gilbert Strang as an instructor for linear algebra instead of who I did have, maybe I would understand what the article is talking about. Having watched those videos, I repeatedly said to myself "oh, that's what we were doing!" Are covariance matrices SDDs? If so, could this be used to speed up principal component analysis?

          • SSD - symmetric diagonally dominant: means the diagonal element is larger than the sum of the absolute values of all the rest in the row. That is a very strong condition, which happens only in very specific applications. Never met them in the computer vision and image processing for example.
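            (A concrete check of that definition, sketched in Python/NumPy; the function name and the test matrices are made up for illustration.)

            import numpy as np

            def is_sdd(A, strict=True):
                # symmetric, and each diagonal entry outweighs the other entries in its row
                A = np.asarray(A, dtype=float)
                if not np.allclose(A, A.T):
                    return False
                diag = np.abs(np.diag(A))
                off = np.abs(A).sum(axis=1) - diag
                return bool(np.all(diag > off)) if strict else bool(np.all(diag >= off))

            print(is_sdd([[4, -1, -1], [-1, 4, -1], [-1, -1, 4]]))   # True
            print(is_sdd([[1, 2], [2, 1]]))                          # False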
          • Perhaps not, but I bet the condition number of any such matrix is also very high. Although perhaps rare in your case, they would still be one of the toughest nuts to crack when they do show up. So this is still good news, if their method works as advertised.
          • Never met them in the computer vision and image processing for example.

            On the other hand, I meet them all the time in statistical programming, which makes me think that as image processing algorithms get more probabilistically based, speedups like the one discussed here may be very useful to people working in your field. I know they will be in mine, and I'm looking forward to seeing practical implementations.

          • Sadly, too true. In my experience given the kind of covariance matrices that result from measures taken on fish, condition numbers of such matrices almost enter into a world of intractability, but not quite, so the choice of methods becomes quite important. The difficulty is learning enough of the underlying mathematics to understand precisely how to choose among various potential methods and in being able to generate enough data to make it possible to obtain sufficiently stable covariance matrices to begi

            • If the condition number is very high, all this means is that the matrix is semi-definite, which tells you that some of your random variables are actually linear combinations of others. A conjugate gradient solver should be able to compute the minimum-norm solution for you easily. Or do PCA; this'll directly tell you what the independent variables are (all the components corresponding to nonzero eigenvalues), and you can just use those coordinates and work with a smaller, full-rank matrix.
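              (A small sketch of that suggestion, in Python/NumPy with made-up data: build a rank-deficient covariance matrix, read off its effective rank from the eigendecomposition, and take the minimum-norm solution directly.)

              import numpy as np

              rng = np.random.default_rng(1)
              x = rng.standard_normal(500)
              data = np.column_stack([x, 2 * x, rng.standard_normal(500)])   # second column = 2 * first
              C = np.cov(data, rowvar=False)                                 # 3x3 covariance, only rank 2

              w, V = np.linalg.eigh(C)
              print("effective rank:", int(np.sum(w > 1e-10 * w.max())))     # 2: one variable is redundant

              b = rng.standard_normal(3)
              x_min_norm = np.linalg.lstsq(C, b, rcond=None)[0]              # minimum-norm least-squares solution
              print(x_min_norm)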
          • SSD - symmetric diagonally dominant

            No, SSD is Solid State Drive, which is certainly a more common term to see on Slashdot than SDD. I had to read the summary a couple of times before I realized it was talking about mathematical arrays rather than persistent memory arrays. Then it made a lot more sense! :)

          • To give some examples of places where this does matter... Anything that "feels" like heat flow or diffusion.

            • The computer graphics problem of radiosity rendering (where light makes multiple bounces.)
            • Solving for voltages in a resistor network with various sources in it.
            • Computing the pressure induced in an incompressible fluid by a force field (a key step in fluid simulation).

            In all of the above examples, you're inverting a matrix known, depending on the field, as either the graph Laplacian, or the admittanc
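            For the resistor-network case, a minimal sketch (Python/NumPy; edge list and conductances made up for illustration) of how the nodal matrix is assembled, and why it comes out symmetric and diagonally dominant: each conductance adds to two diagonal entries and subtracts from the two matching off-diagonal entries.

            import numpy as np

            edges = [(0, 1, 1.0), (1, 2, 0.5), (0, 2, 2.0), (2, 3, 1.0)]   # (node i, node j, conductance)
            n = 4
            L = np.zeros((n, n))
            for i, j, g in edges:
                L[i, i] += g
                L[j, j] += g
                L[i, j] -= g
                L[j, i] -= g

            off = np.abs(L).sum(axis=1) - np.abs(np.diag(L))
            print(np.allclose(L, L.T), bool(np.all(np.diag(L) >= off)))    # symmetric and (weakly) diagonally dominant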

          • by emt377 ( 610337 )

            SSD - symmetric diagonally dominant: means the diagonal element is larger than the sum of the absolute values of all the rest in the row. That is a very strong condition, which happens only in very specific applications. Never met them in the computer vision and image processing for example.

            I'm not familiar with using linear systems for simulations, but I would guess that any system that models inertia would end up SDD. Meaning a bias towards the existing state rather than any other cell's/particle's state. This is very different from imaging or audio, where the convolution for e.g. diffraction rings or echo cancellation isn't SDD.

        • Are covariance matrices SDDs?

          I think SDD is sufficient for positive definiteness, and therefore for a matrix to be a valid covariance matrix, but not necessary. Anyone got an answer on this?

          • Really close.

            Diagonally dominant ==> positive semidefinite

            Strictly diagonally dominant ==> positive definite

            For understanding these things, I really like Gershgorin's circles [wikipedia.org].

            In terms of random variables... It's possible for a covariance matrix to be only positive semidefinite. E.g., consider a zero-mean random variable x, with E(x^2)=1, and another one y = cx, with c a constant. Then the covariance matrix is [[1 c][c c^2]], which is not full rank (the second column is just 'c' times the
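            (A quick numerical check of both statements, in Python/NumPy with made-up example matrices: the first is symmetric and diagonally dominant, so its Gershgorin discs stay at or above zero and its eigenvalues are nonnegative; the second is the [[1, c], [c, c^2]] covariance of y = cx, which has a zero eigenvalue and so is positive semidefinite but not positive definite.)

            import numpy as np

            A = np.array([[ 4.0, -2.0, -2.0],
                          [-2.0,  5.0, -1.0],
                          [-2.0, -1.0,  3.0]])                 # symmetric, diagonally dominant
            radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A)) # Gershgorin disc radii
            print(bool(np.all(np.diag(A) - radii >= 0)))       # every disc stays at or above zero
            print(np.linalg.eigvalsh(A))                       # all eigenvalues nonnegative

            c = 3.0
            C = np.array([[1.0, c], [c, c * c]])
            print(np.linalg.eigvalsh(C))                       # 0 and 10: PSD, not PD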

            • Okay, thanks, that makes sense. (The covariance matrix of strictly linearly dependent r.v.s is kind of a degenerate case, though.) My main question is about implication in the other direction; I believe that DD ==> PSD and SDD ==> PD, but I'm guessing that the inverse is not true, i.e. PSD =/=> DD and PD =/=> SDD.

              • Yeah, the converse is not true in general.
                E.g.,

                    [  4  -2   5 ]
                M = [ -2   5  -5 ]
                    [  5  -5   9 ]

                It's PD but EVERY row violates the diagonal dominance property.

                Basically, PD matrices are Grammians -- there exist vectors such that the elements of the matrix are the inner products of those vectors -- and, if you pick a bunch of vectors that are linearly-independent-but-not
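                (Verifying that counterexample numerically, in Python/NumPy purely as an illustration: the eigenvalues of M are all strictly positive, yet in every row the diagonal entry is smaller than the sum of the magnitudes of the off-diagonal entries.)

                import numpy as np

                M = np.array([[ 4.0, -2.0,  5.0],
                              [-2.0,  5.0, -5.0],
                              [ 5.0, -5.0,  9.0]])
                print(np.linalg.eigvalsh(M))                         # all > 0, so positive definite
                off = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
                print(np.diag(M) < off)                              # [True True True]: dominance fails in every row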
      • Can't agree with you more. I've struggled to better understand linear algebra for quite some time and found Strang's clarity on the subject of tremendous help. The way he takes you from elementary ideas to the important concepts associated with SVD is truly worth paying close attention to. It makes a lot of seemingly disconnected ideas come together in a meaningful way.

        I only wish there were more advanced lecture series to follow up on where his two lecture series leave off.

    • let me summarize: the same math that drives Google's search (spectral graph theory) has been shown to be really useful in solving a very common class of linear equations, if you're willing to roll the dice (randomized algorithms).

    • props to eldavojohn too - most of the stuff he posts is interesting, and the summaries are usually good too.
    • I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

      You more than likely did study this in college. This involves linear algebra, specifically the inversion of matrices and/or solving linear systems: Ax=b, where A is an mxn matrix, and x and b are nx1 and mx1 vectors respectively. What we'd like to do is solve for x using x=A^-1 b, where A^-1 is an inverse matrix of some kind. But getting the inverse is a notoriously difficult problem.

      In fact, a large reason digital computers were invented at all was so that "large" matrices could be inverted. And by large, I mean 12x12 matrices (keep in mind this was the 1950s). Computer games, mp3 players, spreadsheets and the internet are all quite incidental byproducts of humanity's quest to invert ever larger matrices in less time. Though just about any halfway sophisticated piece of software will invert a matrix at some point.

      The reason for this is as follows: When human beings want to solve any difficult problem, they approximate it with a linear model and solve the associated linear system. Hence the ability to solve linear systems quickly and accurately is of fundamental importance. This goes double in the modern world as we increasingly rely on the ability of computers to handle ever larger linear system--that is, to invert ever larger matrices.

      The biggest problem with matrices, say nxn matrices, is that the time taken for solving them by Gauss elimination goes up as O(n^3). So while your desktop could probably invert a 1000x1000 matrix in a few seconds, it would take a year to invert a million x million matrix, and would take around a million years to invert a billion x billion matrix. (Not that it could store either in memory). While Gauss elimination is a pretty poor algorithm efficiency-wise, these issues are unavoidable for general matrices.

      However, most matrices we actually want to invert in practice are "sparse" or "diagonally dominant". The first property means that most of the elements are zero. So in a billion x billion matrix, instead of storing 10^18 numbers, you'd only have to store a few billion. Diagonally dominant means that the largest entries are along the diagonal. Both of these mean you can take a lot of--often heuristic--shortcuts in your inversion algorithms to cut down on time.

      These researchers claim O(s*log(s)^2) complexity for such systems. I suspect there are probably already algorithms at O(s^2*log(s)) or the like, but even so this is a nice improvement. I doubt it's as big a breakthrough as the hype suggests, but if it holds up you can loosely compare it to something like the Fast Fourier Transform speedup. Again, I doubt it's that large of an improvement.

      I'll finish by mentioning that this all falls under the umbrella of Linear Algebra, which is an absolutely essential skill of higher mathematics and computational mathematics. Any geek who prides themselves on their mathematical ability but doesn't know their linear algebra (linear systems, the four fundamental subspaces, eigenvectors/values, rotation/transformation matrices, etc.) shouldn't be holding their skills in such high regard. If you don't know your LA, or have forgotten it, one of the best places to start is to watch Gilbert Strang's online lecture series [mit.edu] provided by MIT. These lectures are indispensable to anyone working with linear systems. After this, those who need an in-depth analysis of matrix algorithms should turn to something like "Matrix Algorithms" by G. W. Stewart [amazon.com], which goes into more detail and shows the more canonical algorithms. A little knowledge of linear algebra goes a long way in mathematics.

      ...Or failing all that, you could just open up MATLAB... or Octave. I recommend the educational route first though.
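      To put a rough number on the sparse-versus-dense point, a small sketch (in Python/SciPy rather than MATLAB/Octave, purely as an illustration; the matrix is a standard tridiagonal SDD system and the size is arbitrary): stored sparsely it takes about 3n numbers instead of n^2, and a sparse direct solver handles a size for which the dense matrix alone would need roughly 8 terabytes of storage, never mind running O(n^3) Gaussian elimination on it.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import spsolve

        n = 1_000_000
        main = 2.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = diags([off, main, off], offsets=[-1, 0, 1], format="csc")   # tridiagonal, SDD, ~3n nonzeros
        b = np.ones(n)

        x = spsolve(A, b)                       # sparse direct solve finishes in seconds
        print(np.abs(A @ x - b).max())          # residual check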

    • Then you would be better off visiting MathOverflow or Stack Overflow. That's where the real wizards get the job done. Slashdot is directed more to the computer-social scene, which is important in some respects, but it's not the place to expect answers to technical questions, unless it's about the popularity of computer games and such.

    • the real question is whether I have the intelligence to sufficiently understand it. I've been trying to better my understanding of linear algebra for a number of years now, but it doesn't come easily. It is at least comforting that I can begin to read research papers such as this and understand some of it.

      Those who have more substantial knowledge and who make an effort to build ladders to help scale the wall of understanding, like Gilbert Strang, Arnold, Shilov, Gantmacher, and others, are really due seri

    • I missed articles like these.

  • by azaris ( 699901 ) on Friday October 22, 2010 @10:13AM (#33984906) Journal

    Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

    Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

    • by HighFlyer ( 60002 ) on Friday October 22, 2010 @10:24AM (#33985000) Homepage

      I had the same thought, but I guess multigrid is limited to specific matrices while the new algorithm seems to be more generic. Not sure though; my last contact with multigrid was 14 years ago (when algebraic multigrid was a hot topic in science).

    • Re: (Score:3, Informative)

      by EdZ ( 755139 )

      Maybe their method is applicable to a wider variety of problems.

      From TFA, that's exactly what this is.

      • Same thoughts here. I cannot understand why this is a huge leap practically speaking.
        However, the new method might be of theoretical interest, since it uses techniques which are totally different from multigrid.
        The fact that it applies only to SDD matrices is a bit disappointing though (multigrid can be applied to a wider range of problems).

    • Re: (Score:3, Interesting)

      by TrekkieGod ( 627867 )

      Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

      Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

      I think the wider variety of problems is exactly the major advantage. A symmetric diagonally dominant matrix is not necessarily sparse, but it does come up a lot. I know from experience that nodal-analysis based circuit simulators will end up like that most of the time, since each matrix cell represents a node connection, and when you have a circuit element between node i and j, it's obviously also between j and i (exceptions made when modeling things like transistors, because you need to handle the g

    • how so?

    • by DriedClexler ( 814907 ) on Friday October 22, 2010 @11:12AM (#33985410)

      I know an O(s) algorithm for solving linear systems, but it only works on diagonal matrices ...

  • Grain of salt (Score:5, Informative)

    by l00sr ( 266426 ) on Friday October 22, 2010 @10:16AM (#33984928)

    Though the result sounds genuinely interesting, I'm not sure the comparison to Gaussian elimination is fair. The summary gives the O(s^3) asymptotic run time of Gaussian elimination for dense matrices, but there are much better alternatives for a sparse matrix, which is what the paper applies to. For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5). This result, however, seems to have the potential to be more widely applicable while producing a better asymptotic bound. So, sounds like great work, but not quite as amazing as the hype in the summary.

    • Also the summary seems to have a fundamental misunderstanding of asymptotic analysis :)

    • Re: (Score:3, Informative)

      by blackcoot ( 124938 )

      not really.

      the method applies to the _general_ (i.e. dense) case of diagonally dominant (i.e. |A_i,i| >= sum over j != i of |A_i,j|, for all i) symmetric systems that show up all the time (Gramian matrices, for example), not just the sparse case.

    • Re: (Score:3, Interesting)

      For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5).

      This algorithm is supposed to run in O(n log(n)^2) time, which is actually better than O(n^1.5). Not a lot better mind you, but if you make n very large and don't have a lot of overhead, you should see improvements.

      • So using TFA's numbers, which is it? The original run time would have been: 1M^3 = 1,000,000,000,000,000,000 -> 1,000,000,000,000 million -> 1,000,000 million million -> 1 million million million.
        But which is the new run time? It's vastly different depending on the order of operations:
        (1,000,000 * log(1,000,000))^2 -> (1,000,000 * 6)^2 -> 36,000,000,000,000 -> 36,000,000M -> 36 million million
        1,000,000 * (log(1,000,000)^2) -> 1,000,000 * 6^2 -> 1,000,000 * 36 -> 36 million
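        Working those numbers out (base-2 logs here; the base only changes things by a constant factor), the intended reading is s*(log s)^2, which is also what makes the summary's "about a billion times faster" claim come out right:

        import math

        s = 10**6
        gauss    = s**3                        # ~1e18
        reading1 = (s * math.log2(s))**2       # (s*log s)^2  ~ 4e14
        reading2 = s * math.log2(s)**2         # s*(log s)^2  ~ 4e8  <- the intended reading
        print(f"{gauss:.1e}  {reading1:.1e}  {reading2:.1e}  speedup ~ {gauss / reading2:.1e}")
        # prints roughly: 1.0e+18  4.0e+14  4.0e+08  speedup ~ 2.5e+09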
    • My thoughts exactly...

      Symmetric, (Strictly) diagonally dominant matrices are great: Non-singular, real spectrum, diagonalizable...In fact, purely from the fact that the eigenvalues can be bounded away from 0, many iterative methods will have provably fast O(n^2) convergence...beating the classic O(n^3) by an order of magnitude.

      I'm not up to speed in the particular tricks used for the Symmetric, DD regime, but certainly one would only "naively" try solving this using Gaussian elimination, due to the special

  • WTF, Over? (Score:5, Funny)

    by toygeek ( 473120 ) on Friday October 22, 2010 @10:19AM (#33984960) Journal

    Who knew that when I got up to check my daily's (Incl /.) that I'd end up with scrambled brains for breakfast.

    • Who knew that when I got up to check my daily's (Incl /.) that I'd end up with scrambled brains for breakfast.

      Mmmmm ..... brains!!!

  • by jcwren ( 166164 ) on Friday October 22, 2010 @10:20AM (#33984964) Homepage

    Take a Mt. McKinley at the paper? Or a K2?

    Spell checkers are not a replacement for actually reading what you've written.

    • Maybe it was a pun?

    • Or maybe spell checkers are not a replacement for the editor's work, as not all submitters on Slashdot have English as their first language, and peek/peak aren't that hard to confuse (like then/than, farther/further, ...).

  • by digitaldc ( 879047 ) * on Friday October 22, 2010 @10:20AM (#33984968)
    "Solving these linear systems can be time consuming on even the fastest computers and is an enduring computational problem that mathematicians have sweated for 2,000 years."

    I know what you mean. I remember Pythagoras, Euclid and Aristotle writing about sweating over their laptops while trying to smooth out their proofs and algorithms.
    • Re: (Score:1, Funny)

      by Anonymous Coward

      Archimedes too.. he used to move the sand grains of his finite state machine with a stick to perform such calculations :)

  • by ponraul ( 1233704 ) on Friday October 22, 2010 @10:29AM (#33985068)
    Comparing the solver's complexity to Gaussian-Elimination isn't useful. No one in their right mind uses direct solvers on large linear systems.

    Anyway, if the system is symmetric and diagonally dominant, SOR is the standard choice among iterative solvers.
    • by emt377 ( 610337 )

      Comparing the solver's complexity to Gaussian-Elimination isn't useful. No one in their right mind uses direct solvers on large linear systems.

      It's useful because every other solver is compared to the same standard.

  • by mcg1969 ( 237263 ) on Friday October 22, 2010 @10:35AM (#33985094)

    I agree with a previous poster who said it is unfair to compare this algorithm to Gaussian Elimination. Frankly, it seems to me that the poster has taken enough numerical analysis to know that GE is O(n^3), but is not familiar with the much wider body of research into efficient methods for solving structured linear systems.

    Symmetric diagonally dominant linear systems are among the best conditioned systems you could possibly construct, which means that it is a rare case that you would ever use GE. Instead, you would use an iterative method, such as conjugate gradients or a stationary method like the Chebyshev method discussed in the paper. While it does seem that they are proposing a novel way to construct an efficient iterative method, the truth is that once you've entered the realm of iterative methods it's not likely there is going to be a single approach that works best in every instance. Even the paper's authors agree that this is a "conceptually simple and possibly practical iterative solver." They rightly recognize that what works well on paper may not always prove useful in practice. So I think we should take the authors' own advice and wait for further research into this approach.
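    As a toy illustration of why iterative methods are the natural choice here (plain Jacobi iteration in Python/NumPy, far cruder than the Chebyshev or conjugate gradient methods mentioned above; the matrix and sizes are made up): when the system is strictly diagonally dominant, even this simplest stationary iteration converges in a few dozen sweeps, each costing only one matrix-vector product.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 500
      A = rng.standard_normal((n, n))
      A = (A + A.T) / 2
      np.fill_diagonal(A, 2 * np.abs(A).sum(axis=1) + 1.0)   # force strict diagonal dominance
      b = rng.standard_normal(n)

      D = np.diag(A)
      x = np.zeros(n)
      for k in range(200):                                    # Jacobi: x <- D^-1 (b - (A - D) x)
          x = (b - (A @ x - D * x)) / D
          if np.linalg.norm(A @ x - b) <= 1e-10 * np.linalg.norm(b):
              break
      print(k + 1, np.linalg.norm(A @ x - b))                 # converges in a few dozen iterations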

    • You are correct. There are quite a few algorithm fields where papers use 'standardized' problems to compare their results to other methods, but those problems contain only extremely simplified and extremely degenerate cases, rather than practical use cases.

      In the case of things like Genetic Algorithms, the simplified problem is often the "One Max" problem and the hard problem is the degenerate "Trap" problem (where the next-best solutions are all maximally far away in state space.)

      In the theoretical the
  • by allanw ( 842185 )
    Who else read the title and was excited thinking there was an astonishing speed-up in SSD systems?
  • Is this particular kind of funky advanced math used for anything in crypto or information security and does this new speedup help crack that?

    • by godrik ( 1287354 )

      There are probably no implications for cryptography. Solving linear systems is a well-known polynomial problem. We can do it faster now for some well-structured matrices, but we already had quite fast algorithms for doing it. Cryptography usually does not rely on solving linear systems; they are way too easy to solve.

  • This could be interesting for predictive control of the power grid based on a real-time model. Current (no pun intended) technology can't do a real-time simulation, or must make some crippling simplifying assumptions, or is based on some proprietary algorithms (which may be junk, but we can't validate them). Problems arising from variability of wind energy sources or effects of faults on system stability could be handled much more easily.

    Also, maybe someone can incorporate this into SPICE [wikipedia.org] so I can run s

  • Applications (Score:3, Insightful)

    by goodmanj ( 234846 ) on Friday October 22, 2010 @11:28AM (#33985568)

    Just to give people an idea of how useful this is, here are a couple examples of real-world problems that this may help with:

    Aerodynamics simulations
    Weather and ocean models
    Electric and magnetic field models
    Earthquake and geophysics models

    It may also be useful for modeling wave processes (light, sound, etc) depending on how they're handled. As a general rule, any time you use a computer to solve problems involving electricity and magnetism or the pressures and stresses on solid or fluid materials, you're going to be encountering linear symmetric diagonally-dominant matrices.

    In many applications, approximate solutions are faster and more useful than the exact solution described here, but it's still a big deal.

  • The world has long been awaiting the algorithm that can predict whether a Netflix viewer will love or hate Napoleon Dynamite.

    Of course, if it can do that....

  • I don't suppose one of you hardware/math geeks could explain to the hordes of lowly IT/storage/server guys what this actually means to us, pragmatically?

    I can't even tell at which 'level' this speed-up would occur. Software support/implementation for SSDs? SSD controllers? What are the practical implications?

  • Really cool (Score:3, Informative)

    by frank_adrian314159 ( 469671 ) on Friday October 22, 2010 @01:16PM (#33987264) Homepage

    The nice thing about this is that things like electrical circuit simulations, FEA matrices, etc. tend to be SDD. This may speed things up a bit.
