Linus Torvalds Sees Lots of Hardware Headaches Ahead (devops.com)

Linux founder Linus Torvalds "warns that managing software is about to become a lot more challenging, largely because of two hardware issues that are beyond the control of DevOps teams," reports DevOps.com.

An anonymous reader shares the site's report on Torvalds' remarks at the KubeCon + CloudNativeCon + Open Source Summit China conference: The first, Torvalds said, is the steady stream of patches being generated for new cybersecurity issues related to the speculative execution model that Intel and other processor vendors rely on to accelerate performance... Each of those bugs requires another patch to the Linux kernel that, depending on when it arrives, can require painful updates to the kernel, Torvalds told conference attendees. Short of disabling hyperthreading altogether to eliminate reliance on speculative execution, each patch requires organizations to update both the Linux kernel and the BIOS to ensure security. Turning off hyperthreading eliminates the patch management issue, but also reduces application performance by about 15 percent.
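
For readers who want to see where their own machines stand, recent kernels expose the mitigation status for each of these bugs under /sys/devices/system/cpu/vulnerabilities, and SMT (hyperthreading) can be toggled at runtime through /sys/devices/system/cpu/smt/control. A minimal C sketch, assuming a Linux kernel new enough to carry those sysfs entries:

    /* List the kernel's reported status for speculative-execution bugs.
     * Assumes Linux >= 4.15 (vulnerabilities dir) and >= 4.19 (smt/control). */
    #include <dirent.h>
    #include <stdio.h>

    static void print_file(const char *path, const char *label)
    {
        char buf[256];
        FILE *f = fopen(path, "r");
        if (!f)
            return;
        if (fgets(buf, sizeof buf, f))
            printf("%-28s %s", label, buf);   /* sysfs lines already end in '\n' */
        fclose(f);
    }

    int main(void)
    {
        const char *dir = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dir);
        if (d) {
            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                char path[512];
                if (e->d_name[0] == '.')
                    continue;
                snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
                print_file(path, e->d_name);
            }
            closedir(d);
        }
        /* reports "on", "off", "forceoff" or "notsupported" */
        print_file("/sys/devices/system/cpu/smt/control", "smt/control");
        return 0;
    }

Writing "off" to smt/control as root is the runtime equivalent of disabling hyperthreading in the BIOS.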

The second major hardware issue looms a little further over the horizon, Torvalds said. Moore's Law has guaranteed a doubling of hardware performance every 18 months for decades. But as processor vendors approach the limits of Moore's Law, many developers will need to reoptimize their code to continue achieving increased performance. In many cases, that requirement will be a shock to many development teams that have counted on those performance improvements to make up for inefficient coding processes, he said.

  • by AlanObject ( 3603453 ) on Sunday June 30, 2019 @11:51AM (#58849994)
    Rong:

    Moore's Law has guaranteed a doubling of hardware performance every 18 months for decades.

    Rite:

    Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.
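
    For what it's worth, the gap between the two formulations is not cosmetic: doubling every 18 months compounds to roughly 100x over a decade, while doubling every two years compounds to about 32x. A throwaway sketch of the arithmetic (build with something like cc growth.c -lm):

        /* Compound growth implied by "doubles every P months" over a decade.
         * Illustrative arithmetic only. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            const double years = 10.0;
            const double periods[] = { 18.0, 24.0 };    /* months per doubling */

            for (int i = 0; i < 2; i++) {
                double doublings = years * 12.0 / periods[i];
                printf("doubling every %2.0f months: about %.0fx after %.0f years\n",
                       periods[i], pow(2.0, doublings), years);
            }
            return 0;
        }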

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • Both statements are correct. The first is a necessary consequence of the second.

        False. One claims guarantees about something without having a causal relationship that would allow a guarantee, and where no physical law is involved that is symmetrical with the supposed guarantee. The other claim is a retrospective observation with no causal relationship required for it to be true.

        Just, pathetic analysis.

        • False. One claims guarantees about something without having a causal relationship that would allow a guarantee,

          Yes, in theory you're right: CPU makers could be spending the extra transistor budget on making nice pictures (not *computing* pictures with the CPU, but physically drawing the mug shot of the CEO on the silicon out of transistors) or on any other feature that has nothing to do with the computing performance of the CPUs.

          But in practice, such frivolities are extremely rare (e.g.: a tiny taunt, left for USSR reverse-engineers who would be decapping and analysing it) [fsu.edu].

          In practice, most of the

          • That would have been a reasonable argument 15 years ago before adding transistors stopped helping as much as it used to.

            Don't just wave your hands about "extra performance," you have to get into specifics or it is just hand-wavy nonsense. If you're only waving your hands, at least know what the mainstream beliefs in the industry are: that Moore's law broke down, years ago. The benefits started evaporating at the top end. At the bottom end, sure, a small processor is easy to make more powerful by adding tran

            • Let's not forget GPUs. They seem to be able to turn an extra billion transistors into real performance improvements far more effectively than CPUs. Good news as long as your workload is GPU-friendly.

              • Because "GPU" is a euphemistic description of a cluster of logic units with a programmable bus.

                Taking this further is going to require a big rethink in motherboards, and few are working on that these days.

                The problem is worse than even Linus realizes yet, because eventually some company is going to introduce a better MB, and change will bring short-term pain.

            • Don't just wave your hands about "extra performance," you have to get into specifics or it is just hand-wavy nonsense.

              The thing is, what counts as *more performance* has evolved over time.

              Nowadays, it's more cores and threads.

              Which also means...

              If you read what Linus is saying, he's pointing out what I'm just repeating; if application developers want to continue having their apps run faster over time, they're going to have to actually make the software more efficient because the hardware is no longer speeding up.

              The modern CPU will *STILL* provide more operations-per-second in the near future; they will keep performing better by that definition of performance.

              The problem is NOT that the CPUs will suddenly stop getting better, the problem is that *THE WAY* they will be getting better (in terms of ops) will NOT TRANSLATE into LEGACY CODE getting EXECUTED FASTER.

              New code will be needed to be rewr

              • You can't just learn to write parallel code; it is the algorithms that have to change, not the code. The majority of application developers aren't even in charge of architecture. And switching to horizontal algorithms is hard in many cases. Lots of moving parts.

                You can't rely on changes that big being effective. It is a crap shoot.

                I already have hardware AES that is fast; ChaCha or something isn't going to make a meaningful difference in a hardware-supported case. But that is a good example of an area where mi
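
                To make the "algorithms, not just code" point concrete, even a plain sum changes algorithm when parallelized: per-thread partial sums combined at the end instead of one chain of dependent additions. A minimal sketch, assuming a compiler with OpenMP support (cc -O2 -fopenmp):

                    /* Serial vs. parallel summation: the parallel version is really a
                     * different algorithm (per-thread partial sums combined at the end),
                     * which is what the reduction clause generates. */
                    #include <omp.h>
                    #include <stdio.h>
                    #include <stdlib.h>

                    int main(void)
                    {
                        const size_t n = 100 * 1000 * 1000;
                        double *a = malloc(n * sizeof *a);
                        if (!a)
                            return 1;
                        for (size_t i = 0; i < n; i++)
                            a[i] = 1.0 / (double)(i + 1);

                        double serial = 0.0;
                        for (size_t i = 0; i < n; i++)      /* one chain of dependent adds */
                            serial += a[i];

                        double parallel = 0.0;
                        #pragma omp parallel for reduction(+:parallel)
                        for (size_t i = 0; i < n; i++)      /* independent partial sums */
                            parallel += a[i];

                        printf("serial=%.6f parallel=%.6f threads=%d\n",
                               serial, parallel, omp_get_max_threads());
                        free(a);
                        return 0;
                    }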

          • So in practice, the extra silicon goes into extra performance and the first claim of the top poster is more or less correct,

            That's only true if "more or less" is synonymous with "not even close".

            Moore's Law only supports the idea that extra silicon tends to lead to improved performance to some extent.
            That has nothing to do with AC's claim that

            Moore's Law has guaranteed a doubling of hardware performance every 18 months for decades.

      • by Agripa ( 139780 )

        Both statements are correct. The first is a necessary consequence of the second.

        No, it is not.

        For some process generations, the cost per transistor halved even at the expense of significantly lower performance. The jump from bipolar to MOS was this way.

        Moore's law is only about the economics of increasing integration and this can come about through greater density, larger dies, or better packaging. It has nothing to do with performance.

    • by Anonymous Coward

      Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.

      Somewhat wrong. Moore's law was originally that observation, but Intel quickly took it as their mantra because it gave them a competitive edge: being the ones developing the technology kept them constantly on top. The aspect of doubling hardware performance every 18 months became the obsessive goal of Intel and later AMD because it was the only way to really finance said technological

    • by gtall ( 79522 )

      Thank you.

  • But as processor vendors approach the limits of Moore's Law, many developers will need to reoptimize their code to continue achieving increased performance. In many cases, that requirement will be a shock to many development teams that have counted on those performance improvements to make up for inefficient coding processes, he said.

    Take back the C language, some GOTO statement here and there, pointers, pointers, arrays, pointers and you're done.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Take back the C language, some GOTO statement here and there, pointers, pointers, arrays, pointers and you're done.

      The odds of the average webmonkey^H^H^H^H^H^Hdeveloper being able to cope with C are somewhere between zero and none. Those losers can't even write a Hello, world program without a dozen bloated frameworks.

    • Re: (Score:2, Insightful)

      Comment removed based on user account deletion
      • https://lmgtfy.com/?q=high+per... [lmgtfy.com]

        In my limited 35 years of computing experience, people like you cannot code either ... they just believe they can.

        • Most high performance Java implies you have a library implementation in ASM. That has always been true.

          • No it does not. Or do you really think computer scientists are so stupid as to call something "high performance X" when X is just a layer over ASM? That would not make any sense at all ...

            • Computer scientists, by definition, work in academia. They are a type of teacher.

              I'm an actual programmer. I really do not in any way actually care what academics choose to call something.

              My comment about how high performance Java libraries are actually written has been true all the way back to the 90s.

              You don't seem to comprehend what a library is. There is absolutely nothing about the concept of a library that suggests it must be written in the client language or else suffer being misnamed by you. And librari

              • My comment about how high performance Java libraries are actually written has been true all the way back to the 90s.
                Yeah, perhaps it was true in 1995.

                Now high performance Java is ... surprise surprise ... high performance Java.

                A no brainer actually.

                The rest of your post makes no sense to me. Unless you want to insult me, but then you failed.

                • Java is not self-hosting.

                  Just because you don't know what is in the box, doesn't make it 100% "Java."

                  • There are enough "self hosting" Java implementations.

                    And whether something is self-hosting or not has nothing to do with the topic.

                    The point is that Java is on par with C++ regarding high performance computing, only Fortran is faster (and I actually don't grasp why that is ... :P )

                    • Waving your hands doesn't cause the implementations to be self-hosting.

                      It seems to you that they are, because you always have the VM, and everything you need is included in the implementation. But that only implies you don't need to know the details, it doesn't imply that they didn't work hard, or that performance comes at the same time as the Java ideology. There is no purity in Java, it just looks that way to surface-level code monkeys. Because the VM hides anything ugly.

                      Fortran is only faster than C/C++

        • I believe that you're describing the Dunning-Kruger effect.

      • We need to switch to massively parallel computing. Have 1,000,000 cores each with its own memory and dedicated data paths to nearby cores. Develop and use optical computers for the stuff that can't be run as parallel tasks. Make better optimizing compilers to tune those "easy" interpreted languages into good code.
        • by gtall ( 79522 )

          So you want to turn people who cannot even write single-threaded code loose on multithreaded environments that will slice and dice their code? I'm guessing this isn't a recipe for success.

      • by godrik ( 1287354 )

        While there are performance problems with some languages, I do not believe that switching languages will make as much of a difference as you think.

        Python is a terrible language for performance (unless you use prebuilt C libraries and only do the plumbing in Python) because of the language itself. The typing system in Python prevents most optimizations; even very simple code can be unoptimizable by the compiler because of how the language is designed. I famously rewrote one of my colleagues' numerical code
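
        The escape hatch alluded to above (plumbing in Python, inner loops in C) usually boils down to a kernel like the following on the C side; the file name, function name and build line are only illustrative, and the resulting shared object could be loaded from Python with ctypes or wrapped with cffi:

            /* dot.c -- build with something like: cc -O3 -shared -fPIC dot.c -o libdot.so
             * A tight numerical kernel of the kind an interpreter can't optimize:
             * fixed types, contiguous memory, no per-element dispatch. */
            #include <stddef.h>

            double dot(const double *x, const double *y, size_t n)
            {
                double acc = 0.0;
                for (size_t i = 0; i < n; i++)
                    acc += x[i] * y[i];
                return acc;
            }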

      • by HiThere ( 15173 )

        Or improve the way compilers for, say, Java, optimize the language. There's no inherent reason that Java couldn't be optimized to the level of C code, if not of assembler. The language would need to change a bit, to allow primitive types to be used in more places, to allow reasonable manipulation of bit fields, to allow specification of the layout of bytes in memory, etc. but that needn't change the language drastically. It might, admittedly, require a drastic change of the libraries.

        But really, if a com
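
        For comparison, the kind of explicit layout control being asked for is routine in C; a small sketch (the struct and field names are purely illustrative, C11 for static_assert) of pinning down the exact bytes of a wire-format header:

            /* Explicit memory layout: exact field widths and byte offsets,
             * which is roughly what the comment says Java would need to grow. */
            #include <assert.h>
            #include <stddef.h>
            #include <stdint.h>
            #include <stdio.h>

            struct header {
                uint16_t length;      /* bytes 0-1 */
                uint8_t  version;     /* byte 2 */
                uint8_t  flags;       /* byte 3; bit-fields would also work here */
                uint32_t checksum;    /* bytes 4-7 */
            };

            int main(void)
            {
                static_assert(sizeof(struct header) == 8, "no unexpected padding");
                printf("checksum lives at byte offset %zu\n",
                       offsetof(struct header, checksum));
                return 0;
            }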

      • by gtall ( 79522 )

        The problem is that we HAVE coding schools. Learning to code, even well, lets you do squat on the really hard problems to which we put computing. Physics, math, chemistry, biology, psychiatry, etc. have no use for mere coders.

      • by Bengie ( 1121981 )
        You must know something that others don't because as far as I know, there is no objective way to measure how good someone is at novel problem solving, which is a fundamental requirement to test how good someone is at optimizations. Regurgitating rules of thumb aka "Best Practices" does not count. A simple example: I had a bulk SQL query that was taking a long time to run because the data model was very normalized and the tables were huge. My gut reaction was to rewrite it using correlated subqueries, but ev
    • Take back the C language, some GOTO statement here and there, pointers, pointers, arrays, pointers and you're done.

      There was a time, until the 1980s, when C was close to the way the hardware works. That hasn't been the case for decades. Nowadays it's just another high-level language running on top of a JIT (hyperthreading) that compiles to a VM (emulated x86-64 pseudo-machine code) running on top of the actual hardware.

      Want programmers to program at the actual underlying hardware level? Be prepared to throw away all of the above, and for a language that is massively parallel, with vectors as its main data type, and 128

  • by mark-t ( 151149 ) <markt.nerdflat@com> on Sunday June 30, 2019 @12:04PM (#58850054) Journal
    Moore's law was not about performance of computers, it was about the number of effective transistors per square unit of area on silicon. More transistors may allow some things to be done faster at the hardware level, but that's simply because more transistors can do more stuff in the first place... instead of having to, for instance, do some common function in software in assembly, a circuit might be designed to perform that particular function in hardware, and a cpu instruction added to benefit from the change. This is just an example, but it basically boils down to taking different approaches to produce performance improvements more than simply increasing the number of components on a chip. Using all of the same algorithms, area of silicon, and transistor counts, the only other thing that might significantly make a computer faster is having transistors that could switch states faster, and while transistors have gotten faster at switching since Moore made the infamous observation which bears his moniker, the rate at which they do so is entirely orthogonal to Moore's Law.
    • However, it implies smaller transistors, and smaller transistors are faster transistors, so performance scales faster than density.

      • by mark-t ( 151149 )
        Good point... so I misspoke about transistor switching speeds being orthogonal to Moore's Law then... but regardless, Moore's Law was not a projection about how performance would improve (except insomuch as smaller transistors may switch faster) as much as it was about how many transistors would fit into a square unit of silicon. Different design approaches taken over the years have resulted in far more of a difference in performance than transistor size.
    • and while transistors have gotten faster at switching since Moore made the infamous observation which bears his moniker, the rate at which they do so is entirely orthogonal to Moore's Law

      You've got this exactly backwards.

      And Moore's law was not misunderstood, it just happened to function equally well in a number of nearly parallel axes, concurrently, for a long time, so there was little need to be pedantic.

      It's completely standard in linguistics for one member of a bundle of related concepts to stand in for

      • by mark-t ( 151149 )
        Moore's Law is about transistor count, or transistor size, and while transistor size does affect switching speed which in turn will have a direct effect on computing performance (which I admitted I got wrong in response to another commenter above), far more performance improvement is gained by using the increased transistor count to improve versatility and doing things in hardware that may have been formerly done by firmware or software, and, in particular, this performance increase is *NOT* linearly connec
    • Moore's law was not about performance of computers, it was about the number of effective transistors per square unit of area on silicon.

      Moore's law is really about COST per capability and little else. Reductions in feature size were a significant driver yet by no means the only one.

  • Hyperthreading (Score:4, Interesting)

    by Spazmania ( 174582 ) on Sunday June 30, 2019 @12:07PM (#58850068) Homepage

    If you've enabled hyperthreading on a server you're probably doing it wrong even before you consider the security implications. Hyperthreading doubles the pressure on your tiny 32KB CPU L1 instruction cache. It's possible you've increased the total processing capacity of the server, but you have for certain reduced the throughput of any single thread, impairing the latency experienced by your users.
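
    A quick way to check those numbers on a given box is to ask glibc; a minimal sketch, noting that _SC_LEVEL1_ICACHE_SIZE is a glibc extension and is not reported on every system:

        /* Print online CPU count and L1 instruction cache size.
         * _SC_LEVEL1_ICACHE_SIZE is a glibc extension; it may return 0 or -1
         * on toolchains or kernels that don't expose it. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long cpus = sysconf(_SC_NPROCESSORS_ONLN);
            long l1i  = sysconf(_SC_LEVEL1_ICACHE_SIZE);

            printf("online CPUs: %ld\n", cpus);
            if (l1i > 0)
                printf("L1 i-cache:  %ld KiB per core (shared by SMT siblings)\n",
                       l1i / 1024);
            else
                printf("L1 i-cache:  not reported; see /sys/devices/system/cpu/cpu0/cache/\n");
            return 0;
        }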

    • Re:Hyperthreading (Score:5, Informative)

      by Misagon ( 1135 ) on Sunday June 30, 2019 @01:01PM (#58850348)

      No. There are server workloads where hyperthreading has been shown to fit best.
      That is why some IBM POWER8 and POWER9 server CPUs have up to eight threads per core.

    • by GuB-42 ( 2483988 )

      Hyperthreading results in improved performance on most workloads.
      It may be detrimental in some rare cases, and cache pressure may or may not be the cause. But clearly, that's something you need to test before blindly following some advice.

      I don't really buy the increased latency argument unless you have hard real-time constraints. We actually have a server with HT disabled for that reason. And while it is good for running avionics simulation reliably (its purpose), it runs like crap for pretty much every ot

      • by Bert64 ( 520050 )

        The reason being that most code is compiled for a generic CPU; it's not optimized for the specific model and cache/memory sizes and latencies you have...
        The ideal case is code optimized for your exact hardware, which knows exactly how big the caches are and how slowly memory responds so it can keep the CPU optimally busy at all times.
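
        In practice, "knowing how big the caches are" mostly shows up as blocking/tiling. A sketch of a cache-blocked transpose, where the block size of 64 is just a placeholder you would tune to the real L1/L2 sizes on your hardware:

            /* Cache-blocked matrix transpose: a block of the source and a block
             * of the destination are meant to fit in cache together. B = 64 is a
             * placeholder, not a measured optimum. */
            #include <stddef.h>

            #define B 64

            void transpose_blocked(double *dst, const double *src, size_t n)
            {
                for (size_t ii = 0; ii < n; ii += B)
                    for (size_t jj = 0; jj < n; jj += B)
                        for (size_t i = ii; i < ii + B && i < n; i++)
                            for (size_t j = jj; j < jj + B && j < n; j++)
                                dst[j * n + i] = src[i * n + j];
            }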

    • by thegarbz ( 1787294 ) on Sunday June 30, 2019 @01:25PM (#58850438)

      but you have for certain reduced the throughput of any single thread

      Please wait... you are number 37589 in the queue to get content being served by this server's single thread.

  • Hipster language (Score:3, Insightful)

    by Anonymous Coward on Sunday June 30, 2019 @12:10PM (#58850084)

    Ninety percent of the problem with modern software perf appears to be due to hipster languages.

    First, the languages themselves are very slow.

    Second, people weaned on them have NO IDEA what is happening at the hardware level, because they are so far removed from it. It's also common for them to lack much understanding of time complexities.

    Native compiled languages and learning the fundamentals are where it's at. You cannot teach a whole generation of coders by skipping the fundamentals, because they never learn a basis of reality.

    Net effect: we now use megabytes where kilobytes would do, and a billion machine cycles where a million would do.

    • Agreed.
      I've actually had someone spend time rewriting a C routine in Python because it was a "better" language - ending up with something many times slower both to load and to execute. This was a file-in, file-out routine, no user interface.

      Modern languages are great - for complex projects where performance is not an issue, but they can be extremely slow for computational stuff.

  • I have several SDR receivers that work in Linux without needing a kernel module: an AirspyHF+, a HackRF One, an RTL-SDR, and an SDRplay. They all have shared object libraries but don't require a kernel module in order to function. I think more peripherals will have to go that route in the future, with the shared object libraries being GNU/GPL FOSS so they can easily be built on any Linux distro. The Airspy, HackRF and RTL-SDR have kernel modules, but I have to blacklist them in /etc/modprobe.d because they tend
  • by KiloByte ( 825081 ) on Sunday June 30, 2019 @12:14PM (#58850108)

    It's not just software that fails to deliver: even interconnects within a CPU often can't handle a many-core setup. One workload from a personal project of mine involves simultaneously decompressing 1292 xz files (750MB total, 2.5GB uncompressed). On a 64-way 2990WX a -j1 run takes around 11 times as long as -j64 -- with the workload being nearly perfectly parallelizable you'd expect a near-64-times difference. As far as I can tell, the bottleneck here is ridiculously bad L3 cache bandwidth; Intel boxes have nowhere near this bad a penalty. Of course, a reader of Slashdot can cite screw-ups from Intel elsewhere. Chip design is hard.

    And when it comes to writing software... I feel so short [bholley.net] these days.

    • by godrik ( 1287354 )

      That's a classic problem you are pointing at. When you add cores, you add flops/iops per second, but you don't add bytes per second. Depending on your use case, you can have a problem that is core-bound on one core but memory-bound on a multicore system.
      It is a fundamental design problem: all resources are expensive, and most applications do not saturate the memory bus, they saturate instruction issue. So architectures have been built to increase instruction bandwidth rather than memory bandwidth. You typically satur
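
      A quick way to see that effect is a STREAM-style triad: almost no arithmetic per byte moved, so adding threads stops helping once the memory bus is saturated. A rough sketch, assuming OpenMP (cc -O2 -fopenmp):

          /* STREAM-style triad: so little arithmetic per byte that the memory
           * bus, not the core count, sets the ceiling. */
          #include <omp.h>
          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              const size_t n = 50 * 1000 * 1000;
              double *a = malloc(n * sizeof *a);
              double *b = malloc(n * sizeof *b);
              double *c = malloc(n * sizeof *c);
              if (!a || !b || !c)
                  return 1;
              for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

              double t0 = omp_get_wtime();
              #pragma omp parallel for
              for (size_t i = 0; i < n; i++)
                  a[i] = b[i] + 3.0 * c[i];
              double t1 = omp_get_wtime();

              /* three 8-byte accesses per iteration actually hit memory */
              printf("threads=%d  effective bandwidth ~ %.1f GB/s\n",
                     omp_get_max_threads(), 3.0 * 8.0 * (double)n / (t1 - t0) / 1e9);
              free(a); free(b); free(c);
              return 0;
          }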

    • by amorsen ( 7485 )

      Have you tried doing it on tmpfs?

      Also, xz supports threads. Have you checked that only one CPU is used when you run -j1?

      L3 cache bandwidth should not be a problem. I would be surprised if xz benefited much from anything beyond L1 cache. I would be interested in your absolute performance numbers, that could give a hint about where the bottleneck is.

      • > Have you tried doing it on tmpfs?

        Which is RAM based, not real storage media based. Many cases where tmpfs optimizes things can be better replaced by doing all the work directly in memory.

        • There's no big difference between tmpfs and page cache. Files you fetch from the network go into the page cache, then the system slowly starts writeout. Stuff I uncompress also goes into the page cache, and I don't wait for writeout before reporting all-done. The user can then sync if he/she wants.

          • The difference is large enough that other processes can write to files in a tmpfs, or read from them directly. It's also large enough that files left behind in tmpfs can keep memory occupied even after the original process has failed and closed all of its file descriptors. If your system resources, especially RAM, are generous, you may not notice difficulties with such debris. If your resources are precious, the distinctions can cause quite a lot of difficulty.

            • They can also write to files in the page cache, whether or not those files have gone through writeout yet, or read them until evicted. And if you have swap configured (which most distros set up by default), tmpfs pages get swapped out if there's memory pressure.

              • Overwhelming tmpfs with content, such as overflowing "/tmp/", can cause SSH connections to be impossible to start for non-root logins. (This happened to a colleague last week.)

  • Comment removed based on user account deletion
    • by Simon Rowe ( 1206316 ) on Sunday June 30, 2019 @12:18PM (#58850128)
      All computer graduates should be made to write code to run on 8-bit micros. Being limited to 64KB with simple instruction sets like the 6502 focuses the mind.
      • And who is going to teach them?
        • And who is going to teach them?

          If they can't learn from the manuals, the subject is already too technical for them.

        • Teach them? Pfft. We had to learn ourselves. At least these days you have emulators that load instantaneously. Back in the day it was load from tape, if you had remembered to save. ... lawn, ... mumble ...
          • Hey you Kids - get off my 8-bit lawn! You 64-bit do-nothin's wouldn't know a bit from a byte if 64k of them were stuffed up your cache. When I was your age, I could toggle in an octal boot loader in a lot less than 12 parsecs. We had to learn through trial-and-error, not these wiz-bang web documents you snot-nosed whipper-snappers fling around like Arianna urls. Btw, pass the paper-tape reader, would ya? I need to flush my data buffer.

        • by tepples ( 727027 )

          forums.nesdev.com seems to be patient enough at teaching people how to program a 6502.

      • by sgage ( 109086 )

        "All computer graduates should be made to write code to run on 8-bit micros. Being limited to 64KB with simple instruction sets like the 6502 focuses the mind."

        My first computer job was programming 6502 assembler back in 1980. But I really hit my stride with 8080/Z80 assembler a bit later. Good ol' CP/M ;-)

        • by Kaenneth ( 82978 )

          Lucky you with an Assembler, I still remember LDA 0 as "POKE 49152,169" "POKE 49153,0"

          was able to save up for a tape drive eventually!

        • But I really hit my stride with 8080/Z80 assembler a bit later.

          Did you ever figure out how to access struct fields efficiently on an 8080? I ask because I've taken up programming for the Game Boy, whose LR35902 SoC contains a Sharp SM83 core. The SM83 is mostly an 8080 but with some but not all Z80 extensions. Its opcode table [github.io] has the bit manipulation (CB) set but not the IX or IY sets. One

          - On the Z80, you'd put a pointer to the base of the struct into IX and use IX+offset addressing.
          - On the 6502, you'd use "structure of arrays" paradigm, stripe each struct field (or

      • I do a lot of 8bit programming on micros, and the biggest I use is 32K. Usually less.

        64K with only 8 bits is actually pretty roomy. For most of the ones that big, you can get a 32-bit ARM-based part with 256K or something for the same price.

        Usually 64K 8bit means it started smaller, but got bloated beyond original expectations. If they had known it was going to happen, they would have gone for 16 or 32 bit.

        • by tepples ( 727027 )

          Usually 64K 8bit means it started smaller, but got bloated beyond original expectations.

          Such as the Apple IIe following the Apple II, the Commodore 64 following the VIC-20, the ZX Spectrum 128K following the 48K, and the NES with mapper chips following the NES with only NROM (32K program and 8K character generator ROMs).

      • Comment removed based on user account deletion
  • That software development needs to catch up with hardware development was nicely presented by Jonathan Blow [youtube.com].

  • by Anonymous Coward

    If we can eliminate an extremely expensive nasty class of security vulnerabilities for a mere 15% performance penalty then we should just bite the bullet and do that. It will be cheaper to buy another 20% compute capacity than to keep mitigating those vulnerabilities and pay for the resulting cleanup when someone doesn't.
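
    The arithmetic behind that trade-off, taking the 15 percent figure from the summary at face value and assuming the workload spreads cleanly across extra machines:

        /* If mitigations cost a fraction p of throughput, breaking even takes
         * 1/(1-p) - 1 extra capacity; at p = 0.15 that is about 17.6 percent,
         * comfortably under the 20 percent suggested above. */
        #include <stdio.h>

        int main(void)
        {
            double p = 0.15;                        /* throughput lost to mitigations */
            double extra = 1.0 / (1.0 - p) - 1.0;   /* ~0.176 */
            printf("extra capacity needed: %.1f%%\n", extra * 100.0);
            return 0;
        }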

  • When this all first started I resolved the issue by disabling hyperthreading. As a result I'm ahead of any new flaws found in the speculative execution model. I think we all just need to accept that we don't actually have the performance benefits of hyperthreading if it's insecure. And, regardless of how many times it is patched, it is always going to be insecure.

  • Costs (Score:5, Interesting)

    by michael_cain ( 66650 ) on Sunday June 30, 2019 @01:08PM (#58850366) Journal
    Does anyone have an idea about how the costs would compare for a two-core chip where the cores support hyperthreading, and a four-core chip where the cores do not? Not hyperthreading disabled, but cores designed without hyperthreading at all. Ditto speculative execution, I suppose.
    • I searched a bit on the Intel website (use advanced search). For example:

      https://ark.intel.com/content/... [intel.com] (2c/4t, $161)

      https://ark.intel.com/content/... [intel.com] (4c/4t, $131)

      They are not exactly the same otherwise (both mobile, but different packages and clock speeds), but I'd expect that you can find a better apples-to-apples comparison if you search harder. Intel product names are impenetrable, unfortunately.

  • by Anonymous Coward

    It still is probably cheaper to disable hyperthreading and buy 50% more CPUs than it is to recode most stuff to make it more efficient. Sad but true. I write exclusively in C and find it pretty lonely. I meet very few programmers interested in optimizing.

  • Disabling hyperthreading doesn't disable speculative execution. Speculative execution is a fundamental feature which, along with branch prediction, enables modern out-of-order CPUs to have deep pipelines (and therefore high throughput). It can't be disabled.
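
    The textbook way to see how much that machinery buys is to branch on sorted versus unsorted data: identical instructions, very different speed, because the predictor (and the speculation it feeds) wins nearly every branch in one case and loses constantly in the other. A rough sketch; exact numbers will vary, and the compiler must not turn the branch into a conditional move (cc -O1 is usually enough):

        /* Same loop, same data, different order: branch prediction (and the
         * speculative execution it feeds) makes the sorted pass much faster. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        static int cmp(const void *a, const void *b)
        {
            return *(const unsigned char *)a - *(const unsigned char *)b;
        }

        static double time_pass(const unsigned char *v, size_t n)
        {
            volatile long sum = 0;
            clock_t t0 = clock();
            for (int rep = 0; rep < 100; rep++)
                for (size_t i = 0; i < n; i++)
                    if (v[i] >= 128)              /* unpredictable unless sorted */
                        sum += v[i];
            (void)sum;
            return (double)(clock() - t0) / CLOCKS_PER_SEC;
        }

        int main(void)
        {
            const size_t n = 1 << 20;
            unsigned char *v = malloc(n);
            if (!v)
                return 1;
            for (size_t i = 0; i < n; i++)
                v[i] = (unsigned char)rand();

            printf("unsorted: %.3fs\n", time_pass(v, n));
            qsort(v, n, 1, cmp);
            printf("sorted:   %.3fs\n", time_pass(v, n));
            free(v);
            return 0;
        }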

    • by iggymanz ( 596061 ) on Sunday June 30, 2019 @02:37PM (#58850760)

      Disabling hyperthreading does remove many of the speculative execution vulnerabilities, though not all of them; the buffering/caching system for it is a big part of the problem.

    • by Anonymous Coward

      Speculative execution can be thought of as fucking someone up the ass and blowing your load before you know what sex they are. If they turn out not to be the sex you prefer, then you use some kleenex and an enema to clean out the "wrong path". If they turn out to be the sex you prefer, then there is no need for kleenex and an enema because the goo is already in the correct hole.

      Hyperthreading is a truck having two (or more) people inside and comes in two forms. In the normal Intel form you have one dangly

  • Comment removed based on user account deletion
  • by drnb ( 2434720 ) on Sunday June 30, 2019 @03:42PM (#58851098)

    can require painful updates to the kernel

    Well, that's the cost of a monolithic kernel; he should have gone microkernel. :-)

    • by gtall ( 79522 )

      Not really, a micro-kernel now has a lot of satellite "services" running around. They need to be coded just as well. And then there is the scheduling of those things which hyperthreading is still going to impact.

      • by drnb ( 2434720 )
        The point is that the code is more manageable when creating it or maintaining it, hence less painful.
  • DevOps (Score:3, Insightful)

    by Darinbob ( 1142669 ) on Sunday June 30, 2019 @03:44PM (#58851112)

    Can we just stop using the "DevOps" term? It's a stupid idea anyway, mostly a way to make employees do two jobs in order to hire fewer people and it only applies to a very narrow range of activities. Continuous rollout to customers is a bad idea anyway, even for web sites.

    • by gtall ( 79522 )

      Ah, but it makes PHBs feel so much happier because they get to point at "deliverables" to answer questions about whether the project is succeeding. Building a dirty snowball and calling it software just because you can roll out smaller dirty snowballs as time progresses isn't progress. It also makes integration a nightmare. Large software projects are just hard; no amount of management voodoo is going to fix that.

    • by uncqual ( 836337 )

      Hey, it worked well for Boeing.

  • The good news is that there are nearly untold opportunities to improve the performance of much software that was thrown together, often from scraps of poorly matched open source projects forced into roles they were never optimized or intended for, in haste to get a "product" (something we used to call a "prototype" and threw away before starting to code Release 1.0) out the door without any attention to tuning, let alone optimization.

    Few people do the math -- I once worked at a company that had thousands o
