
GCC 4.0 Preview 684

Reducer2001 writes "News.com is running a story previewing GCC 4.0. A quote from the article says, '(included will be) technology to compile programs written in Fortran 95, an updated version of a decades-old programming language still popular for scientific and technical tasks, Henderson said. And software written in the C++ programming language should run faster--"shockingly better" in a few cases.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • OpenMP? (Score:3, Interesting)

    by Anonymous Coward on Monday March 14, 2005 @06:02PM (#11937955)
    What I'd like to see is features like OpenMP for thread-level parallelism.
    • Re:OpenMP? (Score:5, Informative)

      by Anonymous Coward on Monday March 14, 2005 @06:06PM (#11938000)
      Sun has a nice summary of OpenMP here [sun.com]

      It's pretty cool. You write a loop like this:

      #pragma omp parallel for reduction(+: sum)
      for (ii = 0; ii < n; ii++) {
          sum = sum + some_complex_long_function(a[ii]);
      }
      and the compiler will handle the creation and synchronization of all the threads for you. Here's an OpenMP for GCC [nongnu.org] project on the FSF site. Looks like it's still in the "planning" stage, though, so I'm guessing it's not in GCC 4.X.
      • Re:OpenMP? (Score:5, Informative)

        by multipart ( 732754 ) on Monday March 14, 2005 @06:18PM (#11938157)
        We're working on the necessary infrastructure to associate the pragmas with the syntactic constructs they apply to. Actually parsing the OpenMP directives was already implemented - twice - but GCC does not support pragmas with a lexical context yet. This is needed for a bunch of C extensions, so we're working on that. This is probably GCC 4.1 material. After that, actually generating concurrent code from OpenMP pragmas is next.
  • Is it just me, or is compiling C++ code an order of magnitude slower than compiling C code? (exaggeration) I'm sure there's a very good reason why this is so, but it still doesn't make me happy.
    • It does take longer to compile C++. The solution to this is to keep Slashdot open in a browser. Back in the days before Slashdot, when compiling took even longer, programmers actually used to go ape-shit watching the compiler. We live in wonderful times.
    • by KhaZ ( 160984 ) on Monday March 14, 2005 @06:41PM (#11938392) Homepage
      Maybe someone's already said this, but look into three projects to speed up your compile:

      1) make (or some equiv). Yes, I said make.

      GNU make accepts a -j parameter, to run build jobs in parallel. Only really useful on hyperthreaded or multiprocessor boxes, however. That said, if you use:

      2) http://distcc.samba.org/ [samba.org]: distcc. It lets you distribute compilation of your apps across other machines with a similar setup. Only really helpful if you have more than one box.

      3) http://ccache.samba.org/ [samba.org]: ccache. This is a compiler cache for C/C++. Only really useful for iterative development, and if you're doing a lot of make clean/make, as it'll cache results that don't need to be rebuilt.

      Just some suggestions. Also, check out prelink, to prelink anything using shared libraries (trading disk space for performance) and make startup code run faster in some cases.
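A sketch of how the three combine (hypothetical host names; assumes ccache and distcc are installed):

```make
# Makefile fragment: ccache checks its cache first; on a miss it hands
# the compile to distcc, which farms it out over the network.
CC := ccache distcc gcc

# Then build with several jobs in flight, e.g.:
#   DISTCC_HOSTS="localhost fastbox" make -j4
```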

      Hope that helps!

      ++Informative? Pwetty pwease?
    • by captaineo ( 87164 ) on Monday March 14, 2005 @09:41PM (#11939933)
      This has more to do with the habits of C++ programmers than with the language itself. If you take a random piece of C code and compile it as C++, it will probably take no more than 2-3x as long (the slowdown being due to a larger compiler binary, more sophisticated type-checking, etc.). However, what is often considered "good C++ programming style" involves inlining far more code than is the norm for C (e.g. some STL implementations are entirely inline, whereas it would take a pretty crazy C programmer to implement hash tables and heaps inline). That's what blows up the compile time (and binary size).

      The extra compile time buys you more inlining (which can be either good or bad for performance, depending on cache behavior) and also type-safe templates, which are not achievable in C (without ugly hacks).
  • watch out (Score:5, Funny)

    by fanblade ( 863089 ) on Monday March 14, 2005 @06:03PM (#11937970) Journal
    "...software written in the C++ programming language should run faster--..."

    Is this the programmer's way of saying it will run at some speed less than faster?
  • C++ compiler (Score:5, Insightful)

    by pchan- ( 118053 ) on Monday March 14, 2005 @06:03PM (#11937971) Journal
    But will it compile C++ any faster? The difference between compile times of C and C++ files is staggering. Compiling Qt/KDE takes forever with gcc 3.x.
    • I'm so sorry, ... (Score:5, Informative)

      by kompiluj ( 677438 ) on Monday March 14, 2005 @06:09PM (#11938037)
      but the reason it takes forever to compile KDE lies in the fact that it uses templates extensively. While templates (a.k.a. generics) are a very useful language feature, they increase compile times. Support for the export template feature could help, but only if anybody used it in their code.
      You can run an experiment and try compiling KDE with the Intel C++ or Comeau C++ compilers; you'll see that not much is gained compared to GCC.
    • Re:C++ compiler (Score:5, Informative)

      by bill_mcgonigle ( 4333 ) * on Monday March 14, 2005 @06:23PM (#11938201) Homepage Journal
      But will it compile C++ any faster?

      Yes, from here [apple.com]: "GCC 4.0 features an entirely new C++ parser. The new parser is tremendously faster than the one in GCC 3.3 and will have a noticeable benefit when building C++ based projects."
    • Re:C++ compiler (Score:5, Interesting)

      by Surt ( 22457 ) on Monday March 14, 2005 @06:27PM (#11938250) Homepage Journal
      It claims the C++ front end is as much as 25% faster.
    • Re:C++ compiler (Score:5, Informative)

      by Anonymous Coward on Monday March 14, 2005 @06:42PM (#11938401)
      Apparently. Found via google:

      http://people.redhat.com/bkoz/benchmarks/

      Doesn't look public though.
    • Re:C++ compiler (Score:4, Interesting)

      by vsprintf ( 579676 ) on Monday March 14, 2005 @07:07PM (#11938670)

      But will it compile C++ any faster?

      I don't care if it compiles any faster, just as long as it compiles correctly. We were in the middle of a port of a major system to Linux recently, and the sysadmins decided we really need to install some patches. I shoulda' known better. I shoulda' said no.

      They applied the Red Hat AS patches (which included patches to gcc) on the target machine, and suddenly newly compiled programs that had been working for years had memory overwrite problems. Strings and char arrays would contain things that should be in adjacent memory. The most obvious difference was the newly compiled code was much smaller than that produced by the unpatched gcc.

      Luckily, we had another Red Hat AS machine which had not been patched, and I moved all the development work there. Then I promised the admins that I'd go postal if they touched gcc on that box. So far, so good, but I'd really appreciate it if the gcc guys would get it right before releasing stuff. One of the promised results of the above mentioned patch was a significant reduction in size. They got that part right at least.

  • Mudflap (Score:5, Insightful)

    by SteelV ( 839704 ) on Monday March 14, 2005 @06:05PM (#11937990)
    "GCC 4.0 also introduces a security feature called Mudflap, which adds extra features to the compiled program that check for a class of vulnerabilities called buffer overruns, Mitchell said. Mudflap slows a program's performance, so it's expected to be used chiefly in test versions, then switched off for finished products." - from the article

    I really love this feature; it should cut down on a great many problems. My only concern is that some devs will think running it all the time is OK (read: "Mudflap slows a program's performance"), so hopefully that won't be the case.

    More detailed information on the mudflap system can be found here [gnu.org].
    • Re:Mudflap (Score:3, Insightful)

      My only concern is that some devs will think running it all the time is OK

      For some users and some classes of applications, it will be OK. Sometimes security is more important than performance, and you can't imagine the weird stuff your code sees when it's in the customers' hands.
    • Re:Mudflap (Score:4, Interesting)

      by idlake ( 850372 ) on Monday March 14, 2005 @06:34PM (#11938316)
      My only concern is that some devs will think running it all the time is OK (read: "Mudflap slows a program's performance"), so hopefully that's not the case.

      I'll agree with you on this much: C+Mudflap is not the way to fix buffer overrun problems. The problem isn't that runtime safety is costly--it isn't--the problem is that adding runtime safety to the C programming language post hoc is costly because of C's screwed up pointer semantics. That's why Mudflap costs you a factor of 3-5 in terms of performance on benchmarks, when runtime safety in another language really should only cost you a few percent overhead at most.

      Mudflap will probably not be used much for testing (people already have good tools for that they don't use) and it has too much overhead for most production use. The biggest thing Mudflap will do is perpetuate the myth that runtime safety is costly.
    • quote... (Score:5, Funny)

      by Cryptnotic ( 154382 ) * on Monday March 14, 2005 @07:00PM (#11938593)
      "They that can give up high performance to obtain a little temporary security deserve neither performance nor security."

      --not Benjamin Franklin

  • Autovectorization (Score:5, Informative)

    by DianeOfTheMoon ( 863143 ) on Monday March 14, 2005 @06:06PM (#11938002)
    One optimization that likely will be introduced in GCC 4.1 is called autovectorization, said Richard Henderson, a Red Hat employee and GCC core programmer. That feature economizes processor operations by finding areas in software in which a single instruction can be applied to multiple data elements--something handy for everything from video games to supercomputing.
    Is it just me, or is this the first "we will make it easy to program the Cell" step that Sony and IBM were promising?
  • by Anonymous Coward on Monday March 14, 2005 @06:06PM (#11938004)
    Screenshots, screenshots! I need screenshots people!!!
  • boost, please ? (Score:4, Interesting)

    by savuporo ( 658486 ) on Monday March 14, 2005 @06:07PM (#11938011)
    Can we get Boost [boost.org] in standard library please ?
    • Re:boost, please ? (Score:3, Informative)

      by yamla ( 136560 )
      I'd love for boost to be in the standard library, but I'm not sure that complaining to the gcc folks is the way to get this done. Surely if we want this in the standard library, it should be included as part of the next version of the ISO C++ standard?
    • Re:boost, please ? (Score:5, Informative)

      by devphil ( 51341 ) on Monday March 14, 2005 @06:18PM (#11938144) Homepage


      What does GCC have to do with this?

      If you want something added to the standard, talk to the C++ standard committee. (Either the Library or the Evolution groups, in this case.) You'll find you're about the 10,000th person to ask for this. You'll find there's an extensive FAQ on this exact subject. You'll find that the committee is very keen on adapting large parts of Boost, as experience in the real world smooths the rough edges of Boost.

      If you look a bit more, you'll find that some extensions have already been adopted (called "TR1") and are being shipped with GCC 4.0.

      You'll also find that GCC does not get to determine what's in the standard. And -- speaking as one of the libstdc++ maintainers, although I'm largely too busy to do much myself these days -- GCC will not ship Boost. Or glibc. Or libAPR. Or OpenSSL. Or any of the other million very useful open source libraries out there, because that's not our job.

  • by Zapman ( 2662 ) on Monday March 14, 2005 @06:07PM (#11938019)
    And how many times will they break ABI, API and library compatibility in THIS major release? The count stands at 4 for the 3.x series, maybe higher.

    The biggest challenge with binary compatibility across Linux distros is the GCC release (followed by the glibc releases, which live in the same ivory tower). I realize that things have to change, but I wish they would not break compat between versions quite so often...

    I'd really like to be able to take a binary between versions and have it just work.

    This is one area where Sun rocks. Any binary from any Solaris 2 build will just work on any later version. With some libraries, you can go back to the SunOS days (4.1.4, 4.1.3UL, etc.). That's 15 years or so.
    • You're talking about C++ binary compatibility, right? I don't think that GCC has broken C binary compatibility in a long time...

      Can you run C++ applications compiled on Solaris 2 on any later version?

      Compatibility is where Sun rocks, and it's also the rock that Sun is tied to. Most of the things that people hate about Solaris are kept that way because of their commitment to backwards compatibility. It becomes difficult to make significant changes if you focus on compatibility the way they do.

      Linux and ot
      • by tap ( 18562 ) on Monday March 14, 2005 @06:52PM (#11938519) Homepage
        C binary compatibility is broken constantly, with every version of glibc. Anything compiled statically will crash in NSS if the system has a slightly different glibc version. If you compile dynamically, then anyone who doesn't have this week's version of glibc can't run your binaries.
    • YES this is a huge problem. More than half of my Linux troubleshooting time can be traced back to version skew issues in either GCC or GLIBC. (libstdc++ changes, pthreads changes, exception handling changes, etc...)

      Now that the C++ ABI is standardized, there is NO excuse for not having backwards- and forwards- compatibility for ordinary C and C++ executables linked against glibc.

      The Linux kernel v2 ABI has been mostly backwards- and forwards-compatible since its first release. And Linux kernel guts change
    • Ahem. (Score:5, Informative)

      by devphil ( 51341 ) on Monday March 14, 2005 @06:36PM (#11938340) Homepage


      I realize that things have to change, but I wish that they would not break compat between versions quite so often...

      Have you tried maintaining a compiler used in as many situations as GCC? (If not, you should try, before making complaints like this. It's an educational experience.)

      We added a "select ABI version" option to the C++ front end in the 3.x series. If you need bug-for-bug compatibility, you can have it.

      I'd really like to be able to take a binary between versions, and it just work.

      Wanna know when this is gonna happen? Sooner, if you help [gnu.org].

      • Re:Ahem. (Score:3, Insightful)

        by marcelk ( 224477 )
        I realize that things have to change, but I wish that they would not break compat between versions quite so often...


        Have you tried maintaining a compiler used in as many situations as GCC? (If not, you should try, before making complaints like this. It's an educational experience.)


        This is exactly the ivory-tower thinking that the poster is complaining about. You are overestimating the maintenance cost and underestimating the pain for your users. This is typical for open source: think that what is

  • by kharchenko ( 303729 ) on Monday March 14, 2005 @06:09PM (#11938043)
    I wish the compiler would output sane error messages when compiling code that uses a lot of templates (i.e. the STL). At least fixing it so that line numbers are shown during debugging would be a huge improvement!
  • by devphil ( 51341 ) on Monday March 14, 2005 @06:11PM (#11938069) Homepage


    It's not too much of a stretch to say GCC is as central an enabler to the free and open-source programming movements as a free press is to democracy.
  • by gvc ( 167165 ) on Monday March 14, 2005 @06:16PM (#11938133)
    The gcc team seem to have no respect for legacy code. Incompatible syntax changes and incompatible dynamic libraries make me dread every new release.
    • by ari_j ( 90255 ) on Monday March 14, 2005 @06:24PM (#11938206)
      It's been my experience that they only have a lack of respect for incorrect code. If your legacy code is incorrectly-written, then you assumed the risk to begin with, says me. Write to the standard.
    • by devphil ( 51341 ) on Monday March 14, 2005 @06:45PM (#11938427) Homepage


      The gcc team seem to have no respect for legacy code.

      You've got to be fucking kidding me.

      Have a look at the mailing list anytime somebody reports a bug, and the choice is between fixing the bug and changing the ABI. Watch the flamefests erupt.

      (Watch them die down a few days later as one of the brilliant core maintainers manages to do both, with a command-line option to toggle between the default fixed version and the buggy old version.)

      Wait a few months. See a weird new corner-case bug come in. Lather, rinse, repeat.

      Incompatible syntax changes

      Such as...?

      All the ones I can think of were GCC extensions long before they were officially added to the languages. In fact, their presence in GCC actually influences their presence in an official language standard, because that's what the standards bodies do: standardize existing practice.

      The troublesome part is when the syntax as added to the language standard differs from the extension that was originally put in GCC. Then we have to choose which one to support -- because supporting both is often not feasible -- knowing that whatever choice we make, slashdot is going to whinge about it. :-)

      • by tap ( 18562 )

        Incompatible syntax changes

        Such as...?

        For inline assembly code, non-lvalue operands can no longer be given an "m" constraint. It used to be possible to have an operand like (x+1) and use the most general constraint "m", register or memory. This way gcc could leave x+1 in a register, or spill it onto the stack if it ran out of registers.

        In gcc 4 you have to define a variable to hold x+1 and gcc is forced to write the value into memory, even if it could be left in a register.

  • From what I've heard (Score:4, Informative)

    by Mad Merlin ( 837387 ) on Monday March 14, 2005 @06:18PM (#11938146) Homepage
    GCC 4.0 apparently does compile things quite a bit quicker, C++ in particular. This should be a nice boost for anybody who compiles KDE and such for themselves.

    If you're interested, here's a (long) discussion [gentoo.org] which makes reference to many of the things coming in the new GCC.

  • Compiler (Score:3, Informative)

    by linguae ( 763922 ) on Monday March 14, 2005 @06:18PM (#11938153)
    Almost all open-source software is built with GCC, a compiler that converts a program's source code--the commands written by humans in high-level languages such as C--into the binary instructions a computer understands.

    Don't all compilers convert a program's source code into binary instructions?

    • Re:Compiler (Score:4, Funny)

      by That's Unpossible! ( 722232 ) * on Monday March 14, 2005 @06:55PM (#11938547)
      Don't all compilers convert a program's source code into binary instructions?

      Nope.

      Oh, did you mean all SOURCE CODE compilers?

      See, the word compiler was around before computers, and is only synonymous with "source code compiler" to geeks like us.

      Therefore in your attempt to be pedantic, you clearly were not being pedantic enough, thus the joke is on you.

      Ha-ha...
  • by uujjj ( 752925 ) on Monday March 14, 2005 @06:22PM (#11938184)
    Can't wait!

    (I'm especially excited by the possibility of random compiler incompatibilities!)

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday March 14, 2005 @06:48PM (#11938464) Homepage Journal
    There was an article not too long ago on a fabled isle of Techno-Guruism called Slashdot, in which someone benchmarked GCC 4 against GCC 3. GCC 4 produced slower binaries, in many cases. I'd want to know if those issues were REALLY fixed, before I became confident in the new technology.


    GCC is an incredibly versatile compiler, with frontends for C, C++, Java, Ada and Fortran provided with the basic install. Third-party extensions include (but are probably not limited to) Pascal, D and PL/I(!!), and I'm pretty sure there are COBOL frontends, too.


    They did drop CHILL (a telecoms language), which might have been useful now that telecoms are taking Linux and Open Source very seriously. As nobody seems to have picked it up, dusted it off, and forward-ported it to modern GCCs, I think it's a safe bet that not even those interested in computer arcana are terribly interested in CHILL.


    OpenMP has been discussed on and off for ages, but another poster here has implied that design and development is underway. OpenMP is a hybrid parallel architecture, mixing compiler optimizations and libraries, but I'm not completely convinced by the approach. There are just too many ways to build parallel systems and therefore too many unknowns for a static compile to work well in the general case.


    Finally, the sheer size and complexity of GCC makes bugs almost inevitable. It provides some bounds checking (via mudflap), and there are other validation and testing suites. It might be worth doing a thorough audit of GCC at this point, so that the 4.x series can concentrate on improvements and refinements.

  • by Qwavel ( 733416 ) on Monday March 14, 2005 @06:55PM (#11938553)
    At the GCC conference in Ottawa in the summer of 2003, there were two very interesting features presented that they said might make it into GCC 4.0.

    - LLVM. Low Level Virtual Machine. This is a low level and generic pseudo code generator and virtual machine.
    http://llvm.cs.uiuc.edu/ [uiuc.edu]
    This sounded fabulous, and the project appears to be progressing well (it's at v1.4 now). If I understand correctly it is only politics that has kept it out of GCC 4. Can anyone shed more light on this?

    - Compiler Server. Rather than invoking GCC for each translation unit (TU), you would run the GCC server once for the whole app and then feed it the TUs. This would make the compile process much faster and allow for whole-program optimization.
    This would have been nice but perhaps they found better ways to achieve the same thing.
    • by devphil ( 51341 ) on Monday March 14, 2005 @07:25PM (#11938838) Homepage


      Yeah, heavy on the "might".

      • Politics is what's preventing us from considering LLVM, let alone the long and torturous process of making the code work. The brutally short story is that GCC is operating under a certain restriction imposed by RMS since its inception, and LLVM -- or really, any good whole-program optimization technique -- would require us to violate that restriction.

        Now, there are some of us (*waves hand*) who feel that RMS is a reactionary zealot in this respect, and would be more than happy to use the LLVM techniques, but we won't get into that.

      • The compiler server branch is still being worked on, so it won't be in 4.0, but might be in 4.1 or 4.2 or... It's only a few people working on it, after all.
      • by eviltypeguy ( 521224 ) on Monday March 14, 2005 @08:16PM (#11939274)
        The brutally short story is that GCC is operating under a certain restriction imposed by RMS since its inception, and LLVM -- or really, any good whole-program optimization technique -- would require us to violate that restriction.

        Care to tell us what this oh so mysterious restriction is?
        • my guess (Score:3, Insightful)

          by jbellis ( 142590 )
          LLVM is written in C++, and RMS has dictated "Only C shalt thou write for gcc."
          • No, here it is. (Score:5, Informative)

            by devphil ( 51341 ) on Tuesday March 15, 2005 @12:07PM (#11944337) Homepage


            I didn't go into details because this has been covered elsewhere, and I'm tired of discussing it myself. But I didn't realize I would be accused of "uninformed slander". So. A bit of background info first.

            Inside the guts of the compiler, after the parser is done working over the syntax (for whatever language), what's left over is an internal representation, or IR. This is what all the optimizers look at, rearrange, throw out, add to, spin, fold, and mutilate.

            (Up to 4.0, there was really only one thing in GCC that could be properly called an IR. Now, like most other nontrivial compilers, there's more than one. It doesn't change the political situation; any of them could play the part of "the IR" here.)

            Once the optimizers are done transforming your impeccable code into something unrecognizable, the chip-specific backends change the IR into assembly code. (Or whatever they've been designed to produce.)

            Each of these transformations throws away information. What started out as a smart array class with bounds checking becomes a simple user-defined aggregate, which becomes a series of sequential memory references, which eventually all get turned into PEEK and POKE operations. (Rename for your processor as appropriate, or look up that old joke about syntactic sugar.)

            Now -- leaving out all the details -- it would be Really Really Useful if we could look at the PEEKs and POKEs of more than one .o at a time. Since the compiler only sees one .c/.cpp/.whatever at a time, it can only optimize one .o at a time. Unfortunately, typically the only program that sees The Big Picture is the linker, when it pulls together all the .o's. Some linkers can do some basic optimization, most of them are pretty stupid, but all of them are limited by the amount of information present in the .o files... which is nothing more than PEEK and POKE.

            As you can imagine, trying to examine a pattern of PEEK and POKE and working out "oh, this started off as a smart array class with bounds checking, let's see how it's used across the entire program" is essentially impossible.

            Okay, end of backstory.

            The solution to all this is to not throw out all that useful abstract information. Instead of, or in addition to, writing out assembly code or machine code, we write out the IR instead. (Either to specialized ".ir" files, or maybe some kind of accumulating database, etc, etc; the SGI compiler actually writes out .o files containing its IR instead of machine code, so that the whole process is transparent to the user.) Later on, when the linker runs, it can see the IR of the entire program and do the same optimizations that the compiler did / would have done, but on a larger scale.

            This is more or less what all whole-program optimizers do, including LLVM. (I think LLVM has the linker actually calling back into the compiler.)

            The "problem" is that between the compiler running and the linker running, the IR is just sitting on the disk. Other tools could do whatever they want with it. RMS's fear is that a company would write a proprietary non-GPL tool to do all kinds of neat stuff to the IR before the linker sees it again. Since no GPL'ed compiler/linker pieces are involved, the proprietary tool never has to be given to the community. Company wins, community loses.

            End of problem description. Begin personal opinionating.

            It's a legitimate concern, but many of us feel that a) it's going to happen eventually, and b) we do all GCC users a disservice by crippling the tools merely to postpone an inevitable scenario. As usual, there's a wide range of opinions among the maintainers, but the general consensus is that keeping things the way they are is an untenable position.
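(For what it's worth, this is essentially the design GCC eventually shipped, years after this discussion, as link-time optimization: -flto, merged in GCC 4.5, writes the GIMPLE IR into the .o files so the optimizers can see the whole program at link time. A usage sketch:)

```
gcc -O2 -flto -c a.c           # each .o carries IR as well as machine code
gcc -O2 -flto -c b.c
gcc -O2 -flto a.o b.o -o app   # whole-program optimization at link time
```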

      • Is there anyone who knows what this LLVM issue is about? Anyone out there who is not just ranting incoherently about RMS?

        • LLVM is sort of a mostly-compiled form of a program (like preprocessed, but with more of the work already done).

          If gcc can convert C to LLVM, and LLVM to native, then you could replace either half with something proprietary. You could add a proprietary middle step that optimized the LLVM code.
        • by Anonymous Coward on Monday March 14, 2005 @09:41PM (#11939938)
          Over time, many companies have tried to make money off of portions of gcc without giving anything back to the community. For example, one of the Edison Design Group's C++ front-ends can be patched onto gcc, giving a "free" compiler for many platforms without giving a better C++ front-end to gcc. Currently, only an end user can patch gcc to work with that front-end. That restriction makes the product less attractive.

          Because of this history, RMS does not want to make it easier for companies to take from gcc without giving back. LLVM would provide a clean interface between portions of gcc, and that clean interface could be so abused.

          Remember that gcc has Objective-C support only because NeXT was forced to abide by the GNU GPL. Large portions of gcc were contributed by volunteers under the terms of the GNU GPL; their work was donated with the expectation that others' work would be made available. Many would see LLVM as a betrayal of that expectation. The next version of the GPL may address this issue...
      • The compile server branch is, as far as I know, not being worked on. At least I haven't done any work on it since I left Apple. I've done some vaguely related follow-on and cleanup work, such as adding support for column numbers for declarations, expressions, and rtl. However, no work on the compile server branch itself. I haven't noticed anyone else working on it.

        It's a shame, since I think the compile server has major potential - and not only in terms of improving compile speed. However, there is still a
