GNU is Not Unix | Programming | Technology

GCC 4.0.0 Released

busfahrer writes "Version 4.0.0 of the GNU Compiler Collection has been released. You can read the changelog or you can download the source tarball. The new version finally features SSA for trees, allowing for a completely new optimization framework." The changelog is pretty lengthy, and there are updates for every supported language, from Ada to Java, in addition to the usual flavors of C.
This discussion has been archived. No new comments can be posted.


  • Moving fast (Score:4, Interesting)

    by slapout ( 93640 ) on Thursday April 21, 2005 @10:10PM (#12309554)
    Is it just me or did the jump from version 3 to 4 happen a lot faster than the one from 2 to 3?
  • by ribo-bailey ( 724061 ) on Thursday April 21, 2005 @10:11PM (#12309558) Homepage
    of the 2.95 -> 3.0 transition.
  • by Da w00t ( 1789 ) on Thursday April 21, 2005 @10:11PM (#12309559) Homepage
    Not a C coder myself (sticking mainly to Perl)... I've just got to ask: what are SSA trees, and what benefit do they serve?
  • whoa (Score:4, Interesting)

    by william_w_bush ( 817571 ) on Thursday April 21, 2005 @10:14PM (#12309571)
    Reading TFA and the changelog intrigued me. Optimisations aside, I'm curious whether this will be better able to thread on the new multi-core systems coming out, as TLS has been spotty until 3.3 and glibc 2. Maybe native XD support is coming soon too?

    Also, the C++ side makes me feel optimistic about ongoing support, which had been a big problem until 3.4.

    Yes, I'm x86/64-centric.
  • Re:Lisp? (Score:2, Interesting)

    by refactored ( 260886 ) <cyent.xnet@co@nz> on Thursday April 21, 2005 @10:14PM (#12309572) Homepage Journal
    It always has, since way back when. It's called RTL, and it uses classic Lisp syntax: (function arg arg)

    Say "info gccint" and look at the entry on RTL.

    OK, so I'm almost joking.

  • by rbarreira ( 836272 ) on Thursday April 21, 2005 @10:15PM (#12309584) Homepage
    An educated guess: are they a move toward making code optimizations in GCC easier to write? I've heard that a lot of optimization experts (you need to know a lot of graph theory, for example) wouldn't work on GCC because of how hard it was to implement optimizations in it, so they did their experiments in other compilers...
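
    In case it helps, a rough illustrative sketch of what SSA (static single assignment) form means: every variable gets exactly one definition, and control-flow merge points use "phi" nodes to choose among the incoming definitions. The SSA rendering below is conventional textbook notation written as comments (and the function is a made-up example), not actual GCC output.

        // Plain C/C++ input:
        int clamp_to_zero(int x) {
            if (x < 0)
                x = 0;
            return x;
        }

        // Conceptual SSA form (one definition per name):
        //   x_1 = <argument x>
        //   if (x_1 < 0) goto then_block; else goto join_block;
        // then_block:
        //   x_2 = 0
        // join_block:
        //   x_3 = PHI(x_2, x_1)   // x_2 if we came from then_block, x_1 otherwise
        //   return x_3

    Because every name has exactly one definition, passes such as constant propagation and dead-code elimination reduce to simple walks over def-use chains, which is presumably why the new tree-SSA framework is expected to make optimizations easier to add.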
  • Re:Moving fast (Score:5, Interesting)

    by JohnsonWax ( 195390 ) on Thursday April 21, 2005 @10:18PM (#12309606)
    Apple wasn't working on GCC until version 3. I suspect a lot of other companies weren't either.
  • Autovectorization (Score:5, Interesting)

    by QuantumG ( 50515 ) <qg@biodome.org> on Thursday April 21, 2005 @10:29PM (#12309662) Homepage Journal
    Correct me if I'm wrong here, but most Linux distributions are still i386, right? It's only the people who use Gentoo who actually compile everything with i686 options, right? So, if autovectorization and all the other improvements in GCC 4.0 make binaries massively faster on modern platforms, how long will it be before the major binary-based distributions (like Ubuntu) start making i686 the default and i386 an available alternative (like AMD64 is now)?
  • by jtshaw ( 398319 ) * on Thursday April 21, 2005 @10:29PM (#12309663) Homepage
    When Apple announced the release of Mac OS X 10.4 "Tiger" I noticed this page. At that point I kinda figured GCC 4.0.0 had to be out by April 29th, since Apple claimed they were using it for OS X.
  • by H0p313ss ( 811249 ) on Thursday April 21, 2005 @10:31PM (#12309680)

    Just about every time I have to rebuild a kernel or build a kernel mod I get my butt kicked by gcc versions. So my questions are:

    • Are there compatibility issues with existing binaries?
    • What does this do to existing code?
    • How will this affect existing distros?
    • Is any distro planning on supporting 4.X soon? (And is that a good thing or a bad thing?)

    Anyone know?

  • Re:Moving fast (Score:5, Interesting)

    by burns210 ( 572621 ) <maburns@gmail.com> on Thursday April 21, 2005 @10:32PM (#12309692) Homepage Journal
    Apple is using it in their Tiger (OS X 10.4) release come the 29th of this month, so there are a few million new GCC 4.0 users right there.
  • by Anonymous Coward on Thursday April 21, 2005 @10:35PM (#12309703)
    AFAIK, RedHat is the only company that sells commercial support specifically for GCC.
  • Re:Autovectorization (Score:5, Interesting)

    by QuantumG ( 50515 ) <qg@biodome.org> on Thursday April 21, 2005 @10:41PM (#12309753) Homepage Journal
    I used to work for Codeplay, a company that made compilers for games development, and we were pretty surprised at the kinds of speedups you would get on non-gaming applications. Obviously compiling open source software was a great way to test our compiler. Basically any loop which performs the same operation on multiple data can be unrolled 4 times and vectorized. That's a massive speedup. So yes, I would expect OpenOffice to be faster.
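
    As a generic illustration (my own sketch, not Codeplay's or GCC's output): the loop below has no dependence between iterations, so a vectorizer can unroll it by four and map each group of operations onto a single 128-bit SSE or AltiVec instruction; this is the kind of loop GCC 4.0's new -ftree-vectorize pass attempts to handle.

        // Each iteration is independent: after unrolling by four, four consecutive
        // float operations can be packed into one SIMD instruction.
        void saxpy(float *y, const float *x, float a, int n) {
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }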
  • Readme.SCO (Score:5, Interesting)

    by karvind ( 833059 ) <karvind.gmail@com> on Thursday April 21, 2005 @10:48PM (#12309802) Journal
    The GCC tarball has a README.SCO file (reproduced below):

    The GCC team has been urged to drop support for SCO Unix from GCC, as a protest against SCO's irresponsible aggression against free software and GNU/Linux. We have decided to take no action at this time, as we no longer believe that SCO is a serious threat.

    For more on the FSF's position regarding SCO's attacks on free software, please read:

    http://www.gnu.org/philosophy/sco/sco.html

  • by b17bmbr ( 608864 ) on Thursday April 21, 2005 @10:52PM (#12309835)
    I'm interested in the Java compatibility. I figure it probably won't do Swing, but will it support, or will it do, say, GTK/Java natively? That'd be sweet. I know Qt/KDE has had a Java bridge for a while, but I really haven't played too much with it. Flame Java all you want; it's not a geek language: no obfuscated Java, no Java monks. BFD. Sure, that'd nix the whole write-once-run-anywhere thing, but hell, what a great opportunity to build and test apps under a JRE and then compile them to native.
  • Objective-C++...? (Score:4, Interesting)

    by Dimwit ( 36756 ) * on Thursday April 21, 2005 @10:55PM (#12309852)
    They've been talking about having Objective-C++ in the GCC main branch for years now. There was even talk that 4.0 wouldn't ship without it. Now it's shipped without it and it's still "coming Real Soon Now". Any word on if it's coming any time soon (really)?
  • by ArbitraryConstant ( 763964 ) on Thursday April 21, 2005 @11:02PM (#12309899) Homepage
    The OpenBSD crowd had a lot of concerns about bugs in 3.x and performance regressions (in compiling, not in the resulting binaries). I believe Linus shared some of these concerns (don't have a link handy).

    OpenBSD i386 is finally moving towards gcc 3.x, as the bugs have been cleared up even if the performance regressions haven't. I'm wondering if 4.x will be even worse, and if it will be justified by producing better binaries. From TFA, it looks like they've added a few features that may improve optimizations. If it's noticeably better they may move to the new version faster.

    I will have to play with it to see what it can do.
  • OOo Calc (Score:2, Interesting)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Thursday April 21, 2005 @11:07PM (#12309921) Homepage Journal

    Sure, OpenOffice.org spends a lot of time at the idle loop compared to say Half-Life, but there are some cases when even a word processor can lag. Faster loops make repeated operations on the most complex documents, such as Writer reflow, Calc recalculation, and Draw repainting, faster. Faster operations make OOo more responsive in general. More responsive OOo makes users happy.

  • Re:Moving fast (Score:2, Interesting)

    by paulymer5 ( 765084 ) on Thursday April 21, 2005 @11:12PM (#12309941)
    No, most Mac applications will soon be using the new version.

    "Users", when discussing a compiler, is a nebulous term. Does one mean programmers developing with the compiler, or any person using the compiler directly through source or indirectly through binaries?

    I consider the latter more significant; autovectorization will be extremely important on G4 and G5 hardware, and Mac OS X binaries (by far the most popular distribution method for the platform) will soon reflect this.
  • Re:debian (Score:1, Interesting)

    by Anonymous Coward on Thursday April 21, 2005 @11:16PM (#12309963)
    Sid is the unstable branch, so probably a whole lot sooner than you think. I think you mean that you wonder when debian *sarge* (or etch) will integrate it...
  • Re:debian (Score:3, Interesting)

    by Anonymous Coward on Thursday April 21, 2005 @11:37PM (#12310095)
    Is Debian's release cycle truly so slow that what appears to be an honest curiosity is modded as a troll?

    Kidding aside, no. Debian is legendary for being, ahem, slow about releases; they release when it's done, not on some date. Thus /. gets lots of jokes about Debian being slow. "I heard that Duke Nukem Forever will be open source and part of Debian's next release!!!11!" etc.

    If GCC 4.0 made changes that would affect the ability of the linker to link things, then GCC 4.0 would actually be slow to go into Debian. Packages would probably show up right away in Debian Experimental but otherwise would stay out for a long time.

    Debian Unstable ("sid") is where the new, potentially unstable, stuff goes once it is out of Experimental. Things in Unstable are automatically promoted into Testing if they look stable, which means the Debian guys can't put anything half-baked into Unstable. They would have to wait until the current Testing is released as Stable, and then they could do a big change like that. The current Testing ("sarge") is getting closer to actually shipping but I don't know when exactly.

    As long as GCC 4.0 simply produces better code, and doesn't break anything, it will show up in Unstable within a very short amount of time. I don't know enough about it to tell you whether this will happen or not, but I did read the release notes and I don't see anything in there that looks like linker breakage.
  • by Old Wolf ( 56093 ) on Thursday April 21, 2005 @11:51PM (#12310183)
    One of the changes in 4.0.0 is the autovectorization [gnu.org] optimization.
    One _ancient_ compiler (10+ years) I have to use, already has this feature -- and on a large scale: it'll do it over several screensful of code. What took GCC so long?

    Unfortunately, this compiler I mention also has a bug: once it's factored out 'i' in a piece of code like that below, it then complains that 'i' is an unused variable. So you have to do something with 'i' to suppress that warning, which kinda defeats the purpose of the autovectorization.

    Sample code:

    int a[256], b[256], c[256];

    void foo(void)
    {
        int i;

        for (i = 0; i < 256; i++) {
            a[i] = b[i] + c[i];
        }
    }
  • Re:Autovectorization (Score:4, Interesting)

    by mattdm ( 1931 ) on Friday April 22, 2005 @12:24AM (#12310346) Homepage
    Correct me if I'm wrong here, but most Linux distributions are still i386 right?

    Right in some ways, but importantly wrong in others. Red Hat and the Fedora Project, for example, are compiled using the i386 instruction set but optimized for i686. This means the cmov instruction isn't used -- but apparently it's not much of a win (and even a loss in some cases) on modern processors. And code which uses SSE or 3DNow or what have you is usually carefully hand-coded and checked for at runtime (a sketch of that kind of check follows below).

    There's not really much advantage of switching away from this scheme, so I don't see it as worth the bother. Instead, x86_64 will eventually kill it all off and we'll move on to that.
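
    For what it's worth, a minimal sketch of the kind of run-time check mentioned above (illustrative only; the feature bit is from the IA-32 CPUID documentation, and real runtime-dispatch code is wrapped far more carefully):

        #include <cstdio>

        // CPUID leaf 1 reports feature flags in EDX; bit 25 indicates SSE support.
        static bool cpu_has_sse() {
            unsigned int eax = 1, ebx, ecx, edx;
            __asm__("cpuid" : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
            (void)ebx; (void)ecx;               // not needed for this particular check
            return (edx >> 25) & 1;
        }

        int main() {
            std::printf(cpu_has_sse() ? "taking the hand-written SSE path\n"
                                      : "taking the plain i386 path\n");
            return 0;
        }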
  • Example (Score:4, Interesting)

    by TeknoHog ( 164938 ) on Friday April 22, 2005 @12:45AM (#12310444) Homepage Journal
    For an example of a sensible (slightly higher-level) language, consider the Fortran example from the autovectorization page [gnu.org]:

    DIMENSION A(1000000), B(1000000), C(1000000)
    READ*, X, Y
    A = LOG(X); B = LOG(Y); C = A + B
    PRINT*, C(500000)
    END

    Notice the lack of an array index. These are true vector operations to begin with, so it is already assumed that the array elements are independent, therefore the log and addition can be parallelized safely.

  • Will named warnings never be implemented? Or numbered? Something that lets me turn off a warning for a particular line of code?

    Have you ever tried writing an overflow-safe integer class? I have, and I did, but I have to compile everything with -w because otherwise I get 40 pages of "condition will always be false due to limited range of data type". Bleh! If the compiler knows it will always be false, it can throw the check away! I need the check in there for when the type is a signed int.

    Does anyone have a ray of hope? I love most of GCC's warnings, and have always been able to work around them, but in this case there's just no way to get rid of them.
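
    For anyone who hasn't hit this, a stripped-down sketch of the situation (my own example, not the poster's class): the second test is exactly the check you need when T is signed, but when T is unsigned the compiler can prove it is always false and warns, and there is no named or per-line switch to silence just that one warning.

        #include <limits>

        // Returns true if a + b would leave the range of T. For unsigned T the
        // (b < 0) comparison is provably always false, which is what triggers
        // "comparison is always false due to limited range of data type"
        // under -W/-Wextra.
        template <typename T>
        bool add_would_overflow(T a, T b) {
            if (b > 0 && a > std::numeric_limits<T>::max() - b)
                return true;
            if (b < 0 && a < std::numeric_limits<T>::min() - b)
                return true;
            return false;
        }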

  • Pascal (Score:2, Interesting)

    by AmicoToni ( 123984 ) on Friday April 22, 2005 @01:05AM (#12310531)
    Something that I would really like to see integrated into GCC, sooner or later, is GNU Pascal [gnu-pascal.de].
    They always seem to be close, yet it never happens.
  • Re:whoa (Score:1, Interesting)

    by Anonymous Coward on Friday April 22, 2005 @01:12AM (#12310571)
    Maybe. It's been a while since I looked at language extensions for multithreading and such. Still, I assume these extensions you speak of don't involve automatically adding threads to a program, or using multiple cores without spawning threads. Heck, I doubt the compiler does any optimization based on that. Most of the interactions between optimization and threading have to do with disabling some optimizations across certain sections.

    For example, a global variable that is never changed inside a loop but is reread on each iteration may never actually be re-checked, because the optimizer hoists the read out of the loop. If the variable is supposed to be written to by another thread, your code doesn't work as expected (a minimal sketch appears at the end of this comment). One way to deal with this is to mark the variable as volatile, as you'd do in Java, but that disables a lot of other optimizations on every use of the variable. Another way is to put a mark in the loop saying that the variable can be modified (using an asm statement in GCC, IIRC).

    Basically there is very little a compiler can do to improve performance for multi-core systems.

    At least I knew what I was talking about. Apologies if it didn't come out very well, but I still can't make sense of the OP.

    And BTW, I'm posting as AC, so you can be pretty sure I'm not karma whoring.
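
    A minimal sketch of the hoisting problem described above (a generic example, not tied to any particular GCC version):

        // Without 'volatile' the optimizer is free to read 'stop' once, hoist the
        // load out of the loop, and spin forever even after another thread sets it.
        // 'volatile' forces a fresh read on every iteration, at the cost of
        // disabling other optimizations on every access to the variable.
        volatile int stop = 0;

        void worker() {
            while (!stop) {
                // do a chunk of work
            }
        }

        void request_stop() {   // called from another thread
            stop = 1;
        }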
  • by Anonymous Coward on Friday April 22, 2005 @01:14AM (#12310578)
    Speed improvement for KDE code? Fewer symbols means dynamic linking is quicker; that happens at app startup and does not affect the running speed of the code at all.
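
    Presumably this refers to the symbol-visibility support new in GCC 4.0 (-fvisibility plus the visibility attribute), which the KDE folks have been experimenting with. A rough sketch of how a library keeps its exported symbol table small (the macro name here is made up):

        // Build with: g++ -fvisibility=hidden -shared ...
        // Only symbols explicitly marked "default" land in the dynamic symbol table,
        // so the runtime linker has far fewer symbols to resolve at startup.
        #define MYLIB_EXPORT __attribute__((visibility("default")))

        int internal_helper(int x);                   // hidden under -fvisibility=hidden
        MYLIB_EXPORT int library_entry_point(int x);  // still exported

        int internal_helper(int x) { return x + 1; }
        MYLIB_EXPORT int library_entry_point(int x) { return internal_helper(x) * 2; }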
  • by den_erpel ( 140080 ) on Friday April 22, 2005 @01:24AM (#12310625) Homepage Journal
    Ah things no longer compiling :) True, it was very annoying and made you go through an extra code review while porting your code forward.

    In the long term, I think it was a very good thing: coding C (and C++, though I didn't have that much experience with it) got much more strict, and in my experience that removes a lot of possible problems later on.

    If someone had a lot of problems porting from 2.95 to 3.2, his code needed to be reviewed anyway. It kind of removes the "boy" from the "cowboys" among coders (experience drawn from not-so-embedded systems).

    Based on the remarks the compiler produced for our embedded code during the switch (they made a lot of sense) and GCC becoming more strict, we now even compile everything with -Werror.

    In our deeply embedded networking code, we got a speed improvement of 20% just by switching to 3.4 (from 3.3) :) I am going to try to compile a new PowerPC toolchain one of these days...

    Go GCC!
  • Re:whoa (Score:4, Interesting)

    by AHumbleOpinion ( 546848 ) on Friday April 22, 2005 @01:50AM (#12310715) Homepage
    ... I assume these extensions you speak of don't involve automatically adding threads to a program ...

    Actually, they do just that. You put a #pragma omp before a for loop to have it implemented using threads. You put another #pragma omp before access to a shared variable to have access serialized. You never code to a specific API; the compiler automatically generates pthread calls, Win32 calls, etc. as appropriate, so your code is portable. Lawrence Livermore has some nice examples (www.llnl.gov), but the site seems to be down right now.
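
    A tiny OpenMP sketch of what the parent describes (note that GCC 4.0 itself does not implement OpenMP; this is what OpenMP-aware compilers accept, and compilers that don't know the pragmas simply ignore them, possibly with a warning):

        #include <cstdio>

        int main() {
            const int n = 1000000;
            static float a[1000000], b[1000000];

            // Ask the compiler to split the iterations across threads; no pthread
            // or Win32 calls appear anywhere in the source.
            #pragma omp parallel for
            for (int i = 0; i < n; ++i)
                a[i] = 2.0f * b[i] + 1.0f;

            float sum = 0.0f;
            // The reduction clause handles the serialized combination of the
            // per-thread partial sums.
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; ++i)
                sum += a[i];

            std::printf("%f\n", sum);
            return 0;
        }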
  • Re:Autovectorization (Score:3, Interesting)

    by QuantumG ( 50515 ) <qg@biodome.org> on Friday April 22, 2005 @01:52AM (#12310723) Homepage Journal
    You're like the 4th person to say that. The point of autovectorization is that all programs can benefit from SIMD instructions, not just the ones where programmers thought it might be a good idea.
  • by As Seen On TV ( 857673 ) <asseen@gmail.com> on Friday April 22, 2005 @01:55AM (#12310737)
    If anybody was wondering, this is why we stay as far away from the Gnu people as humanly possible.

    We're not shipping "a fork" of GCC 4. We're shipping GCC 4.0.0, which we compiled from source for Darwin 8.

    In fact, when you're talking about shipping a compiler for a specific platform, the whole notion of "a fork" is basically meaningless.

    (Setting aside, of course, that the whole notion of "a fork" runs 100% counter to all that open-source stuff that you guys are supposedly so hip to anyway.)
  • Re:Readme.SCO (Score:4, Interesting)

    by NutscrapeSucks ( 446616 ) on Friday April 22, 2005 @02:15AM (#12310799)
    Maybe someone can find it, but the SCO GCC guy posted on Slashdot once. He indicated that he was pretty much single-handedly responsible for the SCO UNIX port, so GCC pulling its endorsement wouldn't make much if any difference to SCO customers. I believe his attitude towards his employer was "I don't like it, but who else will pay me to hack on GCC?"
  • The story about bad performance of GCC 3.x is completely true. I myself wondered some time ago why my dual P3-550 was hardly faster at compiling kernels than my old single P2-350. Actually, if there had been only one P3 in it, it would've been slower! (And yes, the machine had more RAM...)

    After a while I found out that the P2 ran Debian Woody with gcc 2.95 used by default and the P3 ran testing with gcc 3.3 (?) used by default. Another compile with the same gcc versions gave better results.
  • by As Seen On TV ( 857673 ) <asseen@gmail.com> on Friday April 22, 2005 @03:46AM (#12311086)
    You're defining "fork" so broadly that every build from every vendor would meet it. That's pretty silly.
  • Re:Still no C99? (Score:3, Interesting)

    by m50d ( 797211 ) on Friday April 22, 2005 @04:51AM (#12311270) Homepage Journal
    It's broken in some important ways; for example, you can't pass complex numbers to functions. That's why it isn't the default.
  • by Hast ( 24833 ) on Friday April 22, 2005 @06:21AM (#12311493)
    One _ancient_ compiler (10+ years) I have to use, already has this feature -- and on a large scale: it'll do it over several screensful of code. What took GCC so long?

    Because vectorisation and parallelisation are two very hard problems. Normal compiler optimisations pale in comparison for the most part.

    Even the best currently available vectorising compilers do a pretty poor job compared to human optimisation (in cases where it's possible to do by hand). I have seen examples where a simple C loop could be hand-optimised into half a page of asm where a vectorising compiler produced four pages.

    It is a REALLY hard problem.

    Unfortunately, this compiler I mention also has a bug: once it's factored out 'i' in a piece of code like that below, it then complains that 'i' is an unused variable. So you have to do something with 'i' to suppress that warning, which kinda defeats the purpose of the autovectorization.

    Which leads me to believe that it wasn't doing a very good job at all. Just because it claims to do vectorisation and emits a few asm instructions doesn't mean it's doing a good job. That it couldn't even track the iterator variable properly may even hint that it could produce broken code.
  • Re:Objective-C++...? (Score:4, Interesting)

    by framerate ( 659707 ) on Friday April 22, 2005 @07:36AM (#12311711)
    "All the ISVs who are still using C++ are building their apps with Core Foundation."

    No they're not! And I myself am not about to port hundreds of thousands of lines of C++ code to Objective-C since that'd eliminate the Windows version, which I can't do!

    In the code base I'm currently porting to Cocoa, all of the application's core logic and data structures are written in C++, and the user-interface layer is written natively for each platform. So the Mac version gets a high-quality Cocoa front-end and Windows/Linux/BSD gets a wxWidgets front-end (since wxWidgets does a good job on those platforms).

    Take away Objective-C++ (and therefore Cocoa C++) support and I'll just compile the wxWidgets version for the Mac since CoreFoundation is, as you say, a pain in the ass to use. The result: another low-quality "Windows-app-in-Aqua-clothing" Mac app.

    Cross-platform toolkits such as wxWidgets, SWT and Swing produce usable but low-quality Mac applications (missing sheets, drawers, collapsible toolbars, AppleScript support, and so on). Objective-C++ lets me write high-quality, Aqua-compliant applications easily. So if Apple values Mac users it will keep supporting Objective-C++!

    Not to mention that, for me at least, Cocoa/C++ is one of the reasons I use a Mac in the first place. I can produce professional user interfaces in no time and still know that I can port the core logic to Windows/Linux/BSD.

    Oh, and I'm working in the games industry, where the majority of code is C++. I know for a fact that Apple wants more games code ported to OS X.

"If it ain't broke, don't fix it." - Bert Lantz

Working...