GCC 4.0.0 Released
busfahrer writes "Version 4.0.0 of the GNU Compiler Collection has been released. You can read the changelog or you can download the source tarball. The new version finally features SSA for trees, allowing for a completely new optimization framework." The changelog is pretty lengthy, and there are updates for every supported language, from Ada to Java, in addition to the usual flavors of C.
Moving fast (Score:4, Interesting)
i'm having horrible flashbacks... (Score:4, Interesting)
Is anyone else curious what SSA trees are? (Score:4, Interesting)
whoa (Score:4, Interesting)
also, the c++ side makes me feel optimistic about ongoing support, which had been a big problem till 3.4.
yes, I'm x86/64-centric.
Re:Lisp? (Score:2, Interesting)
Say
info gccint and look at the entry on RTL.
Ok, so I'm almost joking.
Re:Is anyone else curious what SSA trees are? (Score:4, Interesting)
Re:Moving fast (Score:5, Interesting)
Autovectorization (Score:5, Interesting)
Figured this had to happen (Score:5, Interesting)
Compatibility? Linux testing? (Score:4, Interesting)
Just about every time I have to rebuild a kernel or build a kernel module I get my butt kicked by gcc versions. So my questions are:
Anyone know?
Re:Moving fast (Score:5, Interesting)
Re:i'm having horrible flashbacks... (Score:1, Interesting)
Re:Autovectorization (Score:5, Interesting)
Readme.SCO (Score:5, Interesting)
The GCC team has been urged to drop support for SCO Unix from GCC, as a protest against SCO's irresponsible aggression against free software and GNU/Linux. We have decided to take no action at this time, as we no longer believe that SCO is a serious threat.
For more on the FSF's position regarding SCO's attacks on free software, please read:
http://www.gnu.org/philosophy/sco/sco.html
how much Java compatibility (Score:4, Interesting)
Objective-C++...? (Score:4, Interesting)
just when OpenBSD i386 started to move to 3.x (Score:3, Interesting)
OpenBSD i386 is finally moving towards gcc 3.x, as the bugs have been cleared up even if the performance regressions haven't. I'm wondering if 4.x will be even worse, and if it will be justified by producing better binaries. From TFA, it looks like they've added a few features that may improve optimizations. If it's noticeably better they may move to the new version faster.
I will have to play with it to see what it can do.
OOo Calc (Score:2, Interesting)
Sure, OpenOffice.org spends a lot of time in its idle loop compared to, say, Half-Life, but there are some cases when even a word processor can lag. Faster loops make repeated operations on the most complex documents, such as Writer reflow, Calc recalculation, and Draw repainting, faster. Faster operations make OOo more responsive in general. A more responsive OOo makes users happy.
Re:Moving fast (Score:2, Interesting)
"Users," when discussing a compiler, is a nebulous term. Does one mean programmers developing with the compiler, or any person using the compiler directly through source or indirectly through binaries?
I consider the latter more significant; autovectorization will be extremely important on G4 and G5 hardware, and Mac OS X binaries (by far the most popular distribution method for the platform) will soon reflect this.
Re:debian (Score:1, Interesting)
Re:debian (Score:3, Interesting)
Kidding aside, no. Debian is legendary for being, ahem, slow about releases; they release when it's done, not on some date. Thus:
If GCC 4.0 made changes that would affect the ability of the linker to link things, then GCC 4.0 would actually be slow to go into Debian. Packages would probably show up right away in Debian Experimental but otherwise would stay out for a long time.
Debian Unstable ("sid") is where the new, potentially unstable, stuff goes once it is out of Experimental. Things in Unstable are automatically promoted into Testing if they look stable, which means the Debian guys can't put anything half-baked into Unstable. They would have to wait until the current Testing is released as Stable, and then they could do a big change like that. The current Testing ("sarge") is getting closer to actually shipping but I don't know when exactly.
As long as GCC 4.0 simply produces better code, and doesn't break anything, it will show up in Unstable within a very short amount of time. I don't know enough about it to tell you whether this will happen or not, but I did read the release notes and I don't see anything in there that looks like linker breakage.
Shouldn't they have done this 10 years ago? (Score:5, Interesting)
One _ancient_ compiler (10+ years old) I have to use already has this feature -- and on a large scale: it'll do it over several screenfuls of code. What took GCC so long?
Unfortunately, this compiler I mention also has a bug: once it's factored out 'i' in a piece of code like that below, it then complains that 'i' is an unused variable. So you have to do something with 'i' to suppress that warning, which kinda defeats the purpose of the autovectorization.
Sample code:
int a[256], b[256], c[256];

foo () {
    int i;
    for (i = 0; i < 256; i++) {
        a[i] = b[i] + c[i];
    }
}
Re:Autovectorization (Score:4, Interesting)
Right in some ways, but importantly wrong in others. Red Hat and the Fedora Project, for example, are compiled using the i386 instruction set but optimized for i686. This means that the cmov instruction isn't available -- but apparently, it's not much of a win (and even a loss in some cases) on modern processors. And code which uses SSE or 3DNow or whathaveyou is usually carefully hand-coded and checked for at runtime.
There's not really much advantage of switching away from this scheme, so I don't see it as worth the bother. Instead, x86_64 will eventually kill it all off and we'll move on to that.
Example (Score:4, Interesting)
Notice the lack of an array index. These are true vector operations to begin with, so it is already assumed that the array elements are independent, therefore the log and addition can be parallelized safely.
No hope for named warnings (Score:2, Interesting)
Will named warnings never be implemented? Or numbered ones? Something that lets me turn off a warning for a particular line of code?
Have you ever tried writing an overflow-safe integer class? I have, and I did, but I have to compile everything with -w because otherwise I get 40 pages of "condition will always be false due to limited range of data type". Bleh! If it will always be false, throw it away! I need the check in there for when the type is a signed int.
Does anyone have a ray of hope? I love most of GCC's warnings, and have always been able to work around them, but in this case there's just no way to get rid of them.
Pascal (Score:2, Interesting)
They always seem to be close, yet it never happens.
Re:whoa (Score:1, Interesting)
E.g., a global variable that is not changed but is reread inside a loop is never rechecked, because the reads are hoisted out of the loop by the optimizer. If the variable is something that is supposed to be written to by another thread, your code doesn't work as expected. One way to deal with this is to mark the variable as volatile, as you'd do in Java, but that disables a lot of other optimizations on every use of the variable. Another way is to put a mark in the loop saying that the variable can be modified (using an asm statement in GCC IIRC).
Basically there is very little a compiler can do to improve performance for multi-core systems.
At least I knew what I was talking about. Apologies if it didn't come out very well, but I still can't make sense of the OP.
And BTW, I'm posting as AC, so you can be pretty sure I'm not karma whoring.
Re:GCC 4.0's biggest winner is probably KDE (Score:2, Interesting)
Re:i'm having horrible flashbacks... (Score:3, Interesting)
In the long term, I think it was a very good thing: coding in C (and C++, though I didn't have that much experience with it) became much stricter, and in my experience that removes a lot of possible problems later on.
If someone had a lot of problems porting from 2.95 to 3.2, his code needed to be reviewed anyway. It kind of removes the "boy" from "cowboy" in coders (experience drawn from not-so-embedded systems).
Based on the warnings the compiler produced for our embedded code during the switch (they made a lot of sense), and with gcc becoming stricter, we now even compile everything with -Werror.
In our deeply embedded networking code, we got a speed improvement of 20% just by switching from 3.3 to 3.4.
Go GCC!
Re:whoa (Score:4, Interesting)
Actually they do just that. You put a #pragma omp before a for loop to have it implemented using threads. You put another #pragma omp before access to a shared variable to have the access serialized. You never code to a specific API. The compiler automatically generates pthread calls, Win32 calls, etc. as appropriate. Your code is portable. Lawrence Livermore has some nice examples, but they seem to be down right now: www.llnl.gov.
Re:Autovectorization (Score:3, Interesting)
Re:Figured this had to happen (Score:4, Interesting)
We're not shipping "a fork" of GCC 4. We're shipping GCC 4.0.0, which we compiled from source for Darwin 8.
In fact, when you're talking about shipping a compiler for a specific platform, the whole notion of "a fork" is basically meaningless.
(Setting aside, of course, that the whole notion of "a fork" runs 100% counter to all that open-source stuff that you guys are supposedly so hip to anyway.)
Re:Readme.SCO (Score:4, Interesting)
Re:just when OpenBSD i386 started to move to 3.x (Score:3, Interesting)
After a while I found out that the P2 ran Debian Woody with gcc 2.95 used by default and the P3 ran testing with gcc 3.3 (?) used by default. Another compile with the same gcc versions gave better results.
Re:Figured this had to happen (Score:4, Interesting)
Re:Still no C99? (Score:3, Interesting)
Re:Shouldn't they have done this 10 years ago? (Score:3, Interesting)
Because vectorisation and parallelisation are two very hard problems. Normal compiler optimisations pale in comparison for the most part.
Even the best currently available vectorising compilers do a pretty poor job compared to human optimisation (in cases where it's possible to do by hand). I have seen examples where a simple C loop could be hand-optimised into half a page of asm while a vectorising compiler produced 4 pages.
It is a REALLY hard problem.
Which leads me to believe that it wasn't doing a very good job at all. Just because it claims to do vectorisation and adds a few asm instructions doesn't mean it's doing a good job. That it couldn't even detect the iterator variable may even hint that it could produce broken code.
Re:Objective-C++...? (Score:4, Interesting)
No they're not! And I myself am not about to port hundreds of thousands of lines of C++ code to Objective-C since that'd eliminate the Windows version, which I can't do!
In the code base I'm currently porting to Cocoa, all of the application's core logic and data structures are written in C++, and the user-interface layer is written natively for each platform. So the Mac version gets a high-quality Cocoa front-end and Windows/Linux/BSD gets a wxWidgets front-end (since wxWidgets does a good job on those platforms).
Take away Objective-C++ (and therefore Cocoa C++) support and I'll just compile the wxWidgets version for the Mac since CoreFoundation is, as you say, a pain in the ass to use. The result: another low-quality "Windows-app-in-Aqua-clothing" Mac app.
Cross-platform toolkits such as wxWidgets, SWT and Swing produce usable but low-quality Mac applications (missing sheets, drawers, collapsible toolbars, AppleScript support, and so on). Objective-C++ lets me write high-quality, Aqua-compliant applications easily. So if Apple values Mac users it will keep supporting Objective-C++!
Not to mention that, for me at least, Cocoa/C++ is one of the reasons I use a Mac in the first place. I can produce professional user interfaces in no time and still know that I can port the core logic to Windows/Linux/BSD.
Oh, and I'm working in the games industry, where the majority of code is C++. I know for a fact that Apple wants more games code ported to OS X.