GCC 4.0 Preview
Reducer2001 writes "News.com is running a story previewing GCC 4.0. A quote from the article says, '(included will be) technology to compile programs written in Fortran 95, an updated version of a decades-old programming language still popular for scientific and technical tasks, Henderson said. And software written in the C++ programming language should run faster--"shockingly better" in a few cases.'"
OpenMP? (Score:3, Interesting)
Re:OpenMP? (Score:5, Informative)
It's pretty cool. You write a loop like this:
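(The code sample didn't survive Slashdot's formatting; here's a minimal sketch of the kind of loop meant, assuming standard OpenMP pragma syntax:)

    void scale(double *a, int n, double k)
    {
        // the pragma asks the compiler to split the iterations across threads
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] *= k;
    }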
and the compiler will handle the creation and synchronization of all the threads for you. Here's an OpenMP for GCC [nongnu.org] project on the FSF site. Looks like it's still in the "planning" stage, though, so I'm guessing it's not in GCC 4.X.
Re:OpenMP? (Score:5, Informative)
I just want C++ programs to COMPILE faster (Score:4, Interesting)
Re:I just want C++ programs to COMPILE faster (Score:5, Funny)
Re:I just want C++ programs to COMPILE faster (Score:4, Informative)
1) make (or some equiv). Yes, I said make.
GNU make accepts a -j parameter to run build jobs in parallel (e.g. make -j4). On its own it's only really useful on hyperthreading or multiprocessor boxes. That said, it pays off if you also use:
2) http://distcc.samba.org/ [samba.org]: distcc. It lets you distribute compilation of your apps across other machines with a similar setup. Only really helpful if you have more than one box.
3) http://ccache.samba.org/ [samba.org]: ccache. This is a compiler cache for C/C++. Only really useful for iterative development, and if you're doing a lot of make clean/make, as it'll cache things that don't need to be rebuilt.
Just some suggestions. Also, check out prelink, which prelinks anything using shared libraries (trading a bit of disk space for performance) and makes startup code run faster in some cases.
Hope that helps!
++Informative? Pwetty pwease?
Re:I just want C++ programs to COMPILE faster (Score:5, Interesting)
The extra compile time buys you more inlining (which can be either good or bad for performance, depending on cache behavior) and also type-safe templates, which are not achievable in C (without ugly hacks).
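(To illustrate the template point, a minimal sketch; the function name is made up:)

    // The C approach: a macro -- no type checking, and arguments get evaluated twice.
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    // The C++ approach: a function template, type-checked for every type it's used with.
    template <typename T>
    T max_of(T a, T b) { return a > b ? a : b; }

    // max_of(3, 4.5) is rejected at compile time instead of silently converting;
    // max_of<double>(3, 4.5) says exactly what you mean.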
Re:I just want C++ programs to COMPILE faster (Score:3, Interesting)
Re:I just want C++ programs to COMPILE faster (Score:4, Interesting)
So I've always wondered why GCC was so much more demanding. BTW, does GCC support pre-compiled headers now? That's what seems to provide the biggest build performance boost. Even more so than parallel compilation on an SMP machine.
Re:I just want C++ programs to COMPILE faster (Score:5, Informative)
That's not true. Building C is much quicker with Visual C++ than building C++. I know, I do it every day.
However, it is generally speaking true that gcc takes more time to compile than Visual C++ does.
Re:I just want C++ programs to COMPILE faster (Score:5, Informative)
That's all you need to do. What's so hard? I use "using namespace std;" in the common include files of all of my home-built programs.
C++ is a different language. Not only is its syntax different, but the style of doing things is different. If you're expecting to not feel like it's an alien environment, you'll be sorely mistaken.
That doesn't mean it's bad; after a long time of resisting it for taste reasons, I started learning exactly *why* C++ does certain things, and how to put them to good use. And the differences can be staggering at times - templates are invaluable, destructors are invaluable, classed arrays (things like vectors instead of pointers) are invaluable, maps are invaluable, etc. These sorts of things can knock out bugs you didn't even know were there, improve performance, drastically shorten your code, and clarify it, all at the same time. That's a rare combination of benefits in the programming world. You pay for it in compile time, but it's well worth it - especially when it comes to maintenance. You just have to accept that it's going to feel rather alien for a while, and during that time, you'll be asking yourself, "Why?".
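(A small sketch of the sort of payoff being described -- a word count using std::map and std::string, which takes far more code, and offers far more chances for bugs, in plain C. Everything here is standard C++; the program itself is just an illustration.)

    #include <iostream>
    #include <map>
    #include <string>

    int main()
    {
        std::map<std::string, int> counts;   // no manual hashing, no malloc/free
        std::string word;
        while (std::cin >> word)
            ++counts[word];                  // inserts the key automatically if missing

        std::map<std::string, int>::const_iterator it;
        for (it = counts.begin(); it != counts.end(); ++it)
            std::cout << it->first << ": " << it->second << "\n";
    }   // destructors release everything here, even if an exception is thrown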
BTW, Java 1.5 is becoming more and more like C++ every day. So, if you don't like the features of C++, you won't like them in modern Java.
Re:I just want C++ programs to COMPILE faster (Score:5, Interesting)
I'll add a couple things that have been _very_ useful in my experience:
- the const keyword: if you want to make your codebase a whole lot safer, and compile AND run faster, const is great. (Yes, I know it's been part of the C language since the C89 standard...)
- the STL. Some love it, some hate it. For my old job (game programmer), it was invaluable. We made extensive use of certain containers, and the algorithms are great. Sure I learned how to write various sort routines in college but I don't even have to think about it when the STL already has an optimal version.
- operator overloading. Once again, some love it, some hate it. Game programmers deal with vector math and quaternions all the time, so this feature of C++ is put to good use. It makes the code read more like a math equation, instead of stuff like:
result = vector1.add(vector2);
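With operator overloading the same line reads like the math. A minimal sketch, using a hypothetical Vector3 class:

    struct Vector3 {
        float x, y, z;
        Vector3(float px, float py, float pz) : x(px), y(py), z(pz) {}
    };

    // overload operator+ so vector sums read like equations
    inline Vector3 operator+(const Vector3& a, const Vector3& b)
    {
        return Vector3(a.x + b.x, a.y + b.y, a.z + b.z);
    }

    // result = vector1 + vector2;   instead of   result = vector1.add(vector2);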
There are probably more things that have slipped my mind but those are the ones that jumped out at me right away.
Re:I just want C++ programs to COMPILE faster (Score:5, Informative)
Const, operator overloading... all of it is great. Inheritance, too. There are so many things in C++ to help you keep your code small, easy to read, and clean. It feels a bit alien at first if you've been programming in C for a long time, but it's well worth it.
I have my faults with it, of course. I think streams were done rather poorly, for example. But overall, I'm glad I switched.
Re:I just want C++ programs to COMPILE faster (Score:5, Informative)
They propagate down into every .cpp that includes your library's headers, whether or not the calling programmer wanted to import the entire std namespace.
Some programmers may have their own classes called map, or string, or list, or a dozen other things, and a single using statement buried in a nested .h can cause unanticipated namespace collisions.
In general, it's safest and most polite to refer to classes canonically in header files (std::string, etc), and keep the using statements in your implementation files.
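(A tiny sketch of that convention, with hypothetical file names:)

    // mylib.h -- the header: fully qualified names, no using directives
    #include <string>
    std::string greet(const std::string& name);

    // mylib.cpp -- the implementation: a using directive here stays in this file
    #include "mylib.h"
    using namespace std;
    string greet(const string& name) { return "hello, " + name; }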
Sources: "Accelerated C++" (Koenig, Moo); comp.lang.c++ (sample [tinyurl.com])
Re:I just want C++ programs to COMPILE faster (Score:3, Insightful)
Funny, I thought real coders used the right tool for the job, or is that real smart coders?
watch out (Score:5, Funny)
Is this the programmer's way of saying it will run at some speed less than faster?
Re:watch out (Score:5, Funny)
But if faster has been decremented... (Score:4, Insightful)
C++ compiler (Score:5, Insightful)
I'm so sorry, ... (Score:5, Informative)
You can run an experiment: try compiling KDE with the Intel C++ or Comeau C++ compilers, and you'll see that not much can be gained compared to GCC.
Re:C++ compiler (Score:5, Informative)
Yes, from here [apple.com]: "
Re:C++ compiler (Score:5, Funny)
Re:C++ compiler (Score:3, Insightful)
[snip]
But it's the same parser as g++ 3.4. It is faster (and fixes bugs) compared to g++ 3.3, but calling it "tremendously faster" seems a bit of a stretch.
Re:C++ compiler (Score:5, Interesting)
Re:C++ compiler (Score:5, Informative)
http://people.redhat.com/bkoz/benchmarks/
Doesn't look public though.
Re:C++ compiler (Score:4, Interesting)
But will it compile C++ any faster?
I don't care if it compiles any faster, just as long as it compiles correctly. We were in the middle of a port of a major system to Linux recently, and the sysadmins decided we really need to install some patches. I shoulda' known better. I shoulda' said no.
They applied the Red Hat AS patches (which included patches to gcc) on the target machine, and suddenly newly compiled programs that had been working for years had memory overwrite problems. Strings and char arrays would contain things that should be in adjacent memory. The most obvious difference was the newly compiled code was much smaller than that produced by the unpatched gcc.
Luckily, we had another Red Hat AS machine which had not been patched, and I moved all the development work there. Then I promised the admins that I'd go postal if they touched gcc on that box. So far, so good, but I'd really appreciate it if the gcc guys would get it right before releasing stuff. One of the promised results of the above mentioned patch was a significant reduction in size. They got that part right at least.
Mudflap (Score:5, Insightful)
I really love this feature; it will probably cut down on a great deal of problems. My only concern is that some devs will think running it all the time is OK (read: "Mudflap slows a program's performance"), so hopefully that won't be the case.
More detailed information on the mudflap system can be found here [gnu.org].
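(For anyone who hasn't seen it, a minimal sketch of the kind of bug it targets; as I understand it, building this with something like gcc -fmudflap -lmudflap makes the overrun abort with a violation report at run time instead of silently trashing memory:)

    #include <string.h>

    int main(void)
    {
        char buf[8];
        // classic overrun: 27 bytes (including the terminator) into an 8-byte buffer
        strcpy(buf, "abcdefghijklmnopqrstuvwxyz");
        return 0;
    }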
Re:Mudflap (Score:3, Insightful)
For some users and some classes of applications, it will be OK. Sometimes security is more important than performance, and you can't imagine the weird stuff your code sees when it's in the customers' hands.
Re:Mudflap (Score:4, Interesting)
I'll agree with you on this much: C+Mudflap is not the way to fix buffer overrun problems. The problem isn't that runtime safety is costly--it isn't--the problem is that adding runtime safety to the C programming language post hoc is costly because of C's screwed up pointer semantics. That's why Mudflap costs you a factor of 3-5 in terms of performance on benchmarks, when runtime safety in another language really should only cost you a few percent overhead at most.
Mudflap will probably not be used much for testing (people already have good tools for that they don't use) and it has too much overhead for most production use. The biggest thing Mudflap will do is perpetuate the myth that runtime safety is costly.
Re:Mudflap (Score:5, Insightful)
I have been using C since 1980. I have seen dozens of attempts to fix C semantics since then. I published some papers on it myself. It can't be done efficiently. The best you can do is something like Mudflap, Purify, Cyclone, or Valgrind.
Where does the factor of 3-5 come from? From the Mudflap paper on the Mudflap web site--it has benchmarks.
Where do the "few percent overhead" come from? From comparing the performance of Pascal, Java, and Eiffel code compiled with safety on and off.
And you know what the real kicker is? Not only do C pointer semantics make it impossible to generate efficient runtime safety checks, they even inhibit important optimizations when no safety features are enabled. And because C programmers then have to jump through all sorts of hoops to achieve some kind of safety in the midst of this chaos, the software ends up being bloated, too. So, C is not only bad for efficient safe code, it is bad for efficient code of any form.
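(One concrete example of the "inhibits optimizations" point, as a sketch: because the two pointers below are allowed to alias, the compiler generally has to re-read *n on every iteration instead of keeping it in a register.)

    void add_n_times(int* dst, const int* n, int count)
    {
        for (int i = 0; i < count; ++i)
            dst[i] += *n;   // *n must be reloaded: dst[i] might overlap the object *n points to
    }

    // C99's restrict (or GCC's __restrict__) exists precisely to let the programmer
    // promise "no aliasing" so the load can be hoisted out of the loop.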
I am getting sick when C-hating posts like this one get modded up. Seems to be happening all the time lately.
I'm getting sick of the fact that ignorant fools like you have been holding back progress in software systems for a quarter of a century. It's even more annoying that you try to portray your ignorance and inexperience as some kind of principled stance. C was good for what it was 30 years ago: an on-board compiler for writing small, low-level programs on machines with very limited memory. C made a decent PDP-11 compiler for V7 UNIX, and it was usable on CP/M and MS-DOS. I have fond memories of it in those environments.
I'm starting to meta-mod again.
You do that. If you join forces with enough other idiots, you will probably be able to condemn us to another quarter century of bad pointers, buffer overruns, and bloat.
Re:Mudflap (Score:3, Interesting)
There are many places where C is still used. There are many APIs that are still in C. There's plenty of embedded systems programming that is done in C. So on and so forth. It has its uses, just like Fortran has its uses (and that's a rather ugly language IMHO).
A couple of years ago I was using C for embedded systems, due to the fact that the overhead incurred by C++ was just too large.
I still p
Re:Mudflap (Score:4, Informative)
quote... (Score:5, Funny)
--not Benjamin Franklin
Re:Mudflap (Score:3)
Autovectorization (Score:5, Informative)
Re:Autovectorization (Score:3, Informative)
Re:Autovectorization (Score:4, Insightful)
Screenshots! (Score:5, Funny)
Re:Screenshots! (Score:5, Funny)
Knock yourself out, bud!
Re:Screenshots! (Score:5, Funny)
http://www.algorithm.com.au/albums/screenshots/lo
Re:Screenshots! (Score:3, Insightful)
Funny, but it does highlight something that annoys me. Make/gcc output.
For the last few weeks I've been compiling a set of apps that's about 5x bigger than just the Linux kernel (it includes the kernel too). Watching the make/gcc output scroll by I've decided one thing: I *hate* it.
GCC itself is fine. It only does something when there are errors. Make, on the other hand, spits out every command it runs and all kinds of things that I really don't care about.
Without the bloat of a full-fledged I
Re:Screenshots! (Score:5, Informative)
Point-by-point response:
Re:Screenshots! (Score:4, Informative)
boost, please ? (Score:4, Interesting)
Re:boost, please ? (Score:3, Informative)
Re:boost, please ? (Score:5, Informative)
What does GCC have to do with this?
If you want something added to the standard, talk to the C++ standard committee. (Either the Library or the Evolution groups, in this case.) You'll find you're about the 10,000th person to ask for this. You'll find there's an extensive FAQ on this exact subject. You'll find that the committee is very keen on adapting large parts of Boost, as experience in the real world smooths the rough edges of Boost.
If you look a bit more, you'll find that some extensions have already been adopted (called "TR1") and are being shipped with GCC 4.0.
You'll also find that GCC does not get to determine what's in the standard. And -- speaking as one of the libstdc++ maintainers, although I'm largely too busy to do much myself these days -- GCC will not ship Boost. Or glibc. Or libAPR. Or OpenSSL. Or any of the other million very useful open source libraries out there, because that's not our job.
and how many times... (Score:5, Insightful)
The biggest challenge with binary compatibility across Linux distros is the GCC release (followed by the glibc releases, which live in the same ivory tower). I realize that things have to change, but I wish that they would not break compat between versions quite so often...
I'd really like to be able to take a binary between versions, and it just work.
This is one area where Sun rocks. Any binary from any Solaris 2 build will just work on any later version. With some libraries, you can go back to the SunOS days (4.1.4, 4.1.3UL, etc). That's 15 years or so.
Re:and how many times... (Score:3, Interesting)
Can you run C++ applications compiled on Solaris 2 on any later version?
Compatibility is where Sun rocks, and it's also the rock that Sun is tied to. Most of the things that people hate about Solaris are kept that way because of their commitment to backwards compatibility. It becomes difficult to make significant changes if you focus on compatibility the way they do.
Linux and ot
Re:and how many times... (Score:4, Informative)
Re:and how many times... (Score:3, Interesting)
And if you statically link against libc on, I suspect, at least some other UN*Xes (Solaris being one of them), you'd better be prepared to handle the consequences as well. The same, I suspect, applies if you statically link against the kernel32/gdi32/user32 libraries on Windows, if you even can do so.
Thus, it's not even clear that this (problems with installing completely-statically-linked binar
Re:and how many times... (Score:3, Interesting)
Now that the C++ ABI is standardized, there is NO excuse for not having backwards- and forwards- compatibility for ordinary C and C++ executables linked against glibc.
The Linux kernel v2 ABI has been mostly backwards- and forwards-compatible since its first release. And Linux kernel guts change
Ahem. (Score:5, Informative)
Have you tried maintaining a compiler used in as many situations as GCC? (If not, you should try, before making complaints like this. It's an educational experience.)
We added a "select ABI version" to the C++ front-end in the 3.x series. If you need bug-for-bug compatability, you can have it.
Wanna know when this is gonna happen? Sooner, if you help [gnu.org].
Re:Ahem. (Score:3, Insightful)
Have you tried maintaining a compiler used in as many situations as GCC? (If not, you should try, before making complaints like this. It's an educational experience.)
This is exactly the ivory tower thinking that the poster is complaining about. You are overestimating the maintenance cost and underestimating the pain for your users. This is typical for open source: think that what is
sane error messages when using templates (Score:4, Insightful)
Favorite quote from the article (Score:5, Insightful)
More incompatibilities on the way? (Score:3, Insightful)
Re:More incompatibilities on the way? (Score:5, Insightful)
Re:More incompatibilities on the way? (Score:4, Insightful)
This is a troll, right? (Score:5, Interesting)
You've got to be fucking kidding me.
Have a look at the mailing list anytime somebody reports a bug, and the choice is between fixing the bug and changing the ABI. Watch the flamefests erupt.
(Watch them die down a few days later as one of the brilliant core maintainers manages to do both, with a command-line option to toggle between the default fixed version and the buggy old version.)
Wait a few months. See a new corner-case weird bug come in. Lather, rinse, repeat.
Such as...?
All the ones I can think of were GCC extensions long before they were officially added to the languages. In fact, their presence in GCC actually influences their presence in an official language standard, because that's what the standards bodies do: standardize existing practice.
The troublesome part is when the syntax as added to the language standard differs from the extension that was originally put in GCC. Then we have to choose which one to support -- because supporting both is often not feasible -- knowing that whatever choice we make, slashdot is going to whinge about it. :-)
Re:This is a troll, right? (Score:3, Interesting)
For inline assembly code, non-lvalue parameters can no longer be given an "m" constraint. It used to be possible to have a parameter like (x+1) and use the most general constraint "m", register or memory. This way gcc could leave x+1 in a register, or spill it onto the stack if it ran out of registers.
In gcc 4 you have to define a variable to hold x+1 and gcc is forced to write the value into memory, even if it could be left in a register.
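(A sketch of the pattern being described, x86 GNU asm syntax assumed; I haven't verified the exact diagnostics:)

    void example(int x)
    {
        // older gcc: a non-lvalue like (x + 1) could be handed to the constraint
        // and gcc decided where it lived:
        //     asm volatile ("" : : "m" (x + 1));

        // gcc 4: the operand has to be a named object the compiler can address,
        // forcing it into memory even where a register would have done:
        int tmp = x + 1;
        asm volatile ("" : : "m" (tmp));
    }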
From what I've heard (Score:4, Informative)
If you're interested, here's a (long) discussion [gentoo.org] which makes reference to many of the things coming in the new GCC.
Compiler (Score:3, Informative)
Don't all compilers convert a program's source code into binary instructions?
Re:Compiler (Score:4, Funny)
Nope.
Oh, did you mean all SOURCE CODE compilers?
See, the word compiler was around before computers, and is only synonymous with "source code compiler" to geeks like us.
Therefore in your attempt to be pedantic, you clearly were not being pedantic enough, thus the joke is on you.
Ha-ha...
Gentoo system rebuild! (Score:4, Funny)
(I'm especially excited by the possibility of random compiler incompatibilities!)
Re:Gentoo system rebuild! (Score:5, Funny)
We now actually detect when GCC is running on a Gentoo system, and will occasionally miscompile an inner loop, just to make you twitch. The biggest complaint we received from Gentoo users during the 3.x series was that GCC was too boring, so we threw this in to keep you on your toes.
Cheers!
Intel C compiler team (Score:4, Informative)
Performance on optimizations? (Score:5, Informative)
GCC is an incredibly versatile compiler, with frontends for C, C++, Java, Ada and Fortran provided with the basic install. 3rd party extensions include (but are probably not limited to) Pascal, D, PL/I(!!) and I'm pretty sure there are Cobol frontends, too.
They did drop CHILL (a telecoms language), which might have been useful now that telecoms are taking Linux and Open Source very seriously. As nobody seems to have picked it up, dusted it off, and forward-ported it to modern GCCs, I think it's a safe bet that even those interested in computer arcana aren't terribly interested in CHILL.
OpenMP has been discussed on and off for ages, but another poster here has implied that design and development is underway. OpenMP is a hybrid parallel architecture, mixing compiler optimizations and libraries, but I'm not completely convinced by the approach. There are just too many ways to build parallel systems and therefore too many unknowns for a static compile to work well in the general case.
Finally, the sheer size and complexity of GCC makes bugs almost inevitable. It provides some bounds checking (via mudflap), and there are other validation and testing suites. It might be worth doing a thorough audit of GCC at this point, so that the 4.x series can concentrate on improvements and refinements.
Major Features Dropped From GCC 4.0 (Score:5, Interesting)
- LLVM. Low Level Virtual Machine. This is a low level and generic pseudo code generator and virtual machine.
http://llvm.cs.uiuc.edu/ [uiuc.edu]
This sounded fabulous, and the project appears to be progressing well (it's at v1.4 now). If I understand correctly it is only politics that has kept it out of GCC 4. Can anyone shed more light on this?
- Compiler Server. Rather than invoking GCC for each TU (translation unit), you would run the GCC server once for the whole app and then feed it the TUs. This would make the compile process much faster and allow for whole-program optimization.
This would have been nice but perhaps they found better ways to achieve the same thing.
Re:Major Features Dropped From GCC 4.0 (Score:5, Informative)
Yeah, heavy on the "might".
Politics is what's preventing us from considering LLVM, let alone the long and torturous process of making the code work. The brutally short story is that GCC is operating under a certain restriction imposed by RMS since its inception, and LLVM -- or really, any good whole-program optimization technique -- would require us to violate that restriction.
Now, there are some of us (*waves hand*) who feel that RMS is a reactionary zealot in this respect, and would be more than happy to use the LLVM techniques, but we won't get into that.
Re:Major Features Dropped From GCC 4.0 (Score:5, Interesting)
Care to tell us what this oh so mysterious restriction is?
my guess (Score:3, Insightful)
No, here it is. (Score:5, Informative)
I didn't go into details because this has been covered elsewhere, and I'm tired of discussing it myself. But I didn't realize I would be accused of "uninformed slander". So. A bit of background info first.
Inside the guts of the compiler, after the parser is done working over the syntax (for whatever language), what's left over is an internal representation, or IR. This is what all the optimizers look at, rearrange, throw out, add to, spin, fold, and mutilate.
(Up to 4.0, there was really only one thing in GCC that could be properly called an IR. Now, like most other nontrivial compilers, there's more than one. It doesn't change the political situation; any of them could play the part of "the IR" here.)
Once the optimizers are done transforming your impeccable code into something unrecognizable, the chip-specific backends change the IR into assembly code. (Or whatever they've been designed to produce.)
Each of these transformations throws away information. What started out as a smart array class with bounds checking becomes a simple user-defined aggregate, which becomes a series of sequential memory references, which eventually all get turned into PEEK and POKE operations. (Rename for your processor as appropriate, or look up that old joke about syntactic sugar.)
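(To make that concrete, a rough sketch of how one line sheds its meaning on the way down; the lowered form is illustrative pseudo-IR, not real GCC output:)

    #include <vector>

    int element(const std::vector<int>& arr, std::size_t i)
    {
        // source level: a bounds-checked access on a "smart" container
        return arr.at(i);
    }

    // IR level, roughly:
    //     if (i >= arr.size) __throw_out_of_range();
    //     t = load(arr.data + i * 4);
    //     return t;
    //
    // object-file level: a compare, a branch, and a load from some address --
    // the "array class with bounds checking" is long gone by then.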
Now -- leaving out all the details -- it would be Really Really Useful if we could look at the PEEKs and POKEs of more than one .o at a time. Since the compiler only sees one .c/.cpp/.whatever at a time, it can only optimize one .o at a time. Unfortunately, typically the only program that sees The Big Picture is the linker, when it pulls together all the .o's. Some linkers can do some basic optimization, most of them are pretty stupid, but all of them are limited by the amount of information present in the .o files... which is nothing more than PEEK and POKE.
As you can imagine, trying to examine a pattern of PEEK and POKE and working out "oh, this started off as a smart array class with bounds checking, let's see how it's used across the entire program" is essentially impossible.
Okay, end of backstory.
The solution to all this is to not throw out all that useful abstract information. Instead of, or in addition to, writing out assembly code or machine code, we write out the IR instead. (Either to specialized ".ir" files, or maybe some kind of accumulating database, etc, etc; the SGI compiler actually writes out .o files containing its IR instead of machine code, so that the whole process is transparent to the user.) Later on, when the linker runs, it can see the IR of the entire program and do the same optimizations that the compiler did / would have done, but on a larger scale.
This is more or less what all whole-program optimizers do, including LLVM. (I think LLVM has the linker actually calling back into the compiler.)
The "problem" is that between the compiler running and the linker running, the IR is just sitting on the disk. Other tools could do whatever they want with it. RMS's fear is that a company would write a proprietary non-GPL tool to do all kinds of neat stuff to the IR before the linker sees it again. Since no GPL'ed compiler/linker pieces are involved, the proprietary tool never has to be given to the community. Company wins, community loses.
End of problem description. Begin personal opinionating.
It's a legitimate concern, but many of us feel that a) it's going to happen eventually, and b) we do all GCC users a disservice by crippling the tools merely to postpone an inevitable scenario. As usual, there's a wide range of opinions among the maintainers, but the general consensus is that keeping things the way they are is an untenable position.
Re:Major Features Dropped From GCC 4.0 (Score:3, Interesting)
Now, there are some of us (*waves hand*) who feel that RMS is a reactionary zealot in this respect, and would be more than happy to use the LLVM techniques, b
how LLVM would harm gcc (Score:5, Insightful)
LLVM can be used as a GPL bypass. If this were to become a problem, people would not feel as good about contributing to gcc.
Well, that's how RMS thinks anyway. Never mind that adding LLVM would enable some really neat stuff.
Can anyone elaborate on this LLVM v. RMS issue? (Score:3, Interesting)
It's supposedly a GPL bypass (Score:3, Informative)
(like preprocessed, but more work having been done)
If gcc can convert C to LLVM, and LLVM to native, then you could replace either half with something proprietary. You could add a proprietary middle step that optimized LLVM code.
Re:Can anyone elaborate on this LLVM v. RMS issue? (Score:5, Interesting)
Because of this history, RMS does not want to make it easier for companies to take from gcc without giving back. LLVM would provide a clean interface between portions of gcc, and that clean interface could be so abused.
Remember that gcc has Objective-C support only because NeXT was forced to abide by the GNU GPL. Large portions of gcc were contributed by volunteers under the terms of the GNU GPL; their work was donated with the expectation that others' work would be made available. Many would see LLVM as a betrayal of that expectation. The next version of the GPL may address this issue...
Re:Major Features Dropped From GCC 4.0 (Score:3, Informative)
It's a shame, since I think the compile server has major potential - and not only in terms of improving compile speed. However, there is still a
Re:GUI (Score:3, Informative)
Re:GUI (Score:4, Informative)
Likewise, there are several IDEs that can nicely handle a C++ project which uses GCC. Eclipse [eclipse.org] is maybe the best example of these.
Besides, do you really want "Must have GUI to cope with compiler" on your resume? ;-)
Re:GUI (Score:3, Informative)
That's: g++ -o myapp file1.cpp file2.cpp file3.cpp
Re:GUI (Score:4, Insightful)
Yeah, something that article does not bring up... (Score:4, Interesting)
Paul B.
Re:Shockingly better? (Score:4, Informative)
Actually, SSA trees probably count; they're new in GCC 4 (though the technique was invented in the early '90s). Look here [apple.com], scroll down to "Power Through Builds" for a list of improvements from SSA trees.
Of course, this claim may be due to no longer doing something shockingly inefficient.
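(For anyone wondering what SSA form actually is, a toy illustration; the versioned names in the comments are the usual textbook notation, not GCC's dump format:)

    int f(int a, int b)
    {
        int x = a;        // x_1 = a
        if (b > 0)
            x = x + b;    // x_2 = x_1 + b
        return x * 2;     // x_3 = phi(x_1, x_2);  return x_3 * 2
    }

    // With every value assigned exactly once, questions like "is this assignment dead?"
    // or "is this value a constant here?" get much cheaper to answer, which is what
    // the new GCC 4 tree optimizers build on.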
Re:If GCC can compile C++, then... (Score:5, Informative)
Not much. (Score:5, Interesting)
"gcc" will switch languages based on the filename extension. Many people compile C++ by calling "gcc".
"g++" suppresses that bit of logic and forces the language to be C++, which is useful if you have some C code that you want to be built as C++, or if you're feeding the C++ source from stdin (hence, no filename extension).
Linking C++, though, you want to use g++ instead of gcc, unless you really know what you're doing. The "gcc" driver doesn't know which libraries to pull in -- yes, this is something we'd like to change someday -- and the "g++" driver will correctly pull in libstdc++, libm, etc, etc, in the correct order for your linker and your system.
(Hands up, everybody who remembers when "g++" was a shell script!)
What?!? (Score:5, Funny)
Are you going to rob us? At first I thought that was your joke, but the more I think about it, the more I wonder if, being a part of the gcc team, you are inserting insidious code to look for credit card and bank account numbers on the disk during compiles and use steganography to embed them in executables; no one else would know about them, and all you'd need is a robot crawling download pages, looking for binaries with some magic code somewhere
The little bit of extra disk thrashing during the combined compile and search would never be noticed, and no one looking at compiled machine language ever wonders why it is so odd looking. They just assume it's because of some newfangled optimization.
My god you are devious rascals!
Re:Not much. (Score:4, Informative)
Execution speed.
The gcc/g++ driver's purpose in life is to rip through the command line, figure out what other programs need to be run (compiler, assembler, linker, etc), fork them all off -- possibly in a loop, if you've passed more than one file on the command line -- and clean up afterwards.
"gcc -> real-work-programs" or "g++ -> real-work-programs" is a much faster executation path than "sh parser -> gcc -> real-work-programs", especially when your makefile is repeatedly invoking g++.
Maintenance is not especially difficult; g++ isn't really a separate program. The difference between gcc and g++ is one or two extra .o files that get linked into the final executable. (Same for other language drivers that can't get by with plain "gcc", like the Java one.)
Wrong. (Score:4, Informative)
Even the most cursory search of the GCC mailing list archives would disprove this.
Gcc killed fortran (Score:5, Interesting)
Re:Gcc killed fortran (Score:3, Informative)
Anyone who mistook g95 for F95 would indeed be right in concluding fortran was a dated useless language.
Dude, g95 [sourceforge.net] isn't yet completed. Why the hell would one expect it to be fully-functional? Got an axe to grind about the g95/gfortran fork?
My favorite parts of Fortran are that one cannot overflow a buffer
Rubbish, you can do just the same stupid things that you can with C. The difference is that Fortran can implement arrays without the need for pointers, and most Fortran compilers support decent (
Re:Gcc killed fortran (Score:3, Interesting)
(1) the slow arrival of the Fortran 90 standard, which added dynamic memory allocation, a full set of control structures, user-defined types and free source form, and which otherwise remedie
Re:Gcc killed fortran (Score:3, Interesting)
In the end, Fortran95 is fantastic for scientific computing and the only other language that comes close is C. C is just as fast, but (omfg) you can make a lot of mistakes that will just suck the time from you. No thanks.
Re:Fortran??? (Score:4, Interesting)
This is a dramatic oversimplification, but from what I've read on the GCC lists, it appears to be how it works.
Re:Latest Fedora-development has gcc 4.0 (Score:3, Informative)
FC4 is not due for release for 4 months
If gcc development slips so does FC4
gcc 4 is ABI compatible with gcc 3.4
gcc 4 is ABI compatible with gcc 3.4
See above two statements.
This is Fedora releasing in June (possibly) not Red Hat's next release which will be a year and a half from now.
Yes, it's true (Score:5, Interesting)
It gets even more devastating on Fortran. Seems Intel has like the only good Fortran compiler in the world. That's part of the reason their chips do so well on SPEC: the FP part is all Fortran code and their compiler just rules at it.
If you Google around for compiler benchmarks you'll find a number of them, and virtually all show the Intel compiler dominating. One of the best, which I can't find a link for right now, was a test done by Tom's Hardware. They did MPEG-4 encoding on the P4 and found that it blew. Intel figured something was wrong, got the source and recompiled the program (it had been compiled with VC++ 6.0). The P4 almost quadrupled in speed (and got even faster with the SSE-optimised modes they added), and even the Athlons showed a near doubling in speed.
Re:Nitpicking (Score:5, Interesting)
No idea about MSVC, it doesn't build very good Linux binaries though anyways.
Re:How about support for older levels? (Score:3, Insightful)
A start would be sticking to ISO C. If you can possibly avoid it, steer clear of writing code targeted at a specific compiler.
Apple has been on the "leading" edge for a while (Score:5, Interesting)
10.3 shipped with GCC 3.3, before 3.3 was released.
10.4 looks to continue the pattern. Apple takes a snapshot of GCC, forks it 6-9 months before the OS ships, tweaks/tunes/optimizes GCC, builds and ships with that version of the compiler, and then re-submits its changes, so future GCC builds (especially the PPC ones) get all the goodies.
And the compiler has had 6-9 months of QA from Apple, which is as good as the amount of credit you give their QA department
for loop inside a printf -- gcc does it (Score:4, Interesting)