
GCC 4.0 Preview

Reducer2001 writes "News.com is running a story previewing GCC 4.0. A quote from the article says, '(included will be) technology to compile programs written in Fortran 95, an updated version of a decades-old programming language still popular for scientific and technical tasks, Henderson said. And software written in the C++ programming language should run faster--"shockingly better" in a few cases.'"
  • C++ compiler (Score:5, Insightful)

    by pchan- ( 118053 ) on Monday March 14, 2005 @07:03PM (#11937971) Journal
    But will it compile C++ any faster? The difference between compile times of C and C++ files is staggering. Compiling Qt/KDE takes forever with gcc 3.x.
  • Mudflap (Score:5, Insightful)

    by SteelV ( 839704 ) on Monday March 14, 2005 @07:05PM (#11937990)
    "GCC 4.0 also introduces a security feature called Mudflap, which adds extra features to the compiled program that check for a class of vulnerabilities called buffer overruns, Mitchell said. Mudflap slows a program's performance, so it's expected to be used chiefly in test versions, then switched off for finished products." - from the article

    I really love this feature, it will probably cut down on a great deal of problems. My only concern is that some devs will think running it all the time is OK (read: "Mudflap slows a program's performance"), so hopefully that's not the case.

    More detailed information on the mudflap system can be found here [gnu.org].
  • by Zapman ( 2662 ) on Monday March 14, 2005 @07:07PM (#11938019)
    And how many times will they break ABI, API and library compatibility in THIS major release? The count stands at 4 for the 3.x series, maybe higher.

    The biggest challenge with binary compatibility across Linux distros is the GCC release (followed by the glibc releases, which live in the same ivory tower). I realize that things have to change, but I wish that they would not break compatibility between versions quite so often...

    I'd really like to be able to take a binary between versions and have it just work.

    This is one area where Sun rocks. Any binary from any Solaris 2 build will just work on any later version. With some libraries, you can go back to the SunOS days (4.1.4, 4.1.3UL, etc.). That's 15 years or so.
  • by kharchenko ( 303729 ) on Monday March 14, 2005 @07:09PM (#11938043)
    I wish the compiler would output sane error messages when compiling code that uses a lot of templates (i.e. the STL). At least fixing it so that the line numbers are shown during debugging would be a huge improvement!
  • by devphil ( 51341 ) on Monday March 14, 2005 @07:11PM (#11938069) Homepage


    It's not too much of a stretch to say GCC is as central an enabler to the free and open-source programming movements as a free press is to democracy.
  • by Anonymous Coward on Monday March 14, 2005 @07:11PM (#11938072)
    The protection from buffer overruns is valuable enough that perhaps it is worth including all the time. After all, who knows what vulnerabilities lurk after you "turn off" mudflap?

    Besides, it might just be automating the addition of the same code that we would need to put in to fix buffer overrun vulnerabilities.

    This is one case where I think it's worth "wasting" a small amount of performance (except perhaps in routines that need to be highly optimized) to give added security. Sure beats ray-traced-on-the-fly desktop widgets, or something, which you KNOW we're going to see advertised in another decade. ;)
  • Re:Mudflap (Score:2, Insightful)

    by RetroGeek ( 206522 ) on Monday March 14, 2005 @07:11PM (#11938076) Homepage
    "I really love this feature, it will probably cut down on a great deal of problems."

    It will create a false sense of security.

    During development and testing, problems are found and hopefully fixed. It is the problems that are NOT found that create vulnerabilities.
  • Re:Mudflap (Score:3, Insightful)

    by bill_mcgonigle ( 4333 ) * on Monday March 14, 2005 @07:13PM (#11938096) Homepage Journal
    "My only concern is that some devs will think running it all the time is OK"

    For some users and some classes of applications, it will be OK. Sometimes security is more important than performance, and you can't imagine the weird stuff your code sees when it's in the customers' hands.
  • by gvc ( 167165 ) on Monday March 14, 2005 @07:16PM (#11938133)
    The gcc team seem to have no respect for legacy code. Incompatible syntax changes and incompatible dynamic libraries make me dread every new release.
  • Re:GUI (Score:4, Insightful)

    by Daedius ( 740129 ) on Monday March 14, 2005 @07:23PM (#11938194)
    First, you are missing an ideology shared by many open source projects: build a very powerful, optimized tool that does not bind itself, its users, or any other projects that want to build on top of it to any particular GUI. Most programs do this by offering extremely flexible command-line interfaces, exposing library interfaces, or simply being a library for external programs to reference. You do have a point, however, that good IDEs are lacking in the Linux community. I don't think any of us can deny the tremendous effect of an extremely good IDE (Eclipse for Java, for example). I think one of the biggest obstacles the open source community faces, for people just picking up Linux and wanting to program, is the lack of a good IDE. Honestly, when I'm programming in .NET on Visual Studio 2003, I feel like I'm in heaven. I only wish I could have the same kind of luxury within Linux (especially with the Mono project!). But as with all things, it takes contribution.
  • by ari_j ( 90255 ) on Monday March 14, 2005 @07:24PM (#11938206)
    It's been my experience that they only have a lack of respect for incorrect code. If your legacy code is incorrectly-written, then you assumed the risk to begin with, says me. Write to the standard.
  • by Doc Ruby ( 173196 ) on Monday March 14, 2005 @07:26PM (#11938225) Homepage Journal
    C++ will be "much faster", so it's now "much slower" than it could be. What about the comparative efficiency of the Java bytecode it will generate? If the Java compiler is already closer to its maximum theoretical efficiency, doesn't that mean that Java code might be faster than C++? If Java's efficiency isn't as close, doesn't that mean that any lower performance compared to C++ executables could be overcome by developing the Java compiler further? In fact, doesn't the fact that even C++ compilers as mature as GCC can, at this late date, still get big performance increases from better engineering mean that C++/Java performance comparisons are really more about the compiler and its per-language optimization than about the languages themselves?
  • by Gr8Apes ( 679165 ) on Monday March 14, 2005 @07:27PM (#11938240)
    Having just looked at C++ for the first time in 5 years, I must say: yuck! Namespaces in the STL are what drove me from C++ in the first place. I'm glad they got the STL to work, but namespaces are still ungodly ugly, and their pervasiveness within C++ makes what used to look like an elegant language an ungainly, overloaded behemoth of a Pascal offspring; compiling it pretty much brought down a decent SGI machine of the time.

    I'd rather use straight C at this point than C++ with the STL. Java is even more elegant, perhaps one reason it eclipses C++ in the general business environment. (OK, so there are the generally accepted benefits of built-in memory management to prevent neophytes from stubbing their toes and bringing down the house....) But with the JDK improving performance with every release, and Java gaining many of its missing features in the 1.5 release (OK, so some are compile-time only), it's easy to see why Java continues to be a favorite of developers.
  • by gvc ( 167165 ) on Monday March 14, 2005 @07:27PM (#11938242)
    Yes, you've captured their attitude precisely.
  • Re:Screenshots! (Score:3, Insightful)

    by Merk ( 25521 ) on Monday March 14, 2005 @07:51PM (#11938491) Homepage

    Funny, but it does highlight something that annoys me. Make/gcc output.

    For the last few weeks I've been compiling a set of apps that's about 5x bigger than just the Linux kernel (it includes the kernel too). Watching the make/gcc output scroll by I've decided one thing: I *hate* it.

    GCC itself is fine. It only does something when there are errors. Make, on the other hand, spits out every command it runs and all kinds of things that I really don't care about.

    Without the bloat of a full-fledged IDE, is there such a thing as a make-wrapper GUI? Here's what I'd want:

    • Don't show me what commands are being run by default. 95% of the time, I don't care what commands make is running, I just want to know what went wrong. On the other hand, don't just throw out the output. If something goes wrong, I might need that output.
    • Show me errors, but give me context. It's great to know that there's an "undefined reference to 'ide_xlate_1024'", but what's the context leading up to that? What directory was it in? What command caused that error? What was the first error in a series? What was the environment? What were the commandline args?
    • If I *do* want to see the output of a command, don't just give me the raw commandline. When the commandline is 800 characters long, parsing all the switches with a Mark 1 eyeball is too damn difficult.
    • Syntax highlighting! -I/dir could look different from -DEMBED which could look different from -Wall. Errors could be highlighted too: "Undefined reference" should look different from "warning foo redefined", which could look different from "conditional is always true due to limited range of operands" (or however it's phrased)

    I'm sure I could come up with some more enhancements, but that would really make me happy. I know the 2.6 kernel has gone a few steps in this direction but it is far from enough.

  • by SuperKendall ( 25149 ) * on Monday March 14, 2005 @08:10PM (#11938690)
    ...would that not mean the speed other programs run at reaches "faster" more quickly?
  • by Aardpig ( 622459 ) on Monday March 14, 2005 @08:32PM (#11938899)

    A start would be sticking to ISO C. If you can possibly avoid it, steer clear of writing code targeted at a specific compiler.

  • by iabervon ( 1971 ) on Monday March 14, 2005 @08:37PM (#11938940) Homepage Journal
    Each of the APUs in a Cell has SIMD instructions. Also, the PU handles dispatch, so it's not all that much like traditional SMP from the compiler's point of view.

  • by Anonymous Coward on Monday March 14, 2005 @08:47PM (#11939017)
    Um... right.

    You realize that, once MudFlap detects a possible buffer overrun error in the code, you can fix it, right? And then, it's not there any more? That's what MudFlap does. It CHECKS for buffer overruns. It's up to your lazy ass to fix it. Leaving it in a production version is worthless... unless your end users want to be informed that the programmer was too lazy to fix buffer overruns that he knew about. It's a 5% slowdown, or a 0% slowdown for fixing your damn code and not using mudflap in a production version. NOW which would you choose?
  • by Kihaji ( 612640 ) <lemkesr AT uwec DOT edu> on Monday March 14, 2005 @09:14PM (#11939259)
    Real coders use nuttin' butt C, ADA, and ASSembly.

    Funny, I thought real coders used the right tool for the job, or is that real smart coders?

  • Re:Mudflap (Score:5, Insightful)

    by idlake ( 850372 ) on Monday March 14, 2005 @09:24PM (#11939348)
    "C doesn't have screwed-up pointer semantics. It is perfect for what it does. You probably just don't understand it. Where are you getting the 3 to 5 factor? Anything to back that up? And the few percent is from what language?"

    I have been using C since 1980. I have seen dozens of attempts to fix C semantics since then. I published some papers on it myself. It can't be done efficiently. The best you can do is something like Mudflap, Purify, Cyclone, or Valgrind.

    Where does the factor of 3-5 come from? From the Mudflap paper on the Mudflap web site--it has benchmarks.

    Where do the "few percent overhead" come from? From comparing the performance of Pascal, Java, and Eiffel code compiled with safety on and off.

    And you know what the real kicker is? Not only do C pointer semantics make it impossible to generate efficient runtime safety checks, they even inhibit important optimizations when no safety features are enabled. And because C programmers then have to jump through all sorts of hoops to achieve some kind of safety in the midst of this chaos, the software ends up being bloated, too. So, C is not only bad for efficient safe code, it is bad for efficient code of any form.

    "I am getting sick of C-hating posts like this one getting modded up. It seems to be happening all the time lately."

    I'm getting sick of the fact that ignorant fools like you have been holding back progress in software systems for a quarter of a century. It's even more annoying that you try to portray your ignorance and inexperience as some kind of principled stance. C was good for what it was 30 years ago: an on-board compiler for writing small, low-level programs on machines with very limited memory. C made a decent PDP-11 compiler for V7 UNIX, and it was usable on CP/M and MS-DOS. I have fond memories of it in those environments.

    "I'm starting to meta-mod again."

    You do that. If you join forces with enough other idiots, you will probably be able to condemn us to another quarter century of bad pointers, buffer overruns, and bloat.
  • by MattWillis ( 16246 ) on Monday March 14, 2005 @09:37PM (#11939464)
    Alas, from experience I can attest that usually this is your own fault for writing nonstandard code targeting some particular feature of gcc. The best thing you can do for your code is to make sure it compiles on multiple compilers. Listen to your compiler's warnings; you ignore them at your peril.

  • Re:Yes, it's true (Score:1, Insightful)

    by Anonymous Coward on Monday March 14, 2005 @09:47PM (#11939528)
    The wicked fast Fortran compiler isn't because Intel developed it internally, but because they acquired the old Alpha compiler group from Compaq. Same with the newer C compilers. There are many reasons why Alpha was the fastest in its day...
  • by r00t ( 33219 ) on Monday March 14, 2005 @10:21PM (#11939778) Journal
    It's a pretty far-fetched idea, but...

    LLVM can be used as a GPL bypass. If this were to become a problem, people would not feel as good about contributing to gcc.

    Well, that's how RMS thinks anyway. Never mind that adding LLVM would enable some really neat stuff.

  • Re:GUI (Score:2, Insightful)

    by (void*)cheerio ( 443053 ) on Monday March 14, 2005 @10:25PM (#11939809) Homepage
    For me, *NIX is an IDE.

    MyIDE = xterms + vim + grep + make + svn + man + the browser + diff + io redirection +....

    It's not as polished as an IDE, not as cool. But you get to organize it any way you want.

    And besides, considering most of my time is spent manipulating text, any IDE that doesn't have vim integrated in it is useless, at least to me.

    (NB: if you like, you can subst emacs for vim in the above)
  • my guess (Score:3, Insightful)

    by jbellis ( 142590 ) <jonathan@carDEBI ... com minus distro> on Monday March 14, 2005 @10:27PM (#11939818) Homepage
    LLVM is written in C++, and RMS has dictated "Only C shalt thou write for gcc."
  • Re:C++ compiler (Score:3, Insightful)

    by martinde ( 137088 ) on Monday March 14, 2005 @10:37PM (#11939906) Homepage
    > GCC 4.0 features an entirely new C++ parser. The new parser is tremendously faster than the one in GCC 3.3

    [snip]

    But it's the same parser as g++ 3.4. It is faster (and fixes bugs) compared to g++ 3.3, but calling it "tremendously faster" seems a bit of a stretch.
  • by MighMoS ( 701808 ) on Monday March 14, 2005 @10:38PM (#11939915) Homepage
    I think it comes down to C vs. sh. If it's more complex than sh can handle, start it up with C. Of course, I don't maintain large projects; that's my $0.02.
  • Re:Ahem. (Score:3, Insightful)

    by marcelk ( 224477 ) on Monday March 14, 2005 @10:41PM (#11939937)
    "I realize that things have to change, but I wish that they would not break compat between versions quite so often..."




    Have you tried maintaining a compiler used in as many situations as GCC? (If not, you should try, before making complaints like this. It's an educational experience.)


    This is exactly the ivory-tower thinking that the poster is complaining about. You are overestimating the maintenance cost and underestimating the pain for your users. This is typical of open source: thinking that what is good for the developer justifies major compatibility issues for everybody else.

  • Should one respond to half-truths and flame-bait from Anonymous Cowards?

    Saying that distcc is "less error prone" is a meaningless statement since you're comparing distcc against an unfinished project. The compile server can work "even when preprocessor tricks are used" - give us credit for having thought about the issues, and having come up with solutions, albeit partially implemented and not necessarily optimal.

    Your compile server makes a lot of assumptions that many popular projects break.
    So what? As long as many projects can benefit from it. If some projects benefit, that would encourage other projects to clean up their header files, which would be a good thing in itself. (A side benefit of the compile server is that it encourages clean design.)

    I agree distcc is far simpler, and it will be challenging to engineer a compile server that can detect and recover from header files that aren't "clean", without the checks taking so much time we lose most of the benefit. It's essentially research, and there is no guarantee that it would justify the investment needed. But it does have good potential.

    Note there are some limitations for distcc. First, of course it assumes you have multiple idle machines you can spread your compiles to. That may not be the case in a home environment or when travelling. Second, shipping pre-processed source code all over the place is quite expensive. Distcc doesn't save you time in preprocessing, optimizing, or code generation. All it helps with is parsing and semantic analysis, so the best it can give you is a modest constant-time improvement. By this I mean that if you have M files that include N header files each, the compile-time with distcc is O(M*N), but with the compile server it could potentially be O(M+N).

  • Re:Mudflap (Score:3, Insightful)

    by Cryptnotic ( 154382 ) * on Tuesday March 15, 2005 @12:34AM (#11940565)
    "I'd like one of the newer languages to have the power of assembly/C/C++ while still maintaining all the grace of memory safety and management."

    Pretty much any of those newer languages (I assume you mean Python, Ruby, Lua, et cetera) provide a C API for adding module interfaces (useful for doing fast calculations, access to C libraries, low-level operating system or device communications, et cetera). You shouldn't be afraid of mixing languages. It's the only way to really get the best of both worlds.

  • Re:Mudflap (Score:3, Insightful)

    by idlake ( 850372 ) on Tuesday March 15, 2005 @01:05AM (#11940726)
    "I just googled for mudflap performance hit and got nothing"

    You don't need to Google, you just need to follow the links at the top of the story!

    "How do C semantics make it hard for the compiler to perform checks on buffers?"

    Because, for practical purposes, C pointers have to be naked memory addresses that can point into the middle of an arbitrarily sized chunk of memory. That means that, unlike implementations of other languages, a C implementation cannot simply get information about bounds or types by looking up data at a fixed offset relative to the pointer.

    "There are plenty of ways to prevent buffer over-runs these days."

    Yes, like using a decent language with a minimum of built-in error checking and a sensible type system. We have had them, oh, for about half a century. And nowadays, you can even choose among a bunch of mainstream languages like that: Java, C#, VisualBasic, OCAML, and Python, to name just a few.

    "Do you have any links at all to contribute?"

    I have no idea what your background is; you might be a high school kid who writes viruses in C in his spare time and thinks that C is the k00lest thing since Britney Spears. But if you seriously want to learn about this sort of thing, look at the Cyclone papers (you can find them on Google) and check their references, as well as references to them in the literature. You'll reach a large collection of papers on trying to make C safe. Pick and choose according to your interests.
  • Re:Ahem. (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 15, 2005 @01:29AM (#11940821)
    You can have whatever compiler you want if you pay. Oh wait, you didn't? Then what are you complaining about? As has been said about other pieces of free software: if it breaks, you get to keep both pieces.
  • Re:Ahem. (Score:3, Insightful)

    by Anonymous Coward on Tuesday March 15, 2005 @01:30AM (#11940826)

    "You are overestimating the maintenance cost"

    How come you can judge the maintenance cost better than a GCC developer?

  • Re:C++ compiler (Score:3, Insightful)

    by multipart ( 732754 ) on Tuesday March 15, 2005 @04:55AM (#11941482)
    Actually, a substantial part of the new C++ parser in 3.4 was rewritten again for 4.0.
  • Re:Ahem. (Score:3, Insightful)

    by IamTheRealMike ( 537420 ) on Tuesday March 15, 2005 @05:21AM (#11941552)
    He can't, but he can probably judge the user cost better than a GCC developer can. And it's a huge cost.
  • Re:C++ compiler (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 15, 2005 @05:54AM (#11941670)
    A make bootstrap for GCC performs a three-stage compile:

    1. Using the system-provided C compiler, compile a very simple version of GCC. The system-provided compiler may not be GCC; it may not even be fully ANSI compliant.
    2. Using the minimal GCC built in stage 1, build a complete version of GCC.
    3. Rebuild GCC using the complete GCC built in stage 2. This is done to get the best possible optimisation; the final GCC binary can be optimised by GCC itself.
  • by Anonymous Coward on Tuesday March 15, 2005 @07:37AM (#11941981)
    It's "they that can give up runtime safety to obtain a little boost in performance..."
  • Re:Mudflap (Score:3, Insightful)

    by marcosdumay ( 620877 ) <marcosdumay&gmail,com> on Tuesday March 15, 2005 @10:07AM (#11942720) Homepage Journal

    "The best you can do is something like Mudflap, Purify, Cyclone, or Valgrind."
    Yes, it is, but C has other strengths that make it worth using.

    "Where do the 'few percent overhead' come from? From comparing the performance of Pascal, Java, and Eiffel code compiled with safety on and off."
    But all those languages take a performance hit compared with C even with safety off.

    "And you know what the real kicker is? Not only do C pointer semantics make it impossible to generate efficient runtime safety checks, they even inhibit important optimizations when no safety features are enabled. And because C programmers then have to jump through all sorts of hoops to achieve some kind of safety in the midst of this chaos, the software ends up being bloated, too."
    One needs those optimizations when one has no control over one's pointers. But a well-written C program can be just as fast without them. Also, a good design can avoid the bloat without compromising security, and can confine the optimizations to the places where safety can be switched off.

    "I'm getting sick of the fact that ignorant fools like you have been holding back progress in software systems for a quarter of a century."
    I am designing a very speed-sensitive library. Which "modern" language do you recommend? In what language can I keep my arrays on the stack, as I do in C, for better speed? And in what language can I create an entire (less powerful but faster) memory-management library to avoid a bottleneck, like I did in C (C++, actually)? Think twice before you call most of the people out there idiots. Obviously there are programs that are worth the pay-off of using an easier language, but before you ban C, try to realize that there are applications where it isn't worth it. And since some of them are the compilers of your "modern" languages, I don't see how supporting C delays their development.

    And before you try to argue: I believe in the best tool for the job. That is why I currently have 4 projects: one in C++, one in Java, one in Perl (learning), and one in Bash script. I am not a blind C zealot.

  • Re:Ahem. (Score:3, Insightful)

    by Per Abrahamsen ( 1397 ) on Wednesday March 16, 2005 @04:22AM (#11951385) Homepage
    As devphil said, GCC supports bug-compatible ABIs. The GCC people are not the ones who should judge the user cost; the distributors are. They are the people in contact with the users, and they should select which ABI version to use.
