GNU is Not Unix | Programming | Technology

GCC 4.0.0 Released 680

busfahrer writes "Version 4.0.0 of the GNU Compiler Collection has been released. You can read the changelog or you can download the source tarball. The new version finally features SSA for trees, allowing for a completely new optimization framework." The changelog is pretty lengthy, and there are updates for every language supported, from Ada to Java, in addition to the usual flavors of C.
This discussion has been archived. No new comments can be posted.

  • Sweetness (Score:5, Informative)

    by kronak ( 723456 ) on Thursday April 21, 2005 @10:12PM (#12309561)
    Glad to see they are targeting the AMD64 architecture for improvements.
  • by Entrope ( 68843 ) on Thursday April 21, 2005 @10:17PM (#12309595) Homepage
    Static single assignment is a way the compiler can rewrite the code (usually for optimization purposes) so each "variable" being analyzed is only written once. This makes a lot of optimizations easier to do, since it eliminates aliasing due to the programmer assigning different values to the same variable. You'd probably learn these things if you would RTFA.
  • by hey hey hey ( 659173 ) on Thursday April 21, 2005 @10:18PM (#12309602)
    Static Single Assignment, optimization techniques. Try here [gnu.org] for more details.
  • Re:Great Timing (Score:4, Informative)

    by Rubel ( 121009 ) on Thursday April 21, 2005 @10:18PM (#12309605) Journal
    Although the version of GCC 4 that Apple ships is from last October:
    gcc version 4.0.0 20041026
  • by GillBates0 ( 664202 ) on Thursday April 21, 2005 @10:22PM (#12309626) Homepage Journal
    Wikipedia (as usual) has a nice article [wikipedia.org] about the Static Single Assignment (SSA) form.

    To put it simply, SSA is an intermediate representation where each variable in a block is defined only *once*. If a variable is defined multiple times, the target of any subsequent definitions of the same variable is replaced by a new variable name.

    SSA helps to simplify later optimization passes of a compiler (for example: eliminating unused definitions, etc.) as described in greater detail (with examples and flowcharts) in the linked article.
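
    A hand-worked miniature, if it helps (my own illustration, not GCC's actual output):

    int ssa_demo(int a) {
        int x = a + 1;   // SSA: x1 = a + 1
        x = x * 2;       // SSA: x2 = x1 * 2 (a fresh name for the new definition)
        return x + x;    // SSA: return x2 + x2
    }

    Each definition gets a fresh name, so a later pass can tell at a glance where every value came from, and that x1 is dead once x2 has been computed.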

    That's the SSA form in short. Now I need to ask somebody the difference between the standard SSA form and "SSA for trees".

  • Re:Great Timing (Score:2, Informative)

    by Anonymous Coward on Thursday April 21, 2005 @10:30PM (#12309672)
    It's not a snapshot though, it's a fork (from that date). Apple has done some development on it.
  • by Anonymous Coward on Thursday April 21, 2005 @10:35PM (#12309705)
    I can't wait to figure out what will (won't?) build with GCC 4.0.0. (One thing's for sure... JDK and OOo won't.)

    FYI: Red Hat has a guy working full-time on building OOo on GCJ. His blog [linux.ie]. Not that everything works straight out of the box. But it's not like nothing works either.

    (And from what I've heard, you can't expect it to work out of the box either. Sun's coders have done a terrible job, adding all kinds of dependencies on undocumented Sun-internal classes. So it probably doesn't work on Apple's JDK either, and that one is Sun-approved!)
  • Re:Lisp? (Score:5, Informative)

    by sketerpot ( 454020 ) * <sketerpotNO@SPAMgmail.com> on Thursday April 21, 2005 @10:37PM (#12309718)
    Try SBCL, CMUCL, GCL, or CLISP. They're all good Lisp implementations. SBCL and CMUCL compile to native code directly and are probably the fastest free CL implementations, GCL compiles via C (and therefore GCC), and CLISP has a bytecode interpreter.
  • Re:Great Timing (Score:2, Informative)

    by Tharkban ( 877186 ) on Thursday April 21, 2005 @10:41PM (#12309746) Homepage Journal
    Fedora Core 4 has literally been waiting for this.

    From February: http://lwn.net/Articles/124798/ [lwn.net]

    That article includes the question/answer:
    - Does that mean Fedora Core 4 will ship with a pre-release compiler?
    We're not *that* crazy. If GCC 4.0 is delayed, we will either revert, or slip.
  • by k98sven ( 324383 ) on Thursday April 21, 2005 @10:47PM (#12309794) Journal
    When they announced the release of Apple 10.4 "Tiger" I noticed this page: At that point I kinda figured gcc 4.0.0 had to be out by April 29th since Apple claimed they were using it for OS X.

    Well, you're wrong because GCC doesn't follow Apple's schedule, or anyone else's for that matter. Even a cursory glance at the GCC mailing list will tell you that.

    The reason Apple can promise this is that they're not actually shipping GCC 4. They're shipping their own fork of the GCC 4 code. It's probably about 99% the same code, but don't make the mistake of thinking they're shipping exactly what the FSF is distributing.
  • Re:Moving fast (Score:3, Informative)

    by Dink Paisy ( 823325 ) on Thursday April 21, 2005 @10:47PM (#12309796) Homepage
    Well, there was the whole stagnation on 2, leading to the egcs fork and eventual reconciliation with the FSF branch. So it's not really surprising that development is happening a whole lot faster now.
  • by hey ( 83763 ) on Thursday April 21, 2005 @10:50PM (#12309815) Journal
    % gcc -v

    Using built-in specs.
    Target: i386-redhat-linux
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-languages=c,c++,objc,java,f95,ada --enable-java-awt=gtk --host=i386-redhat-linux
    Thread model: posix
    gcc version 4.0.0 20050405 (Red Hat 4.0.0-0.40)
  • Re:Readme.SCO (Score:1, Informative)

    by Anonymous Coward on Thursday April 21, 2005 @10:51PM (#12309826)
    Oh wow, you noticed that now? They've had that file since GCC 3.3.0, and with that wording for half a year.

  • Re:Moving fast (Score:2, Informative)

    by adamjaskie ( 310474 ) on Thursday April 21, 2005 @10:53PM (#12309840) Homepage
    It was a joke. There WAS no Slackware 5. Or 6.
  • does this mean executables compiled with GCC 4 and optimized properly will run faster?

    In the long run, yes. But for now, I'd imagine that because the SSA form is so new to the GCC codebase, the GCC maintainers are waiting for latent bugs to surface and be fixed before squeezing the most wizardly optimizations out of the SSA tree.

  • Correct me if I'm wrong here, but most Linux distributions are still i386 right?

    You are. Most packagers have assumed for at least a couple of years that everybody has a 486 or better. Some are so bold as to assume you have a 586 or better. If you don't meet those requirements, you can compile it yourself (it's open source).
  • by Anonymous Coward on Thursday April 21, 2005 @10:57PM (#12309862)
    The parent poster is referring to the deprecation of Managed Extensions for C++ syntax in favor of C++/CLI (which is undergoing ISO standardization).

    While it is true the syntax has changed (much for the better: templates are now supported in managed C++ code, as are generics, keywords replace the ugly __gc, and more), support for the old syntax is still present in both the compiler (/clr:oldSyntax) and IntelliSense.

    However, you will be unable to mix new syntax and old syntax code in the same project without taking some penalties (IntelliSense will break, at the least). The designer will even spit out old syntax code when designing an old form or control.

    While the old syntax is definitely on its last legs, the VC++ team was very concerned about not screwing over those (early) adopters of C++ code for the CLR thus far.

    A good resource to read up more on the subject would be Herb Sutter's blog [msdn.com], Stan Lippman's blog [msdn.com], or any of the other VC++ team members' blogs.

    Take this from a former VC++ teammate who left during the Whidbey product cycle (posting AC since I've never bothered to get a slashdot account).
  • by Dink Paisy ( 823325 ) on Thursday April 21, 2005 @11:00PM (#12309885) Homepage
    As other people have said, SSA is static single assignment. It means that each variable in the program is assigned in only one place. SSA is for optimization, and is usually done in intermediate forms generated by the compiler, rather than in programs written by a human in common computer languages such as C, C++, Perl or assembly languages.

    Trying to recall my knowledge of optimizing compilers:

    SSA makes optimization easier, since it is obvious where a variable was assigned (since it was assigned in only one location) and what value it contains (since there is only one value being assigned to it). The complexity moves to register allocation, where there can be many more variables to allocate because of SSA. Register allocation is Hard, but doing an ok job is quite possible. Most optimizations are impossible unless you can prove various properties about the variables involved, which is often much easier with variables in SSA form.
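
    The subtlety is what happens where control flow merges: there SSA inserts "phi functions" to choose between the renamed copies. A hand-worked illustration (again, mine, not actual compiler output):

    int phi_demo(int a) {
        int x;
        if (a > 0)
            x = 1;       // SSA: x1 = 1
        else
            x = 2;       // SSA: x2 = 2
        return x;        // SSA: x3 = phi(x1, x2); return x3
    }

    The phi node is what preserves the single-assignment invariant even though the original variable was written on two different paths.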

  • by SuperQ ( 431 ) * on Thursday April 21, 2005 @11:00PM (#12309888) Homepage
    Fedora Core 4 is GCC 4.0
  • Not that this pertains to SSA trees, but...

    I wanted to know the differences in the optimization levels, so I made up a script[1] that compiled zlib and libpng with various optimizations on various archs and timed how long it took to run the test suite with 4 various images[2][3][4][5].

    I ran this test on a 2.8GHz Intel P4 HT proc, with 512MB Kingston HyperX DDR-400. Results are here [brantleyonline.com].

    The X axis is in the format "L-Z", where "L" is the libpng optimization level (3, 2, 1, 0, or s (size)), and "Z" is the zlib optimization level (3, 2, 1, 0, or s (size)). The Y axis is in seconds; for precise values, look at the graph data rather than the visual graph.

    The google logo[3] was useless; it's far too small to give me accurate results. However, if you compare the comic[5], the difference is 0.665 seconds... while this may not seem *huge*, in the case of a server where every tenth of a second counts, multiply the time by requests and compare the two. In the simple case of a libpng test case (opens a PNG image, re-writes it, compares the old to the new), optimizations matter, a lot.

    Of course, some code can be optimized more than others, and there is a large number of variables to take into account, but I'd hope that booting into single user mode and running this in a terminal should remove as much of that as possible.

    The thing is, very few people are going to notice the 2/3rds of a second optimizations give you, vs the hours spent compiling OpenOffice.


    Not linked due to bandwidth reasons (384kbps upstream = teh suck)
    [1]http://www.brantleyonline.com/sf.sh
    [2] Tranquility - @ gallery.artofgregmartin.com - down due to bandwidth, but it's a 1.4MB file (PNG format), 1600x1200, using a large variety of blues and blacks.
    [3]http://www.google.com/intl/en/images/logo.gif
    [4]Screenshot of OSX on dual monitors, large amount of windows, varied transparencies, 1440x576, 570.2kb. (not linked due to bandwidth)
    [5]http://www.applegeeks.com/comic_archive/viewcomic.php?issue=134
    Note: if you use the script, please, PLEASE, mirror the images LOCALLY and use that instead...
  • by mindriot ( 96208 ) on Thursday April 21, 2005 @11:02PM (#12309895)

    Hmm. Funny. Seems like perfect timing, in retrospect. I just gave a presentation on SSA (and on efficiently transforming code into SSA) today.

    Get the slides here [udel.edu].

    HTH

  • Re:Autovectorization (Score:4, Informative)

    by Anonymous Coward on Thursday April 21, 2005 @11:02PM (#12309897)
    The claim that most Linux distributions are built for i386 is mostly incorrect, and where it isn't, it's a half-truth.

    Fedora Core, for example, relies on the improved instructions for atomic operations found in 486 and newer processors, necessary for certain threading libraries. The rpm program itself requires a 586, if I remember correctly.

    Fedora Core also compiles all binaries optimized for the P4. P4 optimizations were chosen because they generally work just as well on Athlon processors, while Athlon optimizations are rather slow on a P4.

    Furthermore, for CPU-intensive applications such as many audio and video applications, CPU optimizations such as MMX and SSE are automatically activated at runtime if the CPU supports them.

    The 'i386' in the name should really be called 'x86'. Of course, then there's also 'i686' packages, which basically mean 'x86 processors that support the CMOV instruction'. That is also wrong, as there are i686 processors which do not support CMOV, such as certain VIA and Cyrix variants.

    CMOV is basically the only useful addition to the x86 instruction set since the i486 for general-purpose programs. And programs not fitting into that category already have hand-written asm for time-critical sections, which can take advantage of MMX, SSE, 3DNow, Altivec or VIS.
  • Re:works great! (Score:2, Informative)

    by Anonymous Coward on Thursday April 21, 2005 @11:03PM (#12309900)
    I'm posting right now with Firefox CVS code compiled tonight with GCC 4.0.0 20050418 (prerelease). I just finished building GCC 4.0 a bit ago and Firefox is now compiling. I've seen no problems with the GCC prerelease builds from the last few weeks. Firefox seems a little more responsive than with builds using GCC 3.4.3.
  • by Anonymous Coward on Thursday April 21, 2005 @11:03PM (#12309904)
    Nice troll. Good thing you know nothing of what you're talking about. Any knowledgeable Gentoo user who uses Gentoo for optimizations alone will set it to recompile overnight, so when they wake up (provided the system isn't a P-II and they aren't running KDE), it should be done. It usually is for me on new installs. For everyone else who uses Gentoo, GCC 4.0 is just another compiler. I use Gentoo because of the minimalism it provides. I also use it because I can strip it down to the bare minimum much more easily than most other RPM or DEB based distributions. I used to use Gentoo for the 'Rice' effect. Now I use it because of the package management system, and the fact that I can make a lean system which runs in 25MB of RAM, especially for those old NEC laptops I've been given by my school. And as for resources? Portage is automatically set to a high nice value, meaning that every other application, such as Firefox, has priority over it. If you're going to troll, at the very least know what you're talking about.
  • by Anonymous Coward on Thursday April 21, 2005 @11:12PM (#12309940)
    due to the fact that all its C++ shared libraries will now be 40% smaller thanks to the symbol visibility improvements (i.e., no runtime adjustment needed by the linker for internal-only functions). This translates into a significant speed improvement for all KDE code.
  • Re:Readme.SCO (Score:2, Informative)

    by kernel_dan ( 850552 ) <slashdevslashtty@NOSPam.gmail.com> on Thursday April 21, 2005 @11:18PM (#12309977)
    http://www.gnu.org/philosophy/sco/sco.html [gnu.org]

    SCO is so not a threat they decided to delete the page. (404 Not Found)

    oh, wait. here it is [fsf.org]
  • by IvyMike ( 178408 ) on Thursday April 21, 2005 @11:20PM (#12309986)
    There have been several good answers to your question, but if you're really new to compilers, you might want a little more context. Want a quick lesson in how compilers work? If you're willing to accept some gross oversimplifications, here's how most modern compilers work:

    1) Tokenize the input. For example, if you were compiling perl, you might choose to turn "print $foo" into three tokens: KEYWORD_PRINT, TYPE_SCALAR, and IDENTIFIER('foo'). The output is typically a stream of tokens. This step might be done by lex or flex.

    2) Parse the sequence of tokens using a set of rules called a grammar. For example, "TYPE_SCALAR" followed by "IDENTIFIER()" might match a rule to generate a variable called "$foo", and "KEYWORD_PRINT" followed by a variable means call the function print on the contents of the variable. The output is typically an abstract syntax tree (AST): a high-level data structure representing the program. This step might be done by yacc or bison.

    3) Match the AST against a series of rules to output the final code. This might actually be two steps; you might generate something into a low-level register transfer language (RTL) that looks very much like assembly, and then turn THAT into actual machine instructions.

    At each stage, you might choose to optimize the output. You might also insert optimization passes between steps. (For example, you might insert a pass between 2 and 3 to optimize the AST into a simpler AST.)

    Before SSA, GCC sort of skipped making any high-level AST; it used to go from parsing almost immediately into RTL. You can still optimize RTL, but since it's pretty low-level, it misses out on higher-level context, which made some optimizations really difficult.

    SSA is simply a form used for the high-level AST. Why SSA? It is a very nice form to optimize. Read the wikipedia article for more details on why SSA is particularly useful for some optimizations.

    Page 181 of this PDF file [linux.org.uk] from the 2003 GCC Summit explains the flow of the GCC compiler.
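
    If it helps make steps 1 and 2 concrete, here's a toy version for expressions like "1+2*3" (all names invented for illustration; real frontends use generated tables via flex/bison rather than hand-rolled code like this, and they build explicit AST nodes where this just evaluates as it parses):

    #include <cctype>
    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Token { char kind; int value; };            // kind: 'n' (number), '+', '*'

    static std::vector<Token> toks;                    // token stream from step 1
    static size_t pos;                                 // parser cursor for step 2

    static void tokenize(const std::string& s) {       // step 1
        for (size_t i = 0; i < s.size();) {
            if (isdigit((unsigned char)s[i])) {
                int v = 0;
                while (i < s.size() && isdigit((unsigned char)s[i]))
                    v = 10 * v + (s[i++] - '0');
                Token t = { 'n', v }; toks.push_back(t);
            } else {
                Token t = { s[i++], 0 }; toks.push_back(t);   // '+' or '*'
            }
        }
    }

    // step 2, grammar: expr := term ('+' term)*   term := number ('*' number)*
    static int term() {
        int v = toks[pos++].value;                     // a number token
        while (pos < toks.size() && toks[pos].kind == '*') {
            ++pos; v *= toks[pos++].value;
        }
        return v;
    }

    static int expr() {
        int v = term();
        while (pos < toks.size() && toks[pos].kind == '+') {
            ++pos; v += term();
        }
        return v;
    }

    int main() {
        tokenize("1+2*3");
        printf("%d\n", expr());                        // prints 7: '*' binds tighter than '+'
        return 0;
    }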
  • by Anonymous Coward on Thursday April 21, 2005 @11:25PM (#12310005)
    Trees are the earliest intermediate representation. Think "parse trees", annotated with all sorts of additional info. They are language-specific, but target-agnostic.

    If I understand it correctly, tree-ssa is just the parse tree, where each block has been converted to SSA. It's just the internal name for the SSA infrastructure.

    Of course, there's a link in the article summary to the GCC wiki page on tree-ssa. You might as well RTFA
  • Patent issues (Score:5, Informative)

    by plgs ( 447731 ) on Thursday April 21, 2005 @11:25PM (#12310006) Homepage
    "Unfortunately we cannot implement Steensgaard [pointer] analysis due to patent issues."

    They mean this patent [uspto.gov] owned by this company [microsoft.com]. What a surprise.

  • by vandan ( 151516 ) on Thursday April 21, 2005 @11:28PM (#12310028) Homepage
    For those who want to know what works and what doesn't: http://forums.gentoo.org/viewtopic-t-176085.html [gentoo.org]
  • by Anonymous Coward on Thursday April 21, 2005 @11:29PM (#12310040)
    Of course, if you had RTFM, you'd have an answer. Look:

    There have been many improvements to the class library. Here are some highlights:

    * Much more of AWT and Swing exist.
    * Many new packages and classes were added, including java.util.regex, java.net.URI, javax.crypto, javax.crypto.interfaces, javax.crypto.spec, javax.net, javax.net.ssl, javax.security.auth, javax.security.auth.callback, javax.security.auth.login, javax.security.auth.x500, javax.security.sasl, org.ietf.jgss, javax.imageio, javax.imageio.event, javax.imageio.spi, javax.print, javax.print.attribute, javax.print.attribute.standard, javax.print.event, and javax.xml
    * Updated SAX and DOM, and imported GNU JAXP

    Now, go and test it. Report your results
  • by 0x000000 ( 841725 ) on Thursday April 21, 2005 @11:32PM (#12310061)
    RTFM. It says right in the changelog that more swing has been added.

    "There have been many improvements to the class library. Here are some highlights:
    Much more of AWT and Swing exist.
    Many new packages and classes were added, including java.util.regex, java.net.URI, javax.crypto, javax.crypto.interfaces, javax.crypto.spec, javax.net, javax.net.ssl, javax.security.auth, javax.security.auth.callback, javax.security.auth.login, javax.security.auth.x500, javax.security.sasl, org.ietf.jgss, javax.imageio, javax.imageio.event, javax.imageio.spi, javax.print, javax.print.attribute, javax.print.attribute.standard, javax.print.event, and javax.xml
    Updated SAX and DOM, and imported GNU JAXP"

    http://gcc.gnu.org/gcc-4.0/changes.html [gnu.org]
  • Re:debian (Score:5, Informative)

    by Mongoose ( 8480 ) on Thursday April 21, 2005 @11:35PM (#12310085) Homepage
    Debian has had pre-releases for 4.0 for a while now. I guess you'd know that if you were a developer and actually used Debian. Hell I have mono 1.1.6 on Debian -- not many distros even have that yet. =)
  • You're right. (Score:3, Informative)

    by MrDomino ( 799876 ) <mrdomino.gmail@com> on Thursday April 21, 2005 @11:40PM (#12310116) Homepage
    Like, could it conceivably work faster and use up less in the way of resources if the different languages were separated into different compilers?

    The different languages ARE separated into different compilers.

  • by noda132 ( 531521 ) on Thursday April 21, 2005 @11:46PM (#12310153) Homepage

    Does all this extraneous language support make gcc bloated for single-language compilation?

    Short answer: No.

    Long answer: pretty much every compiler around goes through the following steps: (a) make an abstract syntax tree from the source code, (b) optimize it, and (c) output machine code. No matter what language you're using, these steps must be performed; a multi-language compiler simply provides many ways of doing (a). But since optimization happens after (a) anyway, it doesn't matter.

    That's oversimplifying, since if a compiler were tuned to a single language it could probably use a slightly simpler abstract syntax tree format. But the benefits would be slight; it's far more useful to support tons of languages at little extra effort than to drop all alternate languages for a minor performance gain.
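
    A compilable toy of that split, if it helps (every name here is invented for illustration; GCC's real interface between frontends and the middle end is its tree structures, not little classes like these):

    #include <cstdio>

    struct Ir { int value; };                       // stand-in for a language-neutral IR

    struct Frontend {                               // step (a): one of these per language
        virtual Ir parse(const char* source) = 0;
        virtual ~Frontend() {}
    };

    struct ToyFrontend : Frontend {                 // pretend parser for one language
        Ir parse(const char* source) { Ir ir; ir.value = 42; return ir; }
    };

    Ir optimize(Ir ir) { return ir; }               // step (b): shared by all languages
    void emit(const Ir& ir) { printf("%d\n", ir.value); }   // step (c): shared too

    int main() {
        ToyFrontend fe;                             // swapping in another Frontend subclass
        emit(optimize(fe.parse("...")));            // leaves (b) and (c) untouched
        return 0;
    }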

  • people use -fweb (Score:3, Informative)

    by r00t ( 33219 ) on Thursday April 21, 2005 @11:51PM (#12310181) Journal
    Having -finline-functions in -O3 is yucky. It tends to make code fall out of the cache, slowing things down. You get more cache misses.

    So the choice has been:

    • -O2 -fweb
    • -O3 -fno-inline-functions
    (adding -frename-registers or -fno-rename-registers too, as desired)
  • Re:Moving fast (Score:1, Informative)

    by Anonymous Coward on Friday April 22, 2005 @12:15AM (#12310295)
    > Apple wasn't working on GCC until version 3.

    Apple wasn't, but NeXT sure was. Objective-C was a hacked-up gcc.
  • Re:debian (Score:1, Informative)

    by Anonymous Coward on Friday April 22, 2005 @12:26AM (#12310355)
    GCC4 release candidates have been in sid for months. Recompiling everything with gcc4 is not even likely to happen for a good long while, even with distros like gentoo.
  • Re:Moving fast (Score:3, Informative)

    by bani ( 467531 ) on Friday April 22, 2005 @01:04AM (#12310526)
    autovectorization is nice, but it's like a peephole optimizer: it optimizes small bits of code.

    in terms of technology, SSA is far more important as it optimizes "the big picture".
  • Gentoo with GCC 4 (Score:2, Informative)

    by Zan Lynx ( 87672 ) on Friday April 22, 2005 @01:05AM (#12310528) Homepage
    Gentoo has had a GCC 4 ebuild for months now. I've been using it to build my system. If you really really want it, you need to set your package.keywords to -* and also unmask it in package.unmask.

    Then be prepared to switch between gcc 4 and 3.4 a lot, because many packages, especially multimedia packages, fail to build.
  • by drmerope ( 771119 ) on Friday April 22, 2005 @01:24AM (#12310627)
    > A lot of people have done benchmarks on this,
    > and found out that tuning for 686 with gcc only
    > provides 1-2% improvements in the best case.

    Uh, what? It is true that i386 code runs in the same ballpark on a Pentium 4, but this most definitely was not true for the Pentium III and is not true for the Pentium M. Those processors have the 4-1-1 rule, which is to say that you'll stall concurrent execution of instructions unless you pair a complex instruction (4) with two simple ones (1 and 1). This is because there aren't enough execution units to handle arbitrary mixes of instructions.
  • TR1 included! (Score:5, Informative)

    by Anthony Liguori ( 820979 ) on Friday April 22, 2005 @01:40AM (#12310684) Homepage
    I'm surprised no one's mentioned the inclusion of the C++ TR1. There's a ton of very cool new library features. Here are my two favorites:
    #include <tr1/functional>

    int foo(int x, int y) { return x * y; }

    using namespace std::tr1::placeholders;

    int main() {
        std::tr1::function<int (int, int)> f;
        std::tr1::function<int (int)> g;

        // f can be stored in a container
        f = foo;

        f(2, 3);

        g = std::tr1::bind(f, _1, 3);

        // this is equivalent to f(2, 3)
        g(2);
    }
    Not to mention the inclusion of shared_ptr, which provides a reference-counted pointer wrapper. This will eliminate 99% of the need to do manual memory management in C++. It's all very exciting; kudos to the G++ team on this!
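
    Since shared_ptr came up, a minimal taste (assuming your 4.0 libstdc++ ships the <tr1/memory> header; the same type later became plain std::shared_ptr):

    #include <tr1/memory>
    #include <cstdio>

    int main() {
        std::tr1::shared_ptr<int> p(new int(42));
        std::tr1::shared_ptr<int> q = p;            // reference count is now 2
        printf("%d %ld\n", *q, (long)p.use_count());
        return 0;
    }                                               // last copy dies here; the int is deleted for you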
  • Re:debian (Score:3, Informative)

    by thomasweber ( 757387 ) on Friday April 22, 2005 @01:47AM (#12310704)
    > Is Debian's release cycle truly so slow that what appears to be an honest curiosity is modded as a troll?
    As Debian sid is the unstable branch of Debian, the release cycle is pretty unimportant for gcc's inclusion. Looking at the experimental branch, you'll find gcc 4.0 already included: http://packages.debian.org/experimental/devel/ (which is probably an earlier release candidate).

    Sid itself contains a snapshot of gcc as of 20050319.
  • by As Seen On TV ( 857673 ) <asseen@gmail.com> on Friday April 22, 2005 @01:51AM (#12310720)
    Let's say that Apple has 99.9999999% of all desktop installs. Even then, almost none of them actually use GCC.

    Mac OS X itself is compiled with GCC 4. That was the point. Hence, all Mac users depend on GCC 4. That's 40 million and counting according to the latest figures.
  • Re:Autovectorization (Score:5, Informative)

    by thalakan ( 14668 ) <jspence AT lightconsulting DOT com> on Friday April 22, 2005 @01:53AM (#12310732) Homepage
    Wrong. The SSE instruction set includes several instructions for doing vector integer ops, such as average and multiplication. These things are a huge speed win even in "average" applications, as the game compiler developer noted above. If you don't believe me, fire up a profiler and look at how much time an office app or web browser spends doing rectangle intersection calculations and TrueType font math.

    Also, there aren't nearly enough people using MOVNTDQ to avoid polluting the instruction pipeline and dumping useless garbage into the system cache. If you're copying stuff into main memory and you aren't going to use it for a while, use MOVNTDQ to get a big speed win. If you do need it cached, use MOVDQA to get both caching and 128 bit transfers in one instruction! We all paid for these fancy schmancy new instructions in our processors, and it's extremely annoying to see programmers not use them.
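
    For anyone wanting to try it, here's a sketch using the SSE2 intrinsics for those two instructions (my own illustration; it assumes dst and src are 16-byte aligned and n is a multiple of 16):

    #include <cstddef>
    #include <emmintrin.h>

    void copy_nt(char* dst, const char* src, size_t n) {
        for (size_t i = 0; i < n; i += 16) {
            __m128i v = _mm_load_si128((const __m128i*)(src + i)); // MOVDQA: aligned 128-bit load
            _mm_stream_si128((__m128i*)(dst + i), v);              // MOVNTDQ: store, bypassing the cache
        }
        _mm_sfence();  // make the streaming stores globally visible before returning
    }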

  • Re:Objective-C++...? (Score:4, Informative)

    by As Seen On TV ( 857673 ) <asseen@gmail.com> on Friday April 22, 2005 @01:58AM (#12310747)
    Hmm. That's a pretty sound misrepresentation. You make it sound like our guys are spread too thin to work on Objective-C++. Not so.

    Fact is, demand for Objective-C++ from our developers is so close to zero as to be completely insignificant. Seriously, there's more demand for Python than Objective-C++. Honest to God, Python!

    All the ISVs who are still using C++ are building their apps with Core Foundation, not with Cocoa. That's fine. Core Foundation is a first-class application platform in Mac OS X. It's just so much more of a pain in the ass to use, developers are flocking away from C and C++ to reimplement in Objective-C, not even bothering with Objective-C++ along the way.

    So it's not that we don't have the time. It's that we don't see the point.
  • by Anonymous Coward on Friday April 22, 2005 @02:11AM (#12310787)
    What is it that possesses people to post authoritative comments on Slashdot, about subjects which they know little to nothing about?

    The changes between 386 and 686 in instruction sets alone make optimizing for specific platforms more than worth it. In fact, at the performance lab at [big chip manufacturer where I work as an electrical engineer], we have observed as much as a 43% speed gain in compiled applications using a platform-optimized compiler!

    Granted these optimizations would be compiler-specific, and were obviously not made using GCC. However, your assertion that "tuning for 686 with gcc only provides 1-2% improvements in the best case" is simply absurd. Please get a clue before you post nonsense.
  • by dvdeug ( 5033 ) <dvdeug AT email DOT ro> on Friday April 22, 2005 @02:24AM (#12310824)
    We're not shipping "a fork" of GCC 4. We're shipping GCC 4.0.0, which we compiled from source for Darwin 8.

    According to http://gcc.gnu.org/install/specific.html#powerpc-x-darwin [gnu.org],
    The version of GCC shipped by Apple typically includes a number of extensions not available in a standard GCC release. These extensions are generally for backwards compatibility and best avoided.

    i.e. you're using a forked version of GCC, and definitely not 4.0.0 out of the box.

    the whole notion of "a fork" runs 100% counter to all that open-source stuff

    No, actually: the ability to fork, and the wisdom to know when to, is very important to "that open-source stuff".
  • by Anonymous Coward on Friday April 22, 2005 @02:25AM (#12310833)
    In case you hadn't noticed, the "slow" part of running KDE is the start-up time. Once you actually get KDE loaded, the runtime speed is fine.

    -a GNOME/KDE agnostic fluxbox user
  • by Anonymous Coward on Friday April 22, 2005 @02:31AM (#12310854)
    You're leaving out a very important step - intermediate representation, which goes between your (a) and (b). Every compiler makes an AST out of a language, and then transforms that language-specific AST into a common IR. A lot of optimizations (dead code elimination, constant propagation, LICM) are performed on that IR before transforming it into architecture-specific code. (More optimizations, such as instruction scheduling, are performed at the arch-specific level.)
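
    A hand-worked miniature of two of those IR-level passes (my own illustration, not any compiler's actual output):

    int ir_demo(void) {
        int a = 2;
        int b = a * 3;       // constant propagation turns this into b = 6
        int unused = b + 1;  // dead code elimination: never read, so it's removed
        return b;            // after both passes this is simply: return 6
    }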
  • Re:Pascal (Score:3, Informative)

    by dvdeug ( 5033 ) <dvdeug AT email DOT ro> on Friday April 22, 2005 @02:34AM (#12310858)
    Something that I would really like to see integrated into GCC, sooner or later, is GNU Pascal.

    GNU Pascal supports building with a number of versions of GCC, and GNU Pascal development is done against released versions of GCC. The GCC developers want GNU Pascal as just another frontend: changes should go to GCC head, and there shouldn't be #ifdefs to get it to compile with different versions of the compiler backend. That's a large part of what has stopped GPC from being merged.
  • Re:Lisp? (Score:5, Informative)

    by Theatetus ( 521747 ) on Friday April 22, 2005 @02:58AM (#12310924) Journal

    Yes, GCL (formerly known as Kyoto Common Lisp). It doesn't need the assembler/linker part of the toolchain, so it's packaged separately. But I think it is "Part of the GNU Compiler Collection", for what that's worth, and it does depend on GCC.

  • Re:Why? (Score:3, Informative)

    by Theatetus ( 521747 ) on Friday April 22, 2005 @03:03AM (#12310935) Journal

    Man... I'd almost forgotten. The libc5/libc6 switch was horrendous, much worse than the gcc 2/3 switch.

    Another big problem was that a lot of the early proprietary Linux software vendors hopped on board right before both of those switches, so there were a boatload of closed-source apps requiring egcs and a patched libc5 or some other bizarre combination.

  • by prockcore ( 543967 ) on Friday April 22, 2005 @03:27AM (#12311013)
    I get 40 pages of "condition will always be false due to limited range of data type". Bleh! If it will always be false, throw it away! I need the check in there for when the type will be a signed int.

    Maybe I'm missing something, but this warning is GCC telling you that it's not going to compile that code... so you can't possibly "need" it, because GCC doesn't even compile it.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t i = 0;
        if (i > 300) {
            printf("hello");
        }
        return 0;
    }

    Compile that, and then see for yourself: strings a.out | grep hello will return nothing. gcc optimizes your useless check right out.
  • by mccalli ( 323026 ) on Friday April 22, 2005 @04:08AM (#12311148) Homepage
    Useful usenet posting about this here [google.co.uk]. Notice the date of the post: 1993.

    Cheers,
    Ian

  • by cuerty ( 671497 ) on Friday April 22, 2005 @04:23AM (#12311186)
    I think stuff like this is what made the GCC developers add the -pedantic flag. From the man page:

    -pedantic
    Issue all the warnings demanded by strict ISO C and ISO C++; reject all programs that use forbidden extensions, and some other programs that do not follow ISO C and ISO C++. For ISO C, follows the version of the ISO C standard specified by any -std option used.

    Valid ISO C and ISO C++ programs should compile properly with or without this option (though a rare few will require -ansi or a -std option specifying the required version of ISO C). However, without this option, certain GNU extensions and traditional C and C++ features are supported as well. With this option, they are rejected.

    -pedantic does not cause warning messages for use of the alternate keywords whose names begin and end with __. Pedantic warnings are also disabled in the expression that follows "__extension__". However, only system header files should use these escape routes; application programs should avoid them.

    Some users try to use -pedantic to check programs for strict ISO C conformance. They soon find that it does not do quite what they want: it finds some non-ISO practices, but not all---only those for which ISO C requires a diagnostic, and some others for which diagnostics have been added.

    A feature to report any failure to conform to ISO C might be useful in some instances, but would require considerable additional work and would be quite different from -pedantic. We don't have plans to support such a feature in the near future.

    Where the standard specified with -std represents a GNU extended dialect of C, such as gnu89 or gnu99, there is a corresponding base standard, the version of ISO C on which the GNU extended dialect is based. Warnings from -pedantic are given where they are required by the base standard. (It would not make sense for such warnings to be given only for features not in the specified GNU C dialect, since by definition the GNU dialects of C include all features the compiler supports with the given option, and there would be nothing to warn about.)


    "Pedantic" is an ironic name for it.
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday April 22, 2005 @05:11AM (#12311308) Homepage Journal
    BTW ... a language like Fortran, that doesn't have pointers at all
    Fortran 90/95 has pointers, just not in the C-like "it's an address of a chunk of memory" kind of way. Basically, they're pointers done properly, specifically designed to be limited in scope (they can only point at something declared to be a TARGET, for example), and they contain enough extra information to facilitate exactly the sort of optimisation you're talking about.
  • by Anonymous Coward on Friday April 22, 2005 @05:22AM (#12311330)
    Every build from every 3rd party DOES constitute a fork.

    Wrong.

    A fork is a branch from the primary codebase of ANY size.

    No, a fork is where a second set of developers take the original codebase at one well-defined point and use it as the base for their own project.

    OpenBSD and NetBSD are forks of the original BSD because they have taken the BSD code and turned it into their own divergent systems, both based on the same base but heading in different directions. A Linux kernel with one of the various non-Linus patchsets applied is not a fork, because the "non-standard" part is utterly dependent on the main kernel sources.

    Apple's GCC builds fall into the latter category, and therefore do not constitute forks.

    I repeat: Red Hat have not forked the Linux kernel, and therefore Apple have not forked GCC.
  • Re:Why? (Score:3, Informative)

    by Anonymous Coward on Friday April 22, 2005 @05:27AM (#12311342)
    Maybe I am missing something. If so, please enlighten me

    It frustrates me because I don't think older open source projects should just mysteriously break. For example, I have an older sublaptop which I got pretty cheaply on eBay. There is some software I would like to try on this machine, such as the last pre-Gecko version of Mozilla (which had a lot of Netscape 4 code in it), that I can't get to compile with GCC 3.4 because too much has changed since then.

    I don't like the fact that gcc breaks code that used to compile fine just two years ago. Another example: Abiword 1.x. Compiled just fine in 2002. Won't compile in 2005.

    The big advantage of open source licenses is that anyone can pick up some old open-source code which has been abandoned a few years back and make something useful out of it. Or would be able to, except for the fact that the people who write GCC continue to insist on breaking older code.

    I'm quite frustrated because I wasted hours trying to compile older code a few weeks ago and constantly hit a wall because of GCC's breakage.

    I think a well-written "How to fix things that don't compile" guide would be nice; I can usually make an educated guess and compile the code again, but obscure things like old-style variable-number-of-arguments code stump me.

    I think this is a general trend with some open source developers; they break code or configuration files without any regard for how much inconvenience they cause end users.
  • Re:TR1 included! (Score:3, Informative)

    by drac667 ( 878093 ) on Friday April 22, 2005 @05:48AM (#12311395)
    TR1 means Library Technical Report 1; it contains:
    • std::tr1
    • Utilities
      • shared_ptr
      • regular expressions
      • random numbers
    • Meta-Template-Programming
      • reference_wrapper
      • lambda binders and adaptors
      • type_traits
      • tuples
    • Containers
      • arrays
      • hash containers
    For more information see this: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1647.pdf [open-std.org]
  • Re:perl fork bug (Score:3, Informative)

    by hyc ( 241590 ) on Friday April 22, 2005 @06:23AM (#12311499) Homepage Journal
    ??? When an application calls fork() everything that happens next is up to the kernel. The definition of fork() is that the child gets an identical copy of everything. What does the compiler have to do with this?
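
    In miniature (just to make the point; nothing compiler-specific here):

    #include <cstdio>
    #include <sys/types.h>
    #include <unistd.h>

    int main() {
        int x = 42;
        pid_t pid = fork();                   // the kernel duplicates the whole process image
        if (pid == 0)
            printf("child sees x=%d\n", x);   // the child's own copy
        else
            printf("parent sees x=%d\n", x);  // the original, untouched
        return 0;
    }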
  • Re:Why? (Score:3, Informative)

    by gaj ( 1933 ) on Friday April 22, 2005 @08:12AM (#12311824) Homepage Journal
    So you're saying that the smart developers over at Microsoft can't manage to code up support for new language features themselves?

    While the standards compliance of pre-.net VC++ releases (and the C99 support of *any* VC++ release) indicates that might well be the case, I'm inclined to think that it is not.

    I think it is much more sensible to conclude from your post that you are a) confused about the GPL, b) intentionally misrepresenting the situation, c) not the sharpest knife in the drawer, or d) some combination of the above.

    Putting the gcc code under GPL doesn't put the language extensions under GPL.

  • Re:Any bechmarks (Score:2, Informative)

    by devnullify ( 561782 ) <ktims AT gotroot DOT ca> on Friday April 22, 2005 @08:15AM (#12311838) Homepage
    The OpenSSL speed suite is about 2.87% (averaged over all of the throughput tests) faster with gcc 4 (Debian prerelease) than gcc 3.3.5 on my Duron machine. There were no cases where gcc 4 was more than 0.5% slower than 3.3.5.
  • by Anonymous Coward on Friday April 22, 2005 @09:31AM (#12312423)
    Match the AST against a series of rules to output the final code

    While this is what gcc does, most modern compilers don't because it's a bit slow. gcc only uses AST matching because (formerly) it tried to combine optimization and code generation. Once you're in SSA you don't need the codegen to optimize, so most modern SSA-based compilers generate simple correct but hideously inefficient code directly from the AST. All the real intelligence about the machine ISA is in the optimizer.

    gcc is kind of mid-way to becoming modern, since it has a module that uses SSA but it's shoehorned into the old classical architecture.

    SSA is simply a form used for the high-level AST.

    This is incorrect. SSA is a form used for the low-level RTL, after the AST has been processed into pseudocode. It comprises nodes that correspond to RTL instructions. The AST, by comparison, represents the type of things that exist in the original language - declaration nodes, for example - which no longer exist in the RTL and have no SSA equivalents.
  • Re:debian (Score:2, Informative)

    by mrtom852 ( 754157 ) on Friday April 22, 2005 @09:38AM (#12312478)

    For the impatient...

    deb http://ftp.us.debian.org/debian/ ../project/experimental main contrib non-free

    apt-get -t experimental update
    apt-get -t experimental install g++-4.0 gcc-4.0
  • Re:Moving fast (Score:3, Informative)

    by Kazymyr ( 190114 ) on Friday April 22, 2005 @10:14AM (#12312818) Journal
    Yeah I know most people will chime in to say there never was a Slackware 5, but I happen to have a burned copy of it. :)

    [Slack 5 was the name it held in the beta or "current", which was later released as 7.0]
  • Re:MINGW? (Score:3, Informative)

    by cimetmc ( 602506 ) on Friday April 22, 2005 @10:27AM (#12312979)
    According to the following link http://www.mingw.org/MinGWiki/index.php/TheNextRelease [mingw.org] it might still take some time before there will be an official new MinGW version. However, unofficial versions tend to pop up quite quickly. For instance, the following web page http://www.thisiscool.com/gcc_mingw.htm [thisiscool.com] regularly provides MinGW binaries for GCC snapshots.

    As for the speed of GCC, the compilation speed is often a bit disappointing compared to commercial compilers. Dead code elimination is an optimization GCC has already done for a very long time.

    Marcel
  • Re:perl fork bug (Score:3, Informative)

    by tacocat ( 527354 ) <tallison1@twmi.[ ]com ['rr.' in gap]> on Friday April 22, 2005 @11:22AM (#12313562)

    It's not supposed to fork the file pointer.

    Read data in from a 'while()' statement and store it into an array. When it reaches a certain size, fork the process, kill the array and start filling it up from STDIN. Next time you fork your data is partially duplicated in Solaris and not at all duplicated under Linux.

  • Exactly, and that's what I want it to do, but it gets tricky when you're using templates...

    #include <limits>
    using std::numeric_limits;

    template <class T> class safeint {
    public:
        template <class FromT> safeint(FromT const number) {
            // I know this test will fail sometimes, it's just an example.
            if (number < numeric_limits<T>::min()) {
                throw "You stink!";
            }
        }
    };

    If FromT is an unsigned int, and T is an int, the check will never be true and should be optimized away. However, if it's the other way around, we need that check, or there can be overflow.

    Also, I'd like to have the warning only disabled for that one line of code, so that if I make a mistake elsewhere the compiler will help me find it.

    Visual C++ can do it, come on GNU!
  • Re:Moving fast (Score:2, Informative)

    by malxau ( 533231 ) on Friday April 22, 2005 @06:07PM (#12318265) Homepage
    I have a prerelease build of OS X built with gcc 2.7. I remember the Apple engineers wanted to ship the original OS X with gcc 3.x, but 'it just wouldn't make it' and they used 2.96 instead. OS X 10.0 and 10.1 were built on that compiler.
