GNU is Not Unix Programming Technology

GCC 4.0.0 Released 680

Posted by CowboyNeal
from the funrolled-loops dept.
busfahrer writes "Version 4.0.0 of the GNU Compiler Collection has been released. You can read the changelog or you can download the source tarball. The new version finally features SSA for trees, allowing for a completely new optimization framework." The changelog is pretty lengthy, and there are updates for every language supported, from Ada to Java, in addition to the usual flavors of C.
This discussion has been archived. No new comments can be posted.

GCC 4.0.0 Released

Comments Filter:
  • Moving fast (Score:4, Interesting)

    by slapout (93640) on Thursday April 21, 2005 @10:10PM (#12309554)
    Is it just me or did the jump from version 3 to 4 happen a lot faster than the one from 2 to 3?
  • Lisp? (Score:4, Funny)

    by ari_j (90255) on Thursday April 21, 2005 @10:10PM (#12309555)
    Yeah, but does it have a Common Lisp compiler yet?
  • by ribo-bailey (724061) on Thursday April 21, 2005 @10:11PM (#12309558) Homepage
    of the 2.95 -> 3.0 transition.
    • Why? (Score:5, Funny)

      by Mr. Underbridge (666784) on Thursday April 21, 2005 @10:28PM (#12309658)
      of the 2.95 -> 3.0 transition.

      Did you not get pleasure out of things being errors in 3.0 that weren't even warnings in 2.95?

      I'm sure all the contractors loved it! ;)

      GCC motto: "What code can we break today?"

      • Misplaced blame (Score:5, Insightful)

        by tepples (727027) <{tepples} {at} {gmail.com}> on Thursday April 21, 2005 @10:35PM (#12309708) Homepage Journal

        Did you not get pleasure out of things being errors in 3.0 that weren't even warnings in 2.95?

        At least the maintainers of the ISO C++ standard did.

        GCC motto: "What code can we break today?"

        Blame the standards committee, not the GCC maintainers.

        • by Screaming Lunatic (526975) on Thursday April 21, 2005 @11:38PM (#12310108) Homepage
          Blame the standards committee, not the GCC maintainers.

          Insightful? Jesus eff-ing Christ. Now the slashbots don't like standards. I bet you wouldn't be presenting the same argument if this discussion was about the transition from MSVC 6.0 to 7.0/7.1.

          • Re:Misplaced blame (Score:4, Insightful)

            by Alioth (221270) <no@spam> on Friday April 22, 2005 @04:37AM (#12311231) Journal
            So basically the GCC developers are damned if they do, damned if they don't - if they fix their bugs to make their compiler ISO C++ compliant, they are whined at for following the standard, if they preserve the bugs, they are whined at for not following the standard!

            Personally, I prefer GCC to be standards compliant.
          • by Screaming Lunatic (526975) on Friday April 22, 2005 @04:55AM (#12311278) Homepage
            Blame the standards committee, not the GCC maintainers.
            Insightful? Jesus eff-ing Christ. Now the slashbots don't like standards. I bet you wouldn't be presenting the same argument if this discussion was about the transition from MSVC 6.0 to 7.0/7.1.

            Funny? Jesus eff-ing Christ. When did pointing out the hypocrisy of Slashdot groupthink become funny? I don't get which part of my original statement is funny.

        • by Mancat (831487) on Thursday April 21, 2005 @11:40PM (#12310121) Homepage
          Mechanic: Sir, your car is ready.

          Customer: Thanks for fixing it so quickly!

          Mechanic: We didn't fix it. We just brought it up to standards. Oh, by the way, your air conditioning no longer works, and your rear brakes are now disabled.

          Customer: Uhh.. What?

          Mechanic: That's right. The standard refrigerant is now R-134a, so we removed your old R-12 air conditioning system. Also, disc brakes are now standard in the automotive world, so we removed your drum brakes. Don't drive too fast.

          Customer: What the fuck?

          Mechanic: Oh, I almost forgot. Your car doesn't have airbags. We're going to have to remove your car's body and replace it with a giant tube frame lined with air mattresses.
      • Re:Why? (Score:5, Insightful)

        by Sivar (316343) <charlesnburns[&]gmail,com> on Friday April 22, 2005 @01:04AM (#12310527)
        I know you were just poking fun but--

        Standards are the reason that computers are tolerable to use for any purpose.
        If a programmer can't be bothered to follow an international standard of his own language, there is no guarantee that the code is future-proof. One can hardly blame the compiler vendor, as we can't expect a compiler to mindlessly maintain backwards compatibility with every weird use of a bug and every bizarre code construct that has ever been supported in the past.

        The ability to compile code written for GCC in another compiler is a *good* thing. If it requires informing the programmer that their code has always been broken, then so be it. A little inconvenience is a small price to pay for standards compliance, or should we expect that the GCC authors "embrace and extend" C and other languages until so much code relies on weird GCC nuggets that programmers (and users) are "locked in" to using just that compiler? (But Douglas Adams forbid if Microsoft does the same thing!)

        Maybe I am missing something. If so, please enlighten me (This is not a sarcastic remark--I haven't done much research on what 4.0 has broken so I may be way out of line).

        Sheesh, for as hard as the GCC authors work, and for as much as GCC has improved in the last 10 years, the contributors sure get a lot of flak. Anyone who doesn't contribute code themselves should be grateful (or at least appreciative) of their efforts, even when they do make mistakes.
    • Ah, things no longer compiling :) True, it was very annoying and made you go through an extra code review while porting your code forward.

      In the long term, I think it was a very good thing: coding C (and C++, but I didn't have that much experience with that) got much stricter and, in my experience, removes a lot of possible problems later on.

      If someone had a lot of problems porting from 2.95 to 3.2, his code needed to be reviewed anyway. It kind of removes the "boy" from "cowboys" in coders (experience is drawn f
  • by Da w00t (1789) on Thursday April 21, 2005 @10:11PM (#12309559) Homepage
    Not a C coder myself (sticking mainly to Perl)... I've just got to ask: what are SSA trees, and what benefit do they serve?
    • by rbarreira (836272) on Thursday April 21, 2005 @10:15PM (#12309584) Homepage
      An educated guess - are they a move in the direction of making code optimizations in gcc easier to code? I heard that a lot of optimization experts (you need to know a lot of graph theory for example) wouldn't work on gcc because of the difficulty of working with it for optimizations, so they would do their experiments in other compilers...
    • by Entrope (68843) on Thursday April 21, 2005 @10:17PM (#12309595) Homepage
      Static single assignment is a way the compiler can rewrite the code (usually for optimization purposes) so each "variable" being analyzed is only written once. This makes a lot of optimizations easier to do, since it eliminates aliasing due to the programmer assigning different values to the same variable. You'd probably learn these things if you would RTFA.
    • by hey hey hey (659173) on Thursday April 21, 2005 @10:18PM (#12309602)
      Static Single Assignment, optimization techniques. Try here [gnu.org] for more details.
    • by GillBates0 (664202) on Thursday April 21, 2005 @10:22PM (#12309626) Homepage Journal
      Wikipedia (as usual) has a nice article [wikipedia.org] about the Static Single Assignment (SSA) form.

      To put it simply, SSA is an intermediate representation where each variable in a block is defined only *once*. If a variable is defined multiple times, the target of any subsequent definitions of the same variable is replaced by a new variable name.

      SSA helps to simplify later optimizations passes of a compiler (for example: eliminating unused definitions, etc) as described in greater detail (with examples and flowcharts) in the article linked to.

      That's the SSA form in short. Now I need to ask somebody the difference between the standard SSA form and "SSA for trees".
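    The renaming described above can be shown in a few lines. A minimal sketch (function names invented for illustration): the "before" version reuses x, while the "after" version gives each definition a fresh name, exactly as an SSA pass would do internally.

    ```cpp
    #include <cassert>

    // Original code reuses 'x':
    //   x = 1; x = x + 2; y = x * 3;
    // In SSA form each definition gets a fresh name, so every use can be
    // traced back to exactly one definition:
    //   x1 = 1; x2 = x1 + 2; y1 = x2 * 3;

    int before_ssa() {
        int x = 1;
        x = x + 2;          // redefinition of x
        int y = x * 3;
        return y;
    }

    int after_ssa() {
        // Same computation with every "variable" assigned exactly once.
        const int x1 = 1;
        const int x2 = x1 + 2;
        const int y1 = x2 * 3;
        return y1;
    }

    int main() {
        assert(before_ssa() == after_ssa());  // both compute 9
        return 0;
    }
    ```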

    • by Dink Paisy (823325) on Thursday April 21, 2005 @11:00PM (#12309885) Homepage
      As other people have said, SSA is static single assignment. It means that each variable in the program is assigned in only one place. SSA is for optimization, and is usually done in intermediate forms generated by the compiler, rather than in programs written by a human in common computer languages such as C, C++, Perl or assembly languages.

      Trying to recall my knowledge of optimizing compilers:

      SSA makes optimization easier, since it is obvious where a variable was assigned (since it was assigned in only one location) and what value it contains (since there is only one value being assigned to it). The complexity moves to register allocation, where there can be many more variables to allocate because of SSA. Register allocation is Hard, but doing an ok job is quite possible. Most optimizations are impossible unless you can prove various properties about the variables involved, which is often much easier with variables in SSA form.
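      Where control flow merges, the single-assignment property needs a phi function to reconcile the renamed definitions arriving from each branch. A minimal sketch (names invented for illustration) of what the compiler does at such a join point:

      ```cpp
      #include <cassert>

      // At a control-flow join, SSA inserts a phi function to pick between
      // the renamed definitions from each predecessor:
      //   if (c) x1 = 1; else x2 = 2;
      //   x3 = phi(x1, x2);   // x1 if we came from the then-branch, else x2
      // The runnable equivalent below expresses the same merge in plain C++.

      int merge(bool c) {
          int x1 = 1;
          int x2 = 2;
          int x3 = c ? x1 : x2;  // plays the role of phi(x1, x2)
          return x3;
      }

      int main() {
          assert(merge(true) == 1);
          assert(merge(false) == 2);
          return 0;
      }
      ```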

    • by mindriot (96208) on Thursday April 21, 2005 @11:02PM (#12309895)

      Hmm. Funny. Seems like perfect timing, in retrospect. I just gave a presentation on SSA (and efficiently transforming code into SSA) today.

      Get the slides here [udel.edu].

      HTH

    • by IvyMike (178408) on Thursday April 21, 2005 @11:20PM (#12309986)
      There have been several good answers to your question, but if you're really new to compilers, you might want a little more context. Want a quick lesson in how compilers work? If you're willing to accept some gross oversimplifications, here's how most modern compilers work:

      1) Tokenize the input. For example, if you were compiling perl, you might choose to turn "print $foo" into three tokens; KEYWORD_PRINT, TYPE_SCALAR, and IDENTIFIER('foo'). The output is typically a stream of tokens. This step might be done by lex or flex.

      2) Parse the sequence of tokens using a set of rules called a grammar. For example, "TYPE_SCALAR" followed by "IDENTIFIER()" might match a rule to generate a variable called "$foo", and "KEYWORD_PRINT" followed by a variable means calling the function print on the contents of the variable. The output is typically an abstract syntax tree (AST): a high-level data structure representing the program. This step might be done by yacc or bison.

      3) Match the AST against a series of rules to output the final code. This might actually be two steps; you might generate something into a low-level register transfer language (RTL) that looks very much like assembly, and then turn THAT into actual machine instructions.

      At each stage, you might choose to optimize the output. You might also insert optimizations passes between steps. (For example, you might insert a pass between 2 and 3 to optimize the AST into a simpler AST.)

      Before SSA, GCC sort of skipped making any high-level AST; it used to go from parsing almost immediately into RTL. You can still optimize RTL, but since it's pretty low-level, it misses out on higher-level context and makes some optimizations really difficult.

      SSA is simply a form used for the high-level AST. Why SSA? It is a very nice form to optimize. Read the wikipedia article for more details on why SSA is particularly useful for some optimizations.

      Page 181 of this PDF file [linux.org.uk] from the 2003 GCC Summit explains the flow of the GCC compiler.
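      The three steps above can be sketched end-to-end in a toy compiler for arithmetic expressions. This is a hedged illustration of the pipeline, not how GCC itself is structured, and all names are invented; the "code generation" step is replaced by direct evaluation of the AST.

      ```cpp
      #include <cassert>
      #include <cctype>
      #include <cstddef>
      #include <memory>
      #include <string>
      #include <vector>

      // Step 1: tokenize. Tokens are numbers ('n'), '+' and '*'.
      struct Token { char kind; int value; };

      std::vector<Token> tokenize(const std::string& src) {
          std::vector<Token> out;
          for (std::size_t i = 0; i < src.size(); ) {
              if (std::isspace(static_cast<unsigned char>(src[i]))) { ++i; continue; }
              if (std::isdigit(static_cast<unsigned char>(src[i]))) {
                  int v = 0;
                  while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i])))
                      v = v * 10 + (src[i++] - '0');
                  out.push_back({'n', v});
              } else {
                  out.push_back({src[i++], 0});
              }
          }
          return out;
      }

      // Step 2: parse into an AST. A node is a literal or a binary operation.
      struct Node {
          char op;                        // 'n' for a literal, else '+' or '*'
          int value;
          std::unique_ptr<Node> lhs, rhs;
      };

      // Recursive descent: expr := term ('+' term)*, term := num ('*' num)*
      std::unique_ptr<Node> parse_term(const std::vector<Token>& ts, std::size_t& pos);

      std::unique_ptr<Node> parse_expr(const std::vector<Token>& ts, std::size_t& pos) {
          auto node = parse_term(ts, pos);
          while (pos < ts.size() && ts[pos].kind == '+') {
              ++pos;
              auto n = std::make_unique<Node>();
              n->op = '+'; n->lhs = std::move(node); n->rhs = parse_term(ts, pos);
              node = std::move(n);
          }
          return node;
      }

      std::unique_ptr<Node> parse_term(const std::vector<Token>& ts, std::size_t& pos) {
          auto leaf = std::make_unique<Node>();
          leaf->op = 'n'; leaf->value = ts[pos++].value;
          auto node = std::move(leaf);
          while (pos < ts.size() && ts[pos].kind == '*') {
              ++pos;
              auto m = std::make_unique<Node>();
              auto r = std::make_unique<Node>();
              r->op = 'n'; r->value = ts[pos++].value;
              m->op = '*'; m->lhs = std::move(node); m->rhs = std::move(r);
              node = std::move(m);
          }
          return node;
      }

      // Step 3 stand-in: walk the AST and evaluate it.
      int eval(const Node& n) {
          if (n.op == 'n') return n.value;
          return n.op == '+' ? eval(*n.lhs) + eval(*n.rhs)
                             : eval(*n.lhs) * eval(*n.rhs);
      }

      int main() {
          auto tokens = tokenize("2 + 3 * 4");
          std::size_t pos = 0;
          auto ast = parse_expr(tokens, pos);
          assert(eval(*ast) == 14);  // '*' binds tighter than '+'
          return 0;
      }
      ```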
  • Sweetness (Score:5, Informative)

    by kronak (723456) on Thursday April 21, 2005 @10:12PM (#12309561)
    Glad to see they are targeting the AMD64 architecture for improvements.
  • debian (Score:5, Funny)

    by Anonymous Coward on Thursday April 21, 2005 @10:12PM (#12309562)
    i wonder when debian sid will integrate GCC 4.0...
    • Re:debian (Score:5, Insightful)

      by dhakbar (783117) on Thursday April 21, 2005 @10:48PM (#12309803)
      I am curious why this AC's comment was modded troll. Is Debian's release cycle truly so slow that what appears to be an honest curiosity is modded as a troll?
      • Re:debian (Score:5, Funny)

        by alehmann (50545) on Thursday April 21, 2005 @10:57PM (#12309868) Homepage
        Pretty much, yeah.
      • Re:debian (Score:3, Interesting)

        by Anonymous Coward
        Is Debian's release cycle truly so slow that what appears to be an honest curiosity is modded as a troll?

        Kidding aside, no. Debian is legendary for being, ahem, slow about releases; they release when it's done, not on some date. Thus /. gets lots of jokes about Debian being slow. "I heard that Duke Nukem Forever will be open source and part of Debian's next release!!!11!" etc.

        If GCC 4.0 made changes that would affect the ability of the linker to link things, then GCC 4.0 would actually be slow to go i
      • Re:debian (Score:3, Informative)

        by thomasweber (757387)
        > Is Debian's release cycle truly so slow that what appears to be an honest curiosity is modded as a troll?
        As Debian sid is the unstable branch of Debian, the release cycle is pretty unimportant for gcc's inclusion. Looking at the experimental branch, you'll find gcc 4.0 already included: http://packages.debian.org/experimental/devel/
        (which is probably an earlier release candidate).

        Sid itself has a snapshot of gcc as of 20050319.
    • Re:debian (Score:5, Informative)

      by Mongoose (8480) on Thursday April 21, 2005 @11:35PM (#12310085) Homepage
      Debian has had pre-releases for 4.0 for a while now. I guess you'd know that if you were a developer and actually used Debian. Hell I have mono 1.1.6 on Debian -- not many distros even have that yet. =)
  • whoa (Score:4, Interesting)

    by william_w_bush (817571) on Thursday April 21, 2005 @10:14PM (#12309571)
    reading tfa and changelog intrigued me. optimisations aside im curious if this will be better able to thread on the new multi-core systems coming out, as tls has been spotty till 3.3 and glibc 2. maybe native xd support coming soon too?

    also, the c++ side makes me feel optimistic about ongoing support, which had been a big problem till 3.4.

    yes im x86/64 centric.
  • by k4_pacific (736911) <k4_pacific@y a h oo.com> on Thursday April 21, 2005 @10:16PM (#12309588) Homepage Journal
    I've already downloaded it and used it to recompile Firefox and I must say that gf@fd@k3nl&
    NO CARRIER
  • Trees (Score:4, Funny)

    by goodbadorugly (837673) on Thursday April 21, 2005 @10:20PM (#12309613)
    The new version finally features SSA for trees,

    So I guess its pretty safe to say that this release is for the birds

    *ducks*

  • Autovectorization (Score:5, Interesting)

    by QuantumG (50515) <qg@biodome.org> on Thursday April 21, 2005 @10:29PM (#12309662) Homepage Journal
    Correct me if I'm wrong here, but most Linux distributions are still i386 right? It's only the people who use Gentoo who actually compile everything with i686 options right? So, if autovectorization and all the other improvements in GCC 4.0 make binaries massively faster on modern platforms, how long will it be before the major binary-based distributions (like Ubuntu) start making i686 the default and i386 an available alternative (like AMD64 is now)?
    • Re:Autovectorization (Score:4, Informative)

      by Anonymous Coward on Thursday April 21, 2005 @11:02PM (#12309897)
      The claim that most Linux distributions are built for i386 is mostly incorrect, and a half-truth where it isn't.

      Fedora Core, for example, relies on the improved instructions for atomic operations found in 486 and newer processors, necessary for certain threading libraries. The rpm program itself requires a 586, if I remember right.

      Fedora Core also compiles all binaries optimized for the P4. It was decided to use P4 optimizations since these generally work just as well on Athlon processors, while Athlon optimizations are rather slow on a P4.

      Furthermore, for CPU intensive applications such as many audio and video applications, CPU optimizations such as MMX and SSE are automatically activated at runtime if the CPU supports it.

      The 'i386' in the name should really be called 'x86'. Of course, then there's also 'i686' packages, which basically mean 'x86 processors that support the CMOV instruction'. That is also wrong, as there are i686 processors which do not support CMOV, such as certain VIA and Cyrix variants.

      CMOV is basically the only useful addition to the x86 instruction set since the i486 for general-purpose programs. And programs not fitting into that category already have hand-written asm for time-critical sections, which can take advantage of MMX, SSE, 3DNow, Altivec or VIS.
    • by vlad_petric (94134) on Thursday April 21, 2005 @11:04PM (#12309910) Homepage
      The main problem is the C language. While vectorizing a loop is generally not that difficult, figuring out if it's the right thing to do is extremely tough. To do that, you have to "prove" that iterations of a loop are independent of each other. This, in turn, requires good pointer alias analysis, and gcc isn't doing it well enough yet. BTW ... a language like Fortran, that doesn't have pointers at all, is much easier to vectorize; that's one of the reasons a lot of scientific codes are still in Fortran.

      Without automatic vectorization, the performance benefit of compiling for 686 as opposed to 386 is simply minimal. A lot of people have done benchmarks on this, and found out that tuning for 686 with gcc only provides 1-2% improvements in the best case. Keep in mind that current X86 processors execute instructions out-of-order, so instruction scheduling for a specific pipeline is not going to do much (it's very important for in-order machines, though)
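      The pointer-alias problem described above is easy to demonstrate. In the sketch below (function and variable names invented for illustration), the compiler cannot vectorize the loop unless it can prove the two pointers do not overlap, because the aliased call depends on executing the iterations strictly in order:

      ```cpp
      #include <cassert>

      // Vectorizing "dst[i] = src[i] + 1" is only legal if the compiler can
      // prove dst and src do not overlap. With plain C pointers it usually
      // cannot, so it must keep the scalar loop or emit a runtime overlap
      // check. When dst == src + 1, each store feeds the next load, so a
      // load-everything-then-store-everything vector version would compute
      // a different answer.
      void shift_add(int* dst, const int* src, int n) {
          for (int i = 0; i < n; ++i)
              dst[i] = src[i] + 1;
      }

      int main() {
          int a[4] = {1, 10, 10, 10};
          shift_add(a + 1, a, 3);    // dst overlaps src: values cascade
          assert(a[0] == 1 && a[1] == 2 && a[2] == 3 && a[3] == 4);

          int b[3] = {10, 10, 10};
          const int c[3] = {1, 2, 3};
          shift_add(b, c, 3);        // disjoint arrays: safe to vectorize
          assert(b[0] == 2 && b[1] == 3 && b[2] == 4);
          return 0;
      }
      ```

      This is why the parent notes that good alias analysis is a prerequisite for autovectorization, and why Fortran, which forbids this kind of overlap, is easier to vectorize.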

      • > A lot of people have done benchmarks on this,
        > and found out that tuning for 686 with gcc only
        > provides 1-2% improvements in the best case.

        Uh, what? It is true that i386 code runs in the same ballpark on a Pentium 4, but this most definitely was not true for the Pentium III and is not true for the Pentium M. Those processors have the 4-1-1 rule, which is to say that you'll stall concurrent execution of instructions unless you arrange a complex instruction (4) with two simple ones (1). This is be
    • Re:Autovectorization (Score:4, Interesting)

      by mattdm (1931) on Friday April 22, 2005 @12:24AM (#12310346) Homepage
      Correct me if I'm wrong here, but most Linux distributions are still i386 right?

      Right in some ways, but importantly wrong in others. Red Hat and the Fedora Project, for example, are compiled using the i386 instruction set but optimized for i686. This means that the cmov instruction isn't available -- but apparently, it's not much of a win (and even a loss in some cases) on modern processors. And code which uses SSE or 3DNow or whathaveyou is usually carefully hand-coded and checked for at runtime.

      There's not really much advantage of switching away from this scheme, so I don't see it as worth the bother. Instead, x86_64 will eventually kill it all off and we'll move on to that.
      • Re:Autovectorization (Score:3, Interesting)

        by QuantumG (50515)
        You're like the 4th person to say that. The point of autovectorization is that all programs can benefit from SIMD instructions, not just the ones where programmers thought it might be a good idea.
  • by jtshaw (398319) * on Thursday April 21, 2005 @10:29PM (#12309663) Homepage
    When they announced the release of Apple 10.4 "Tiger" I noticed this page: At that point I kinda figured gcc 4.0.0 had to be out by April 29th since Apple claimed they were using it for OS X.
    • by k98sven (324383) on Thursday April 21, 2005 @10:47PM (#12309794) Journal
      When they announced the release of Apple 10.4 "Tiger" I noticed this page: At that point I kinda figured gcc 4.0.0 had to be out by April 29th since Apple claimed they were using it for OS X.

      Well, you're wrong because GCC doesn't follow Apple's schedule, or anyone else's for that matter. Even a cursory glance at the GCC mailing list will tell you that.

      The reason Apple can promise this is that they're not actually shipping GCC 4. They're shipping their own fork of the GCC 4 code. It's probably about 99% the same code, but don't make the mistake of thinking they're shipping exactly what the FSF is distributing.
  • *chuckle* (Score:5, Funny)

    by fr2asbury (462941) on Thursday April 21, 2005 @10:29PM (#12309664)
    I can see my Gentoo box sweating now all nervous for the night I get a little drunk and decide to see how this gcc 4 thing works out. heh heh heh.
  • by H0p313ss (811249) on Thursday April 21, 2005 @10:31PM (#12309680)

    Just about every time I have to rebuild a kernel or build a kernel mod, I get my butt kicked by gcc versions. So my questions are:

    • Are there compatibility issues with existing binaries?
    • What does this do to existing code?
    • How will this affect existing distros?
    • Is any distro planning on supporting 4.X soon? (And is that a good thing or a bad thing?)

    Anyone know?

  • Readme.SCO (Score:5, Interesting)

    by karvind (833059) <karvindNO@SPAMgmail.com> on Thursday April 21, 2005 @10:48PM (#12309802) Journal
    The gcc tar ball has a README.SCO file (reproduced below)

    The GCC team has been urged to drop support for SCO Unix from GCC, as a protest against SCO's irresponsible aggression against free software and GNU/Linux. We have decided to take no action at this time, as we no longer believe that SCO is a serious threat.

    For more on the FSF's position regarding SCO's attacks on free software, please read:

    http://www.gnu.org/philosophy/sco/sco.html

    • Re:Readme.SCO (Score:4, Interesting)

      by NutscrapeSucks (446616) on Friday April 22, 2005 @02:15AM (#12310799)
      Maybe someone can find it, but the SCO GCC guy posted on Slashdot once. He indicated that he was pretty much single-handedly responsible for the SCO UNIX port, so GCC pulling their endorsement wouldn't make much if any difference to SCO customers. I believe his attitude towards his employer was "I don't like it, but who else will pay me to hack on GCC?"
  • by hey (83763) on Thursday April 21, 2005 @10:50PM (#12309815) Journal
    % gcc -v

    Using built-in specs.
    Target: i386-redhat-linux
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-languages=c,c++,objc,java,f95,ada --enable-java-awt=gtk --host=i386-redhat-linux
    Thread model: posix
    gcc version 4.0.0 20050405 (Red Hat 4.0.0-0.40)
  • by b17bmbr (608864) on Thursday April 21, 2005 @10:52PM (#12309835)
    I am interested in the Java compatibility. I figure it probably won't do Swing, but will it support, or will it do, say, GTK/Java native? That'd be sweet. I know Qt/KDE has had a Java bridge for a while, but I really haven't played too much with it. Flame Java all you want, it's not a geek language: no obfuscated Java, no Java monks. BFD. Sure, that'd nix the whole write-once-run-anywhere thing. But hell, what a great opportunity to build and test apps under a JRE and then compile them to native.
  • by phoenix.bam! (642635) on Thursday April 21, 2005 @10:54PM (#12309846)
    and no gentoo users commenting on how they've already recompiled their entire system with the new optimizations. Or maybe they're just waiting for some free resources to open a browser.
  • Objective-C++...? (Score:4, Interesting)

    by Dimwit (36756) * on Thursday April 21, 2005 @10:55PM (#12309852)
    They've been talking about having Objective-C++ in the GCC main branch for years now. There was even talk that 4.0 wouldn't ship without it. Now it's shipped without it and it's still "coming Real Soon Now". Any word on if it's coming any time soon (really)?
  • by mrcrowbar (821370) on Thursday April 21, 2005 @11:02PM (#12309896)
    Screenshots anyone? ;)
  • by ArbitraryConstant (763964) on Thursday April 21, 2005 @11:02PM (#12309899) Homepage
    The OpenBSD crowd had a lot of concerns about bugs in 3.x and performance regressions (in compiling, not in the resulting binaries). I believe Linus shared some of these concerns (don't have a link handy).

    OpenBSD i386 is finally moving towards gcc 3.x, as the bugs have been cleared up even if the performance regressions haven't. I'm wondering if 4.x will be even worse, and if it will be justified by producing better binaries. From TFA, it looks like they've added a few features that may improve optimizations. If it's noticeably better they may move to the new version faster.

    I will have to play with it to see what it can do.
  • Patent issues (Score:5, Informative)

    by plgs (447731) on Thursday April 21, 2005 @11:25PM (#12310006) Homepage
    "Unfortunately we cannot implement Steensgaard [pointer] analysis due to patent issues."

    They mean this patent [uspto.gov] owned by this company [microsoft.com]. What a surprise.

  • by vandan (151516) on Thursday April 21, 2005 @11:28PM (#12310028) Homepage
    For those who want to know what works and what doesn't: http://forums.gentoo.org/viewtopic-t-176085.html [gentoo.org]
  • by Old Wolf (56093) on Thursday April 21, 2005 @11:51PM (#12310183)
    One of the changes in 4.0.0 is autovectorization [gnu.org] optimizing.
    One _ancient_ compiler (10+ years) I have to use, already has this feature -- and on a large scale: it'll do it over several screensful of code. What took GCC so long?

    Unfortunately, this compiler I mention also has a bug: once it's factored out 'i' in a piece of code like that below, it then complains that 'i' is an unused variable. So you have to do something with 'i' to suppress that warning, which kinda defeats the purpose of the autovectorization.

    Sample code:

    int a[256], b[256], c[256];

    foo () {
        int i;

        for (i = 0; i < 256; i++) {
            a[i] = b[i] + c[i];
        }
    }
  • TR1 included! (Score:5, Informative)

    by Anthony Liguori (820979) on Friday April 22, 2005 @01:40AM (#12310684) Homepage
    I'm surprised no one's mentioned the inclusion of the C++ TR1. There's a ton of very cool new library features. Here are my two favorites:
    #include <tr1/functional>

    int foo(int x, int y) { return x * y; }

    using namespace std::tr1::placeholders;

    int main() {
        std::tr1::function<int (int, int)> f;
        std::tr1::function<int (int)> g;

        // f can be stored in a container
        f = foo;

        f(2, 3);

        g = std::tr1::bind(f, _1, 3);

        // this is equivalent to f(2, 3)
        g(2);
    }
    Not to mention the inclusion of shared_ptr, which provides a reference-counted pointer wrapper. This will eliminate 99% of the need to do manual memory management in C++. It's all very exciting; kudos to the G++ team on this!
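    The reference counting that shared_ptr does can be seen directly. A minimal sketch (TR1 spells the type std::tr1::shared_ptr; the code below uses the later std:: names, which behave the same way for this purpose, and the Widget type is invented for illustration):

    ```cpp
    #include <cassert>
    #include <memory>

    // shared_ptr keeps a reference count alongside the object; the object is
    // deleted when the last shared_ptr owning it is destroyed or reset, so no
    // explicit delete is needed.
    struct Widget {
        static int live;            // tracks live instances for the demo
        Widget()  { ++live; }
        ~Widget() { --live; }
    };
    int Widget::live = 0;

    int main() {
        std::shared_ptr<Widget> outer;
        {
            auto inner = std::make_shared<Widget>();
            assert(Widget::live == 1);
            outer = inner;                  // reference count goes to 2
            assert(inner.use_count() == 2);
        }                                   // inner destroyed, count back to 1
        assert(Widget::live == 1);          // object still alive via outer
        outer.reset();                      // last owner gone: Widget deleted
        assert(Widget::live == 0);
        return 0;
    }
    ```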
  • MINGW? (Score:3, Insightful)

    by Spy der Mann (805235) <spydermann DOT slashdot AT gmail DOT com> on Friday April 22, 2005 @02:26AM (#12310839) Homepage Journal
    From what I've read, GCC 4 is blazingly fast, _AND_ provides dead code elimination (VERY important for windows users).

    So, any ideas of how long till the MINGW port is done?
