Topics: GNU is Not Unix, Programming, Upgrades, Linux

GLIBC 2.16 Brings X32 Support, ISO C11 Compliance, Better Performance

An anonymous reader writes "The GNU C Library version 2.16 was released with many new features over the weekend. The announcement cites support for the Linux x32 ABI, ISO C11 compliance, performance improvements for math functions and some architectures, and more than 230 bug fixes."


Comments:
  • by Anonymous Coward
    I've been following Glibc development for years, and I understand the code well and the concepts fine, but can anyone explain to me what the benefits are of the X32 ABI, perceived or real? I just don't get it...
    • Re:X32 (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Sunday July 01, 2012 @06:45AM (#40509705) Journal

      64 bit architectures give you 64-bit registers and a 64-bit address space (since pointers are, traditionally, integers that fit in registers[1]). On x86, there are a number of other advantages to using the 64-bit long mode: guaranteed SSE so you don't need slow x87 code, more registers, PC-relative addressing (useful for position-independent code) and so on. The cost of using these is that now every pointer is bigger. This has knock-on effects in terms of data cache usage.

      The X32 ABI allows you to have all of the benefits of the 64-bit mode except for the larger address space. If you're using under 4GB of memory, then it can, in theory, give an improvement in memory and cache usage.

      There are two downsides. The first is that, in my testing of C, C++ and Objective-C code, I've found that it's very rare on a 64-bit platform for pointers to account for even 10% of the total memory usage, and usually it's a lot less. The second is that the X64 and X32 ABIs are incompatible, so you may need two copies of the same library in memory if you have code using both.

      I was quite enthusiastic about the idea of something like X32 five or so years ago when it was very rare for programs to want more than 4GB of address space, but now it's far less clear that there's a real advantage.

      [1] Not with the architecture I'm currently working with, and I'm spending a lot of time fixing compiler assumptions that this is always the case.
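
      To put rough numbers on the pointer-overhead point, here is a minimal sketch (the struct is hypothetical, chosen only to show how much of a typical linked node is actually pointers):

      ```c
      #include <stdio.h>

      /* A typical linked-structure node: two pointers plus payload. */
      struct node {
          struct node *next;
          struct node *prev;
          int          key;
          char         payload[48];
      };

      int main(void)
      {
          /* On an LP64 target (x86-64) the two pointers cost 16 bytes;
           * under an ILP32-on-64 model such as x32 they cost 8. Whether
           * that matters depends on how much of the heap is pointers --
           * the 10%-or-less figure mentioned above. */
          printf("sizeof(struct node)    = %zu\n", sizeof(struct node));
          printf("pointer bytes per node = %zu\n", 2 * sizeof(void *));
          return 0;
      }
      ```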

      • Re: (Score:2, Informative)

        by Anonymous Coward

        I think there are more advantages. Think embedded device, which is a market X86 is trying to move into.

        • Re:X32 (Score:5, Informative)

          by TheRaven64 ( 641858 ) on Sunday July 01, 2012 @08:50AM (#40510007) Journal
          On an embedded system, you'll be saving 5-10% memory usage by only supporting X32 and not X64. It may be worth it...
          • On an embedded system, you'll be saving 5-10% memory usage by only supporting X32 and not X64. It may be worth it...

            Yeah, this was my thought as well. x86 just doesn't seem to happen much in the embedded space, but I guess somewhere there is an embedded x86 vendor running millions of units (perhaps the contributor).

            I doubt it's the +10% memory size that's keeping vendors away from x86, though.

          • But with memory densities going up, and fast approaching the point where 4GB or even 8GB will be the bare minimum standalone memory available, even embedded will be forced to support x64. Unless they make CPUs with embedded RAM, in which case the question of having GBs doesn't even arise. That, since CPU designers can assign as little of the silicon to memory as they wish, is one place where x32 would be valid.
            • by Mr Z ( 6791 )

              Outside of the webbrowser, what hand-held applications do you see needing a 4GB virtual address space in a single process?

              As far as "embedded RAM", just how embedded do you mean? The latest generation OMAP chips [ti.com] allow for "Package-on-Package" LPDDR, for example. There's plenty of phones out there today sporting 1GB RAM, and I'm sure it's a matter of time before that's in the same package or maybe on the same die as the processor if it isn't already on some devices.

              (I admit ignorance of the absolute bleedin

              • I wasn't talking about anything needing 4GB. I was talking about discrete 4GB memory being the minimum available, in which case the system would need x64 if it wanted to access all the available memory. Granted, it may not be there right now, but as we undergo process shrinks, don't expect 1GB, 2GB or even 4GB to be there forever. At which point, even embedded systems may be forced to 4GB, unless the CPUs are themselves going to incorporate internal memory on die.

                MCMs or PoPs will not be an option, s

                • by Mr Z ( 6791 )

                  x32 user-space programs require an x64 kernel. So, the OS would be able to access the full address space. The restriction is only that individual processes are limited to 4GB per process. The x32 mode is really more like a "small model 64-bit mode" than a "fancy 32-bit mode."

                  So, even if it was "impossible" to build a system around a given chip / chipset with less than 8GB of RAM, an x32-based system would still be able to use all 8 GB from kernel space (since the kernel is x64), despite the fact no singl

          • by Brane2 ( 608748 )

            So, it was meant for the case where you'd go with X86_64, but with anorexic memory?

            Who is so moronic as to do that?

              • Embedded is about *cheap* systems. A cheap system always lags current systems in capability by several years and even decades. Such systems will *never* have X amount of RAM if X/2 amount of RAM can be supplied cheaply instead. The only way you'll get X amount of RAM standard on an embedded system your boss will give you is if X is literally the lowest possible manufacture size at the time.
              • by Brane2 ( 608748 )

                So you will go with small, cheap and economic ARM instead of x86.

                Why go for x86 and then worry about a couple bytes of _code_ ?

                One can be a lover of sports cars and buy a Ferrari. Or one could be a prudent driver and go for a Diesel car.

                But having a Diesel Ferrari doesn't make you a prudent performance driver, it makes you a moron.

                • No, it makes you an industrial scale player. Some of the CPUs used in modern cars can be 20 years old. Why? Because it's cheaper. Once the hardware costs have come down as much as possible, then it's up to the software to get the performance out of what's there. Software engineering costs only once for development, whereas hardware costs for each product that is manufactured. That's industrial scale reasoning.
                  • by Brane2 ( 608748 )

                    Show me one 20-year-old x86 CPU capable of executing 64-bit code.

                    And then please explain why would someone with such configuration even go with glibc when there are other, much slimmer choices.

                    • Because not all C library implementations are equivalent. It's quite common for a codebase to either fail to compile or not quite work just right when the "wrong" C library is used. Yes, that shouldn't happen and should be fixed, and no it doesn't get fixed.

                      There's nothing wrong with glibc trying to be relevant on older hardware.

        • by Bengie ( 1121981 )
          Not to say it won't happen, but I don't think of x64 when I think "embedded". Would be great for a Linux firewall, but most embedded versions are ARM. I could see a high performance edge firewall/router with only 2-4GB of memory and a quad-core Xeon with 10Gb NICs. Smaller pointers are great for something like a firewall that relies heavily on data-structures.
          • by tibit ( 1762298 )

            Would a firewall/router, even one with a couple of 10Gb NICs, really need that much memory? On a firewall/router, if it's not in the L2 cache (or its performance-wise equivalent), it better not be needed very often. I have an 8-port 100 Mbit switch with some firewall and routing functionality that has a whopping 256 kbytes of memory shared between code and data, and it performs very smoothly.

      • Note that, on many processors, the legacy x86 and the x64 implementations are (almost completely) separate, using different processor resources. Between the larger/better resources and the higher number of registers, the x64 pipeline gets better performance on the same processor. The lower memory usage also helps to improve performance, but its impact is minor.

      • by jelle ( 14827 )

        If your platform has stdint, then you can use ptrdiff_t instead of int/long in your code... It's always the right size integer for pointers.

        • You mean intptr_t. ptrdiff_t is the largest size of an object. On segmented architectures, it can be the maximum size of a segment. ptrdiff_t is large enough to hold the result of any defined pointer comparison, but pointer comparisons are only defined (in C) on pointers within the same object.
          • by osu-neko ( 2604 )
            In other words, the difference between two pointers should never require more bits than the pointers themselves. However, depending on the memory model, it may require less (pointer comparisons being only valid between objects in the same segment), making ptrdiff_t smaller than intptr_t.
            • by osu-neko ( 2604 )
              Correction: It doesn't even necessarily depend on the memory model. ptrdiff_t need only be large enough to hold any valid pointer offset, which on some architectures could be smaller than all of memory, even without a segmented memory model.
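
          A minimal sketch of the distinction drawn in this subthread (standard <stddef.h>/<stdint.h> types; the array is only an illustration):

          ```c
          #include <stddef.h>   /* ptrdiff_t */
          #include <stdint.h>   /* intptr_t */
          #include <stdio.h>

          int main(void)
          {
              int arr[100];
              int *a = &arr[10];
              int *b = &arr[90];

              /* ptrdiff_t: the result of subtracting two pointers into the
               * same object.  It only has to span the largest object size. */
              ptrdiff_t distance = b - a;

              /* intptr_t: an integer wide enough to round-trip an object
               * pointer.  This, not ptrdiff_t, is the type for storing a
               * pointer in an integer. */
              intptr_t as_int = (intptr_t)a;
              int *back = (int *)as_int;

              printf("distance = %td, round-trip ok = %d\n", distance, back == a);
              return 0;
          }
          ```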
      • What architecture are you currently working with that is giving you pointer problems?
        • CHERI. It's an experimental MIPS-derived chip that has 32 256-bit capability registers, which are basically fat pointers containing a base, length, and set of permissions.
          • That is fascinating. I'm having trouble finding a link to information about it. I would guess the base is the address, length is the length of the memory you own (useful for writing arrays, possibly for doing managed code in hardware?), and the permission must be an owner or group ID?

            I'm going to have to think about this.
            • No, permissions are things like execute, read, write. There are also some flags. For example, you can seal a capability and then you can't use it unless it is unsealed by a special form of jump instruction that can be used to implement protection boundaries. The architecture is being developed at Cambridge as part of a DARPA grant to see what you'd do if you were allowed to change anything about computing to improve security: it's a softcore running on an FPGA, running FreeBSD and (almost - I need the la
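
              A purely conceptual sketch of the "fat pointer" idea described above; the field names and widths are illustrative guesses, not the actual CHERI encoding:

              ```c
              #include <stdint.h>

              /* Illustrative only: a capability as a fat pointer that carries
               * bounds and permissions along with the address.  The real CHERI
               * capability registers are 256 bits wide and hardware-enforced;
               * this struct just shows why code that assumes a pointer is a
               * plain machine-word integer breaks on such an architecture. */
              struct capability {
                  uint64_t base;        /* start of the region this pointer may touch */
                  uint64_t length;      /* size of that region */
                  uint64_t cursor;      /* current address within it (hypothetical) */
                  uint32_t permissions; /* e.g. read / write / execute bits */
                  uint32_t flags;       /* e.g. a "sealed" bit */
              };
              ```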
          • What is the size of integer ALU ops?
      • Re:X32 (Score:4, Insightful)

        by unixisc ( 2429386 ) on Sunday July 01, 2012 @01:02PM (#40511215)

        I was quite enthusiastic about the idea of something like X32 five or so years ago when it was very rare for programs to want more than 4GB of address space, but now it's far less clear that there's a real advantage.

        That same thought struck me as well. At that time, when we were seemingly 'nowhere near 4GB', an x32 ABI would have made sense. But now, by the time it's ready, its use-by date seems to be near, if not already past.

        • by snadrus ( 930168 )
          If a single boot program needs > 4GB then it's a problem. If a mobile app needs > 4GB and that's not obvious at design time, then it's a problem. Process-in-a-tab browsers should be safe. Anything higher-end is intended to be X64. For future designs, this is yet another reason to keep programs limited in size and to use multi-processing (with message passing) where such a choice can be made.
    • Re: (Score:3, Interesting)

      by SuneSpeg ( 662034 )
      If someone could explain to me why the x86 architecture is suddenly named x32, I would be happy too. The name has been working fine for 40 years, despite its difficult teen years when it only represented 16-bit.
      • Re:X32 (Score:5, Insightful)

        by sapphire wyvern ( 1153271 ) on Sunday July 01, 2012 @07:15AM (#40509779)

        Good question. I looked at the linked article and it took me a while to figure out.

        x64 CPUs aren't just x86 CPUs with larger memory addresses. They also have more registers, and are guaranteed to support certain additional instructions that aren't necessarily available in an x86 CPU (e.g. SSE). "x32" mode exploits the additional registers & instructions, without actually using 64-bit memory addressing. I think the idea is that it's supposed to allow for most of the benefits of the x64 instruction set without incurring the caching penalties of larger pointers. Honestly I'm not sure how useful that really is.

      • Re:X32 (Score:5, Informative)

        by kasperd ( 592156 ) on Sunday July 01, 2012 @07:16AM (#40509783) Homepage Journal

        If someone could explain to me why x86 architecture is suddenly named x32

        It's not. X32 is the name of a programming model, not an architecture. And the X32 programming model cannot be used on the x86 architecture; X32 is a programming model for the AMD64 architecture.

        The most important aspect of a programming model for C is that it defines the sizes of basic types like char, short, int, long, and pointers. Many other aspects follow more or less directly from the sizes of those types. X32 is unusual in that it is 64-bit code, but pointers are only 32 bits. When a pointer is in a CPU register, the lower 32 bits are the actual value and the upper 32 bits are always zero.
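
        A minimal sketch of what the programming model decides; the -m32/-mx32/-m64 GCC flags are real, and the sizes in the comment are the conventional x86 Linux values:

        ```c
        #include <stdio.h>

        int main(void)
        {
            /* Conventionally, on x86 Linux:
             *                  int   long   void*
             *  -m32  (i386)     4     4      4
             *  -mx32 (x32)      4     4      4    <- 64-bit code, 32-bit pointers
             *  -m64  (x86-64)   4     8      8
             */
            printf("int=%zu long=%zu ptr=%zu\n",
                   sizeof(int), sizeof(long), sizeof(void *));
            return 0;
        }
        ```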

      • Re:X32 (Score:4, Informative)

        by billcopc ( 196330 ) <vrillco@yahoo.com> on Sunday July 01, 2012 @07:36AM (#40509819) Homepage

        X32 is for running 32-bit apps in a 64-bit native execution environment.

        On Windows this practice is called "thunking" or "Windows-on-Windows", where it takes the form of a partially emulated legacy kernel which then backhands its requests to the real kernel.

        On Linux, since we usually compile things for the platform as needed, it's more about efficiency than compatibility. If you don't need 64-bit processing, sometimes it's faster to stick with 32-bit code. If you can live with the 4GB address space, your pointers are half the size, resulting in a smaller memory footprint, which then reduces cache pressure, potentially yielding significant speed improvements for some workloads. Due to the use of 32-bit pointers in this scenario, a 32-bit compatibility layer is required to interface with system libraries. You can't just drop in a 32-bit libc and expect it to work, because of its intimate relationship with the kernel.

        • Re:X32 (Score:4, Interesting)

          by kantos ( 1314519 ) on Sunday July 01, 2012 @09:27AM (#40510117) Journal

          Actually no, it's not... Linux has that already and it works just fine, as anyone who has gone through the pain of getting Flash Player to work before the x64 port can tell you. This is actually more similar (albeit with more restrictions) to setting the /LARGEADDRESSAWARE:NO [microsoft.com] option on the linker in Visual C++ - an option that, you'll notice, comes with a significant warning about interoperability. Microsoft solved this problem by making pointer handling the developer's job; this meant that they could continue to use x86-64 libraries without an issue, but all malloc operations would return addresses that are safe to sign extend.

          The benefit on windows is that you:

          1. Use less RAM on an x64 OS than a corresponding x86 application would, because you won't have x64 threads for each x86 thread you have going and won't have to load the thunking DLLs
          2. In theory, could interop with x86 code since your pointers are safe; however, this is not supported
          • by tibit ( 1762298 )

            Are you serious that anyone sign-extends pointers, where there is no concept of a sign?! Why would you do that?? It makes no sense at all. Zero-extending: sure. Sign extending: that's a major brain-damage symptom right there.
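
            For context on the sign-extension point: with /LARGEADDRESSAWARE:NO the process stays below 2 GB, so the top bit of a pointer is never set and sign- and zero-extension agree. A minimal sketch (the address value is purely illustrative) of how the two widenings differ once that bit is set:

            ```c
            #include <inttypes.h>
            #include <stdio.h>

            int main(void)
            {
                /* A 32-bit address with the top bit set, i.e. above 2 GB.
                 * (Purely illustrative value.) */
                uint32_t addr32 = 0x80001000u;

                /* Zero extension: the natural way to widen an unsigned address. */
                uint64_t zero_ext = (uint64_t)addr32;

                /* Sign extension: what happens if the value passes through a
                 * signed 32-bit type first -- the upper 32 bits become all ones. */
                uint64_t sign_ext = (uint64_t)(int64_t)(int32_t)addr32;

                printf("zero-extended: 0x%016" PRIx64 "\n", zero_ext);
                printf("sign-extended: 0x%016" PRIx64 "\n", sign_ext);
                return 0;
            }
            ```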

      • by Anonymous Coward

        It's not the same architecture. X32 is basically x86-64 (with its extra registers, NX bit, etc.) but with 32 bit pointers.

        The positives:
        * It cuts memory usage (the amount differs depending on how pointer-heavy an application is).
        * There are low-memory systems out there today with 64-bit processors running 32-bit operating systems - e.g. most netbooks, some low-end commercial virtual server services. Also, as time goes on, x86-64 processors will replace x86 processors where they're used in embedded systems. X32 lets these

        • Re:X32 (Score:4, Interesting)

          by ThePhilips ( 752041 ) on Sunday July 01, 2012 @08:44AM (#40509995) Homepage Journal

          The negatives:
          * On a mixed x86-64 and X32 system, you have to load two copies of shared libraries (this doesn't happen with exclusively X32 systems).

          It's not that the extra copies are the problem.

          The problem is that you effectively have to have two different /usr/lib directories, with potentially two different sets of libraries. The same consequently goes for /usr/include. Compiling and running software occasionally becomes a nightmarish experience.

          After enjoying this thoroughly on the commercial *nix variants (Solaris and HP-UX), I have hopes that Linux will not do it and will keep things clean: either a fully 64-bit system or a fully 32-bit system.

          • Actually, you need three copies, one for i386, one for X64 and one for X32. You should only need copies of libraries though, not of headers (which use #ifdef for multiple platforms).
            • Re:X32 (Score:4, Interesting)

              by ThePhilips ( 752041 ) on Sunday July 01, 2012 @10:34AM (#40510393) Homepage Journal

              Actually, you need three copies, one for i386, one for X64 and one for X32. You should only need copies of libraries though, not of headers (which use #ifdef for multiple platforms).

              I should have spelled it out better: many pieces of software already make presumptions about what /usr/lib is: some insist that those must be 32-bit libraries, some insist on the 64-bit ones. LD_LIBRARY_PATH works most of the time - until you hit an application that launches other applications to do its work.

              As for the headers... I have seen enough breakage to stop believing that #ifdefs are the solution: different software and different systems refer to the same thing under different names. Good luck finding a working combination. E.g. some functions are available in the 32-bit variant but not in the 64-bit variant, and vice versa. (That was a recent fsck-up compiling VIM on AIX 7: compile as 32-bit with _XOPEN_SOURCE=600 and it would fail to link because some 32-bit libs are missing; try to compile as 64-bit, only to find that _XOPEN_SOURCE wouldn't work and piles and piles of function declarations are missing. I had to hack the includes/defines into config.h for VIM to compile and link. autoconf hadn't spotted it because it apparently couldn't imagine the functions being missing.)

              Linux now is more or less clean of the madness.

          • by Anonymous Coward

            The problem is that you effectively have to have two different /usr/lib directories. With potentially two different sets of the libraries. Same consequently goes for the /usr/include. Compiling and running software occasionally becomes nightmarish experience.

            solution: multiarch

            With multiarch in Debian and Ubuntu you can even install your ARM or whatever libraries on an amd64 system without trouble and without any nightmarish experiences.

            • But this is exactly the problem the parent post was talking about - for multiarch, you need two sets of libraries, which *could* be different. Fortunately, Debian keeps things clean, so it works amazingly well, but the problem could still arise.

          • > The problem is that you effectively have to have two different /usr/lib directories.

            That's not a problem. Almost every distro has been moving to a separate lib folder per arch anyway. It's not a nightmare, and it makes cross-compilation easier.

      • If someone could explain to me why x86 architecture is suddenly named x32

        It isn't. x32 is some sort of thunking layer between x86 and x86-64. Or something.

        I'm not sure of the exact details but it's not x86.

    • Re:X32 (Score:4, Interesting)

      by BusterB ( 10791 ) on Sunday July 01, 2012 @09:25AM (#40510103)
      In my opinion, it is designed primarily so that Intel's embedded processors run Android well in the short term. The Atom architecture in particular benefits, in that some pointer offset calculations are faster when done in 32-bit vs 64-bit. Here are some great discussion links: http://blog.flameeyes.eu/2012/06/debunking-x32-myths [flameeyes.eu] http://lwn.net/Articles/503412/ [lwn.net]
      • From the first link

        CPUs perform better on 32-bit operands than 64-bit. Interestingly, the only CPU that Intel admits does perform better on 32-bit, in the presentation I already linked a few times, is the Atom - the quote is actually "64bit imul latency is twice of 32bit imul on Atom".
        Now, what the heck is imul? That's a signed multiply operation. Do you multiply pointers? It doesn't make sense. Besides, pointers are not signed. Are you telling me that your main concern is for a platform (Atom) that has extra latency on an operation when people use 64-bit data types and they should instead use 32-bit? And your solution for that concern is to create a new ABI where it's harder to use 64-bit data types, instead of going to fix whatever program is causing the problem?
        I guess I should end it here, because this last note about the Atom and imul is probably going to make the day of most people who have half a clue.

        That paper is hardly making the case for x32.

  • The thing I keep wondering about x32 is, I have > 4 GB of RAM. If applications use 32-bit pointers, they will be able to address a virtual address space of at most 4 GB. Of course, different processes can be mapped to different physical memory regions, so having > 4GB of RAM still helps, but if you have memory-hungry apps (such as when you try to open complex Excel spreadsheets in LibreOffice), you will hit the 4 gig limit. Uhm. Doesn't look like a win-win situation to me. Can someone shed some light on the
    • If you are likely to need more than 4GB of RAM, don't compile in X32 mode. Of course, if some of your applications use X32 and some use X64 mode, then you'll need two copies of shared libraries loaded, and one extra copy of glibc is likely to offset all of the benefits of X32 mode unless you are running some very unusually pointer-heavy workloads in the X32 programs.
    • You don't need to use x32 for this. It works fine with the usual i386/i686 ABI.

      • You don't need to use x32 for this. It works fine with the usual i386/i686 ABI.

        Except that the i386/i686 ABI is ridiculously crippled. For example, it is register-starved (EAX, EBX, ECX, EDX, ESI, EDI, EBP... and that's it) -- while the processor has plenty more you can't use. Or, you can't use SSE math. Or, you can't have PIC code without a significant penalty. Or, ...

        • You can use SSE math (though it's not enabled by compilers by default), you just can't pass scalar floating-point arguments in SSE registers.

          Also this is a bit irrelevant to what the message I replied to said. The poster asked whether multiple x32 processes could still use more than 4GB. I replied that it is already the case with i386 processes running with an amd64 kernel (actually, it's also even possible with a 32-bit kernel with PAE enabled).
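
          A small sketch of the SSE-math point; the GCC flags are real, and the function is an arbitrary example:

          ```c
          /* Compile: gcc -m32 -O2 axpy.c                      -> x87 code by default
           *          gcc -m32 -O2 -msse2 -mfpmath=sse axpy.c  -> SSE arithmetic,
           *             but the result still comes back in st(0) because the
           *             i386 calling convention says so.
           * Under -m64 or -mx32, SSE is guaranteed and used by default, and
           * doubles are passed and returned in XMM registers. */
          double axpy(double a, double x, double y)
          {
              return a * x + y;
          }
          ```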

    • I hope Firefox doesn't try this trick. Although it doesn't need to use > 4GB, it does need support for leaking more than that. I have an 8GB swap partition, mostly to stop firefox crashing more than once a week!

      • by fatphil ( 181876 )
        It becomes horrifically slow after a few days, so crashing weekly is a positive feature!
    • There is no such thing as an x32 kernel -- it's merely an extension to the amd64 one. Thus, the same kernel can run amd64, i386 and x32 code.

      If all of your userland uses x32, the kernel still uses full 64 bits. Thus, as long as no single process uses more than 2 (3?) GB of address space, you can have as much physical and virtual memory as you want with no loss.

      This said, except for some artificial edge cases, the gains don't outweigh extra complexity of having two incompatible architectures running togeth

  • by Anonymous Coward

    So, one of the things that has changed in this latest release is that only the ELF binary format is supported. What does this actually mean though? I guess it means they dropped a.out [wikipedia.org] and COFF [wikipedia.org], but does anyone still use those?

    Is this particularly a problem, perhaps for embedded *nix? (I.e. is ELF bigger or worse in resource terms compared to the other two formats?)

    As far as I can tell from reading Wikipedia, ELF is much the better format generally, but is it worse in some situations?

    Actually, did GLIBC supp

    • Microsoft has its own compiler and Apple has moved away from it; maintaining support when they are doing much better themselves is probably not a high priority.
    • Re: (Score:3, Informative)

      by Goaway ( 82658 )

      Neither Windows nor Mac OS X uses glibc, so it is not a problem. gcc uses the appropriate libc for the platform.

    • Actually, did GLIBC support MS Windows PE format before now

      On Linux systems, the PE format is handled by Wine if I remember correctly.

    • by peppepz ( 1311345 ) on Sunday July 01, 2012 @09:58AM (#40510233)

      So, one of the things that has changed in this latest release is that only the ELF binary format is supported. What does this actually mean though?

      That you can no longer run Linux executables based on the a.out format. The a.out format was phased out in 1996.

      but does anyone still use those?

      No, nobody does.

      Is this particularly a problem, perhaps for embedded *nix? (I.e. is ELF bigger or worse in resource terms compared to the other two formats?)

      No, because people stopped using the a.out format to store Linux executables long before Linux started appearing on embedded devices. On a side note, many MCUs use ELF as their preferred executable format, so I don't think there's a "size" problem with it.

      As far as I can tell from reading Wikipedia, ELF is much the better format generally, but is it worse in some situations?

      No; that's why nobody has used a.out since 1997. Even if it were competitive with ELF, and it isn't, maintaining two different binary formats for the executables of the same OS would be overkill - especially almost 20 years after the older format was deprecated.

      Actually, did GLIBC support MS Windows PE [wikipedia.org] format before now (a modified form of COFF)? Or what about the Mac Mach-O [wikipedia.org] format?

      GCC can build different file formats, is that also going to change?

      Glibc is only for running Linux or Hurd (I think) executables. These OSes only use ELF. Glibc never ran on Windows or Mac. GCC is a completely separate project, and of course it supports generating executables for Windows, hence it targets PE/COFF on those OSes. There is no relationship with Glibc.

      • by cpghost ( 719344 )

        No, nobody does.

        You can't be sure. But even if there are, something like the reverse of elftoaout [freshports.org] could probably be written relatively easily.

        • Moreover, support for a.out is still there in the kernel. Assuming that you can find some a.out binaries today, and have a reason to run one of them on a contemporary system, you can still do it by having an old libc around. Different versions of libc can coexist on the same system.
      • by Anonymous Coward

        Glibc is only for running Linux or Hurd (I think) executables.

        Last I checked, Glibc had ports for Linux, Hurd, kFreeBSD, OpenSolaris and Syllable. They all use ELF, though.

  • by Anonymous Coward

    This has some nice new features, and --enable-obsolete-rpc will prevent distributions from needing to patch the headers back in if they actually want to support NFS, but has this latest release fixed the problems that spawned EGlibc in the first place?

    My understanding is that EGlibc happened because Glibc didn't accept patches for clearly wrong behaviour (expecting /bin/sh to be /bin/bash for example) and didn't look after the embedded architecture ports well enough, but with the change in leadership and the har

    • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Sunday July 01, 2012 @12:29PM (#40511011)

      The goal is to merge eglibc back into glibc, yes. After the previous glibc steering committee disbanded [h-online.com], it switched to being run by an informal three-person committee, one of whom (Joseph Myers) is also one of the lead maintainers of eglibc, so the two projects' leadership are no longer at odds. And Myers has suggested [sourceware.org] that the goal is to start moving eglibc changes over into the main glibc branch.

    • My understanding is that EGlibc happened because Glibc didn't accept patches for clearly wrong behaviour (expecting /bin/sh to be /bin/bash for example) and didn't look after the embedded architecture ports well enough

      Yeah that was Drepper saying he didn't care about effeminate toy architectures like ARM, because glibc was being developed for manly real architectures like x86.
