
A 'Severe' Bug Was Found In Libgcrypt, GnuPG's Cryptographic Library (helpnetsecurity.com)

Early Friday the principal author of GNU Privacy Guard (the free encryption software) warned that version 1.9.0 of its cryptographic library Libgcrypt, released January 19, had a "severe" security vulnerability and should not be used.

A new version 1.9.1, which fixes the flaw, is available for download, Help Net Security reports: He also noted that Fedora 34 (scheduled to be released in April 2021) and Gentoo Linux are already using the vulnerable version... [I]t's a heap buffer overflow due to an incorrect assumption in the block buffer management code. Just decrypting some data can overflow a heap buffer with attacker controlled data, no verification or signature is validated before the vulnerability occurs.

It was discovered and flagged by Google Project Zero researcher Tavis Ormandy and affects only Libgcrypt v1.9.0.

"Exploiting this bug is simple and thus immediate action for 1.9.0 users is required..." Koch posted on the GnuPG mailing list. "The 1.9.0 tarballs on our FTP server have been renamed so that scripts won't be able to get this version anymore."
  • by AleRunner ( 4556245 ) on Sunday January 31, 2021 @05:51PM (#61013144)

    Surely some of the long scary functions manipulating buffers [gnupg.org] (click "show all lines") need to be carefully refactored into shorter, simpler (possibly inlined) functions with lots of careful, aggressive test cases. Not that I'm volunteering to do it at this point, so I'm not going to criticise those working on it, but looking at the code, we really need lots of people to learn how this stuff works and improve it.

    Sure, that's what you get by default in C, but it's possible to write it in ways that make it safer even without changing language.

    • 120 lines in a single function.

    • Almost correct.

      It needs to be separated into aspects, not functions. So, layers.

      A memory management aspect, a bounds checking aspect, a permission management aspect, etc, and the main pure algorithm aspect.
      This is exactly what languages like Haskell, and features like carnage, err, garbage collectors and monad transformers were invented for.
      Written with the right data types, it would have prevented all of those problems at *compile* time. And be about 10-15 easy-to-read lines.
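
      In the library's own language, the closest thing to that "right data type" idea is a slice that carries its length, so nothing touches a raw pointer without a bounds check. C can't enforce it at compile time the way Haskell's types can, but as a sketch (hypothetical names):

        #include <stdbool.h>
        #include <stddef.h>
        #include <string.h>

        /* A slice that knows its own length. */
        typedef struct {
            unsigned char *ptr;
            size_t len;
        } slice;

        /* Checked write: fails loudly instead of silently overflowing. */
        static bool slice_copy(slice dst, size_t off, const unsigned char *src, size_t n)
        {
            if (off > dst.len || n > dst.len - off)
                return false;              /* bounds violation caught, not executed */
            memcpy(dst.ptr + off, src, n);
            return true;
        }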

      • What is more efficient: a logical XOR on the data, or moving nulls into it? Optionally, each processor (ARM or x86) has memory fencing that MIGHT be possible to add. Remember, pre-execution means the processor may decide to just mark that memory as 'free' and not actually carry out the nulling. Anyway, belt-and-braces bounds checking has been added, so why this release only? Did someone remove checking to make it 'faster'?
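
        Whichever you pick, XOR or storing zeros, the optimizer may treat the wipe as a dead store and delete it entirely; that, more than fencing, is the usual problem. The fix is a primitive the compiler must respect. A minimal sketch (explicit_bzero is the glibc/BSD call, memset_s the optional C11 Annex K cousin; Libgcrypt carries its own wipe helper for the same reason):

          #include <string.h>

          void wipe_key_naive(unsigned char *key, size_t len)
          {
              /* BAD: if key is never read again, the compiler may prove this
                 store dead and remove it (dead-store elimination). */
              memset(key, 0, len);
          }

          void wipe_key_reliably(unsigned char *key, size_t len)
          {
              /* explicit_bzero is defined to survive optimization (glibc/BSD). */
              explicit_bzero(key, len);
          }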
    • Ugh, that code's still as bad as when I looked at it 10+ years ago, magic values hardcoded in everywhere, incomprehensible variable names, barely any comments... it's practically OpenSSL.
      • That sounds troubling. For any code to be secure, and remain secure, it has to be clear to the programmers who review it. Cryptically written code is a way to hide shady shit.
        • ...or make for a great place for a novice dev to insert a defect that might not be checked by others.
  • I'm on Gentoo stable and use libgcrypt 1.8.6. Is that version not vulnerable?

    • Re:oh great (Score:4, Informative)

      by 93 Escort Wagon ( 326346 ) on Sunday January 31, 2021 @06:11PM (#61013222)

      You are fine. From the mailing list announcement:

      "Only one released version is affected:

        - Libgcrypt 1.9.0 (released 2021-01-19)

      All other versions are not affected."
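
      If you want to confirm which libgcrypt your binaries actually link at run time, the library's own gcry_check_version() will tell you. A minimal check, assuming the gcrypt headers are installed:

        #include <stdio.h>
        #include <gcrypt.h>

        int main(void)
        {
            /* Passing NULL skips the minimum-version check; the call
               initializes the library and returns its version string. */
            printf("libgcrypt %s\n", gcry_check_version(NULL));
            return 0;
        }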

    • Somebody who uses Gentoo stable??
      I thought you guys were a myth! That you existed only in tales of ye olde stage-0 installers, about BSD daemons and bank-server knights on a quest for the ancient releases!

      We are unworthy, oh Great One!

      • Yes, there are a few of us who use mostly stable packages. In my case, I only resort to unstable if there is no other way to perform an update. Usually that lasts just one update.
    • I'm on Gentoo stable too, and it appears that the vulnerable version of "libgcrypt" is not available from the current (as of minutes ago) package tree. There is even a security note about it, so I guess someone has already removed it.

      <dev-libs/libgcrypt-1.9.0: Multiple vulnerabilities
      766213 - Assigned to security

      • I'm on Gentoo stable too, and it appears that the vulnerable version of "libgcrypt" is not available from the current (as of minutes ago) package tree.

        My Gentoo system is running updates and libgcrypt is scheduled to downgrade to 1.8.6. It was on 1.9.0. 1.9.0 was installed on January 18.

  • This is what you get when you don't unit test your code.

    I'm able to develop much more quickly, with greater confidence and quality with unit tests. I can quickly test all sorts of scenarios that my code will encounter without having to incur delays from round-trip connections (in my case I'm connecting to a server).

    Any software product that doesn't have these tests in place is built on quicksand.
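
    As a minimal sketch of the kind of edge-case test being argued for here (a hypothetical function, not Libgcrypt's API), even a couple of asserts around a bounded copy would flag the overflow class in question:

      #include <assert.h>
      #include <string.h>

      /* Hypothetical function under test: copies at most cap bytes. */
      static size_t bounded_copy(unsigned char *dst, size_t cap,
                                 const unsigned char *src, size_t n)
      {
          size_t m = (n < cap) ? n : cap;
          memcpy(dst, src, m);
          return m;
      }

      int main(void)
      {
          unsigned char dst[4] = {0};

          /* Normal case: the input fits. */
          assert(bounded_copy(dst, sizeof dst, (const unsigned char *)"ab", 2) == 2);

          /* Edge case: oversized input must truncate, not overflow. */
          assert(bounded_copy(dst, sizeof dst, (const unsigned char *)"abcdef", 6) == 4);

          return 0;
      }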
    • Re:Unit test (Score:5, Insightful)

      by nagora ( 177841 ) on Sunday January 31, 2021 @06:32PM (#61013272)

      This is what you get when you don't unit test your code.

      Don't be complacent - unit tests only test what you thought of to test.

      Do fuzzing as well.

      And review the code by eye every so often; you'll have learnt stuff since you wrote it.
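
      For code like this, fuzzing is cheap to bolt on. A minimal libFuzzer harness, built with clang -fsanitize=fuzzer,address (decrypt_blob is a hypothetical stand-in for the function under test):

        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical function under test. */
        extern void decrypt_blob(const uint8_t *data, size_t size);

        /* libFuzzer calls this with generated inputs; AddressSanitizer
           flags any heap overflow those inputs trigger. */
        int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
        {
            decrypt_blob(data, size);
            return 0;
        }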

      • Better yet is to have someone else check your code, or at the very least its output on realistic input. Not always possible, but generally required in safety-of-life scenarios. Whoopsies like Boeing's Starliner aren't just embarrassments, they're career enders.

      • They said that just decrypting some data can lead to a buffer overflow. That means even a simple unit test would've caught this bug.

        It also implies they don't even perform a smoke test before shipping a release.
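
        The shape of such a smoke test, using libgcrypt's real cipher API (not a reproducer for the actual bug, just a sanity check that one decrypt survives):

          #include <stdio.h>
          #include <gcrypt.h>

          int main(void)
          {
              gcry_cipher_hd_t hd;
              unsigned char key[16] = {0}, iv[16] = {0};
              unsigned char buf[32] = {0};   /* decrypted in place below */

              gcry_check_version(NULL);      /* initialize the library */

              if (gcry_cipher_open(&hd, GCRY_CIPHER_AES128, GCRY_CIPHER_MODE_CBC, 0))
                  return 1;
              gcry_cipher_setkey(hd, key, sizeof key);
              gcry_cipher_setiv(hd, iv, sizeof iv);
              gcry_cipher_decrypt(hd, buf, sizeof buf, NULL, 0);  /* in-place decrypt */
              gcry_cipher_close(hd);

              puts("decrypt completed");
              return 0;
          }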
    • by Anonymous Coward

      Your comment is what you get when you have never even met a Real Programmer [catb.org].

    • Re: Unit test (Score:4, Insightful)

      by BAReFO0t ( 6240524 ) on Sunday January 31, 2021 @07:41PM (#61013488)

      I never understood why people rely on unit tests.

      Because if you do not trust code without unit tests, then logically, since unit tests themselves are code, you cannot trust them either, and would have to write unit tests for unit tests, ad infinitum.

      I found the concept of statistical reliability much more trustworthy. As in: several people write several independent implementations, and the more you have, the more you can rely on the results when they agree.
      Yes, you will hardly ever get to six sigma, with its millions of implementations. But if three flight computers are good enough for a plane, and triple mirroring is good enough for the best RAID storage, then three implementations should be fine. Three actual working implementations.
      Then all the tests would be reduced to a simple diff of the outputs.
      Ideally with an exhaustive input range during staging. Or, alternatively, running all three implementations in parallel *in production*.
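
      A sketch of that "diff the outputs" idea (hypothetical implementation names):

        #include <stddef.h>
        #include <string.h>

        /* Three independent implementations of the same function
           (hypothetical; linked in from separate codebases). */
        extern void impl_a(const unsigned char *in, size_t n, unsigned char out[64]);
        extern void impl_b(const unsigned char *in, size_t n, unsigned char out[64]);
        extern void impl_c(const unsigned char *in, size_t n, unsigned char out[64]);

        /* Returns 1 if all three implementations agree on this input. */
        int implementations_agree(const unsigned char *in, size_t n)
        {
            unsigned char out[3][64];

            impl_a(in, n, out[0]);
            impl_b(in, n, out[1]);
            impl_c(in, n, out[2]);

            return memcmp(out[0], out[1], 64) == 0 &&
                   memcmp(out[1], out[2], 64) == 0;
        }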

      • I never understood why people rely on unit tests.

        Because they don't have a language that catches syntax errors or misspelled variable names at compile time.

      • Not a panacea either. Any non-trivial system has a non-trivial definition of how exactly it should behave. If each implementation defines its own, they'll never agree on anything non-trivial. But if all your implementations use the same definition, you're not safe from bugs in the definition itself. Of course, you're not safe from that with unit tests, either.

      • I never understood why people rely on unit tests.

        I've never understood why people fly wildly to the extremes of any given debate...

        Unit tests are one part of a good testing strategy. You shouldn't have only unit tests, but it's likely a bad idea to have none. If unit tests are hard to write, then your design is probably highly coupled and complex, often a sign of poor design. And working at the lowest level, it's easier to get high coverage with unit tests versus higher-level tests (where the number of per

      • by DrXym ( 126579 )
        Rely? No. But they're damned useful. Unit tests are easy to write and debug, fast to run, and they allow you to test and prove your code works under various inputs and outputs. If someone finds a new failure, it can be added to the tests. It doesn't mean your code is 100% perfect, but it's better than not even knowing if it works at all.

        IMO writing tests becomes more of a pain when things like integration, automation / gui testing happen. It becomes a law of diminishing returns. I've seen tests so complex that

    • Unit tests don't tend to catch security issues, and they never test all possible edge cases.

  • by BAReFO0t ( 6240524 ) on Sunday January 31, 2021 @07:20PM (#61013400)

    Who thought it was a good idea to disable basic bounds checks and the like for software like this??
    Or use a language not made for this, like C.

    This isn't an AAA game! The code's goal is not to be fast with the fenders flying off, but to be secure like a tank! And it certainly doesn't need low-level access or manual resource management, outside of making sure used memory is overwritten after use.

    • Who thought it was a good idea to disable basic bounds checks and the like for software like this?? Or use a language not made for this, like C.

      This isn't an AAA game! The code's goal is not to be fast with the fenders flying off, but to be secure like a tank! And it certainly doesn't need low-level access or manual resource management, outside of making sure used memory is overwritten after use.

      You are most likely incorrect about many of your assumptions. Most hardware comes with cryptographic acceleration that may actually require assembly to take full advantage of. Most likely that is not the case, but some of these accelerations may not be possible in anything but C. That doesn't mean they can't minimize the amount of C used, of course, but it may actually be required. Also, the last time I used this library was to do many tens of thousands of signing operations per second and speed w

    • This is a C library designed to be consumed by C programs running on an operating system written in C.
      C has no bounds checking.
      This code is used in areas which are more performance-critical than an "AAA game".
      Sorry, but no part of your comment makes sense.
    • C is actually perfectly fine if you follow certain basic coding methods. It's used in an awful lot of safety-critical software with things like MISRA (as a part of an ISO 26262/IEC 61508 process) where it achieves damn high levels of reliability. You just need to apply it correctly. Conversely, you can write unreliable crap in any language.
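
      A taste of that discipline, in a hypothetical fragment (not a complete MISRA-compliant module): fixed-width types, every parameter validated, no silent truncation, and a single exit point.

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        typedef enum { COPY_OK = 0, COPY_ERR_PARAM, COPY_ERR_RANGE } copy_status_t;

        copy_status_t safe_copy(uint8_t *dst, size_t dst_len,
                                const uint8_t *src, size_t src_len)
        {
            copy_status_t status = COPY_OK;

            if ((dst == NULL) || (src == NULL)) {
                status = COPY_ERR_PARAM;      /* reject bad pointers up front */
            } else if (src_len > dst_len) {
                status = COPY_ERR_RANGE;      /* never copy more than fits */
            } else {
                (void)memcpy(dst, src, src_len);
            }

            return status;                    /* single exit point */
        }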
    • by tlhIngan ( 30335 )

      Who thought it was a good idea to disable basic bounds checks and the like for software like this??
      Or use a language not made for this, like C.

      Depends. If the code originated in the mid-'90s, or even the early 2000s, it makes complete sense, because the code had to be efficient.

      These days, there are many better options available, so starting from scratch you might burn the extra CPU cycles doing those checks and deal with the people who complain about software bloat.

      And writing a cryptographic library is not easy - it's

  • I ran the code through CodeSonar and the exploit was found in about 3 minutes. For some reason these tools seem to be used only in the engineering, aviation and aerospace industries, and seem to be unheard of in anything to do with GNU; I'm guessing because there is no free version that actually isn't useless.
    • I was a professional tester. The first answer is management, who believe coders are overpaid, that they do the real testing, and that the B team - testers who could not make the transition to coders - are just trained chimps to step through the motions. The second reason is that expensive tools mean training, and paying more money. The third is that management do not want a tool like this to red-mark shocking code - they already know the old stuff is crap. Assigning good people to maintenance and defect removal is just a waste
    • I ran the code through CodeSonar and the exploit was found in about 3 minutes. For some reason these tools seem to be used only in the engineering, aviation and aerospace industries

      ... and other places where you can afford to pay if-you-need-to-ask-you-can't-afford-it prices for a software tool. Basically no one smaller than a billion-dollar corporation, or perhaps a several-hundred-million-dollar corporation, can afford to use these things. Shout out to Coverity and PVS-Studio for allowing free use for open source projects.
