Theoretical Breakthrough Made In Random Number Generation (threatpost.com) 152
msm1267 quotes a report from Threatpost: Two University of Texas academics have made what some experts believe is a breakthrough in random number generation that could have longstanding implications for cryptography and computer security. David Zuckerman, a computer science professor, and Eshan Chattopadhyay, a graduate student, published a paper in March that will be presented in June at the Symposium on Theory of Computing. The paper describes how the academics devised a method for the generation of high quality random numbers. The work is theoretical, but Zuckerman said down the road it could lead to a number of practical advances in cryptography, scientific polling, and the study of other complex environments such as the climate. "We show that if you have two low-quality random sources -- lower quality sources are much easier to come by -- two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number," Zuckerman said. "People have been trying to do this for quite some time. Previous methods required the low-quality sources to be not that low, but more moderately high quality. We improved it dramatically." The technical details are described in the academics' paper "Explicit Two-Source Extractors and Resilient Functions."
Multiplication (Score:1)
And the secret algorithm is... multiplication!
Note: IANAM
Re: Multiplication (Score:1)
Multiplication and other arithmetic operators are indeed secret as far as the average American high schooler is concerned.
Re: Multiplication (Score:5, Informative)
Multiplication and other arithmetic operators are indeed secret as far as the average American high schooler is concerned.
Well, a Mathematician was interrogated because some woman on a plane saw him........
doing math, which in today's America is a sign of trrsm.
Re: (Score:2)
Yes, a woman from Wales.
Re: (Score:2)
Yes, a woman from Wales.
As much as people like to point out that this woman was from Wales, was she in Wales at the time she accused the mathematician of being a terrorist?
We are so fearful in America, ready to panic over stupid shit like alarm clock innards and math symbols, and we are encouraged to stay in that state of fear, as witnessed by commercials I hear every day commanding that citizens call the authorities if they see or hear anything (their emphasis, not mine) out of place.
They were in America, in Philadelphia - and she
Re: (Score:2)
So did she get her math education while she was here or do you suppose she got it in Wales?
Re: (Score:2)
So did she get her math education while she was here or do you suppose she got it in Wales?
She got the reaction to her stupidity in Philadelphia. Which happens to be in the US.
Re: (Score:2)
Absolutely. We are now a fear based society. The Chicken Little of the world.
Re: (Score:2)
And the secret algorithm is... multiplication!
No, then zero will occur more often than other numbers.
I suppose xor ought to work if the two sources are totally unrelated. But then you have to be sure that they really are completely unrelated. For example, feed it the same source twice and all you get is zero.
So I imagine they must have done something a lot more clever than that.
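The XOR combiner discussed above, and its pitfall, fit in a few lines. A minimal sketch (the function name `xor_combine` is made up for illustration):

```python
import os

def xor_combine(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte streams. The result is at least as
    unpredictable as the stronger input, IF the inputs are independent."""
    return bytes(x ^ y for x, y in zip(a, b))

src1 = os.urandom(16)
src2 = os.urandom(16)
combined = xor_combine(src1, src2)

# The pitfall from the comment: feed the same source in twice and
# the "randomness" collapses to all-zero bytes.
assert xor_combine(src1, src1) == bytes(16)
```

This is why XOR alone is dangerous: it silently requires a guarantee of independence that the combiner itself cannot check.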
Re: (Score:1)
It certainly won't make them worse (if they are unrelated), and if they are periodic, the new period will be the least common multiple of the originals.
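The period claim can be checked on short periodic bit patterns. In general the combined period only divides the least common multiple; in this sketch it happens to equal it:

```python
from itertools import islice
from math import lcm

def periodic(pattern):
    """Repeat a short bit pattern forever."""
    while True:
        yield from pattern

def min_period(seq):
    """Smallest p such that the finite sample seq is p-periodic."""
    for p in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p

s1, s2 = [0, 0, 1], [0, 1, 1, 1]   # periods 3 and 4
xored = [a ^ b for a, b in islice(zip(periodic(s1), periodic(s2)), 48)]
print(min_period(xored))  # 12 == lcm(3, 4)
```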
FP (Score:2)
Re: (Score:1)
It could just be that it's high quality randomness.
Re: (Score:2)
I read it too, and I fail to see the breakthrough. There are plenty of pseudo random number generators, such as the Mersenne Twister [wikipedia.org], with very long periods, so just occasionally XORing even a poor-quality random number into the feedback loop is enough to make it completely unpredictable.
Re: FP (Score:1)
Pseudo
Re: (Score:2)
That's more or less the basis for CryptMT...
Re:FP (Score:5, Informative)
Mersenne Twister is pretty much the standard for simulating a uniform distribution in a lot of scientific computing. Such simulations depend not only upon unpredictability (useful for avoiding biases, and clearly important in the security realm), but also upon properties of the uniform distribution.
But when we test it out, we find it's still not as great as we'd like: look at a histogram of outputs, and you'll see that until you get really large numbers of function calls, the histogram isn't particularly uniform. (In particular, numbers near the bottom and top of the range don't get called quite as often.) This means that simulation properties that rely upon uniform distributions over both long and short time periods may be thrown off, and short- and mid-time simulation results may well stem from the MT rather than from the mathematical model. Moreover, low-probability events may have artificially smaller probabilities in the simulations (because of the non-uniformity of the distributions near the bottom and top ends of the range).
Over very short numbers of function calls (a few hundred to a few thousand), the outputs can even tend to cluster in a small neighborhood. So suppose that you are simulating a tissue with many cells, and calling MT as your uniform distribution to decide if the cells divide or stay dormant (each with independent probability p, so each cell divides if PRNG/max(PRNG) < p). The math says that for a uniform distribution, you don't need to worry about what order you evaluate your decision across all the cells. But if the PRNG outputs cluster over several sequential calls, then a neighborhood of cells may simultaneously divide if they are all evaluated close to one another sequentially. In analyzing the spatial behavior of such a simulation, you may draw incorrect conclusions in smaller spatial structures that, again, derive from non-uniformity of the PRNG, rather than problems with predictability. (And then you may accept/reject a particular hypothesis or mathematical model prematurely.)
So, there's definitely more to it than just unpredictability, depending upon where the code is being used.
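The short-run non-uniformity claim above is the commenter's; one way to measure it yourself is to histogram MT19937 outputs at different sample sizes. Python's `random` module happens to use MT19937, so a rough sketch (the function name is made up):

```python
import random

def histogram_deviation(n_samples, n_bins=10, seed=1):
    """Draw n_samples from MT19937 (Python's `random` module) and return
    the max relative deviation of bin counts from the uniform expectation."""
    rng = random.Random(seed)
    counts = [0] * n_bins
    for _ in range(n_samples):
        counts[int(rng.random() * n_bins)] += 1
    expected = n_samples / n_bins
    return max(abs(c - expected) / expected for c in counts)

# Deviation shrinks as the sample grows; short runs can look
# noticeably non-uniform even from a good generator.
print(histogram_deviation(1_000), histogram_deviation(1_000_000))
```

Note this only shows the generic small-sample effect; distinguishing PRNG artifacts from ordinary sampling noise takes proper statistical tests.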
Re: (Score:2)
Re: (Score:2)
Well sure, if you're fine with your simulation being rate-limited by /dev/random.
Affordable hardware entropy sources output on the order of hundreds of kbps, which is, you know, abysmally slow for the amount of calls that are required for meaningful simulation. Without hardware entropy sources (remember, many of these are running on grad students' laptops), using 'true' randomness drops from "embarrassingly" to "unusably" slow.
Re: (Score:2)
Re: (Score:3)
Is there a reason why using the hardware entropy source to change the state of a quality PRNG those few hundred thousand times per second wouldn't work?
That is a normal construct. It's even codified in SP800-90A with the additional input option on the Reseed() function.
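A toy sketch of that construct, periodically folding fresh entropy into a fast PRNG's state. The class name is made up, `os.urandom` stands in for the hardware entropy source, and this only loosely mirrors the SP 800-90A Reseed()-with-additional-input idea (a real DRBG is considerably more careful):

```python
import os
import random

class ReseedingPRNG:
    """Fast PRNG (MT19937 as a stand-in) that mixes fresh entropy into
    its state every `reseed_interval` outputs."""

    def __init__(self, reseed_interval=100_000):
        self.reseed_interval = reseed_interval
        self.calls = 0
        self.rng = random.Random(os.urandom(32))

    def random(self):
        if self.calls % self.reseed_interval == 0:
            # Mix new entropy with part of the old state, rather than
            # replacing the state outright.
            old = str(self.rng.getstate()[1][:4]).encode()
            self.rng = random.Random(os.urandom(32) + old)
        self.calls += 1
        return self.rng.random()
```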
Re: (Score:1)
> Is there a reason why using the hardware entropy source to change the state of a quality PRNG those few hundred thousand times per second wouldn't work?
That's actually how /dev/urandom works: http://www.2uo.de/myths-about-urandom/ ...and yes, it's fine.
Re: (Score:2)
You mean other than short-range non-uniformity in the Mersenne Twister (perhaps the most commonly used quality PRNG), which this thread of comments is about?
Re:FP (Score:5, Interesting)
For large-scale simulations you need them to be pseudo-random, as in repeatable. If you are running a parallel simulation across hundreds or thousands of nodes, you can't send random data to all the nodes. You'd spend all your time sending that, not getting anywhere with your simulation.
Instead, what you do is that each node runs its own random generator, but seeded with the same state. That way they all have the same random source, with no need to flood the interconnect with (literally) random data.
Another reason is repeatability of course. You may often want to run the exact same simulation many times when you're developing and debugging your code.
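The shared-seed scheme described above can be sketched in a few lines; each "node" independently reconstructs the identical stream from nothing but the seed:

```python
import random

def node_stream(shared_seed, n):
    """What one node generates, given only the shared seed
    (no random data travels over the interconnect)."""
    rng = random.Random(shared_seed)
    return [rng.random() for _ in range(n)]

# Every node produces the identical sequence independently, and
# re-running with the same seed reproduces it exactly (debuggability).
streams = [node_stream(12345, 5) for _ in range(3)]
assert streams[0] == streams[1] == streams[2]
```

(In practice many codes instead give each node a distinct sub-stream derived from the shared seed, so nodes don't reuse the same numbers; the transport-cost argument is the same either way.)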
Re: (Score:2)
Just prove it ;) (Score:3)
Let me know when you can prove that; the Nobel committee has a prize for you ;)
That is the problem with randomness: it is borderline impossible to prove (and, if you are unlucky, easier to disprove).
The associated problem in cryptography is trustable randomness ;)
The classic case is the RNG embedded in most Intel chips, where Intel refuses to give specific details, and no one would trust them anyway, so it is not used for anything really secure.
That is not a problem with Intel specifically of course, it is a pro
Re: (Score:2)
Re:FP (Score:5, Informative)
Mersenne Twister is pretty much the standard for simulating a uniform distribution in a lot of scientific computing.
No. Permuted Congruential Generators are. There's no contest. MT is slower and doesn't pass TestU01.
MT is used for reasons of inertia.
Re:FP (Score:5, Informative)
Mersenne Twister is pretty much the standard for simulating a uniform distribution in a lot of scientific computing
kind of is, for the reason as you say of:
MT is used for reasons of inertia.
MT19937 is pretty good: much better than the LCGs that were popular before it, which have some really serious pitfalls that you can hit awfully easily. MT19937 is not perfect, but it has far fewer pitfalls and you're actually pretty unlikely to hit them (it doesn't fail TestU01 outright; it passes almost all of the tests and fails a few). It's also fast enough that you have to try quite hard to make the RNG the limiting factor (it is possible, though).
You can go wrong with MT19937, but it's hard to. That makes it pretty decent. Prior to C++11, xorshift used to be my go-to generator that didn't suck because it was a 5 line paste. These days, it's MT19937 simply because it's in the standard library and good enough for almost all tasks.
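For reference, the xorshift "5-line paste" mentioned above really is tiny. A Python transcription of Marsaglia's xorshift64 with the 13/7/17 shift triple (a sketch for illustration; emphatically not cryptographically secure):

```python
def xorshift64(seed):
    """Marsaglia's xorshift64: a minimal, fast PRNG. Full period
    2^64 - 1 over nonzero states; NOT cryptographically secure."""
    x = seed & 0xFFFFFFFFFFFFFFFF
    assert x != 0, "xorshift64 must be seeded with a nonzero value"
    while True:
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        yield x

gen = xorshift64(88172645463325252)
print(next(gen))
```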
Re: (Score:3)
Sure, lots of people can do that, but can they do the math behind it well enough to get a paper out of it?
Re:FP (Score:5, Interesting)
I read it too, and I fail to see the breakthrough. There are plenty of pseudo random number generators, such as the Mersenne Twister [wikipedia.org], with very long periods, so just occasionally XORing even a poor-quality random number into the feedback loop is enough to make it completely unpredictable.
This proof is about proving mathematically that an algorithm is a good entropy extractor. MT has no such proof. It doesn't even meet the definition of an entropy extractor and isn't cryptographically secure. For algorithms in the same class, PCGs [pcg-random.org] are currently top of the pile, but they still aren't secure.
They claim to have an algorithm for a secure two-input extractor (where the two inputs are independent) which can extract from low-entropy sources. 'Low' means a min-entropy rate of less than 0.5. Single-input extractors suffer from an absence of proofs that they can extract from sources with min-entropy less than 0.5.
I fail to see it as new, because the Barak, Impagliazzo, Wigderson extractor from 2007 did the same trick using three sources, (A*B)+C, where values from A, B and C are treated as elements of a GF(2^P) field, and it can operate with low min-entropy inputs. This is so cheap that the extractor is smaller than the smallest entropy sources. Intel published this paper using the BIW extractor for what is probably the most efficient design (bits/s/unit_area and bits/s/W) to date. [deadhat.com]
I fail to see that it is useful, because real-world sources have min-entropy much higher than 0.5. Low-entropy inputs are a problem mathematicians think we have, but we don't. We engineer these things to be in the 0.9..0.999 range.
On top of it, these two papers are as clear as mud. I tried and failed to identify an explicit construct, and either I'm too dumb to find it or they hid it well. I did find an implication that the first result uses a depth-4 monotone circuit that takes long bit strings and compresses them to 1 bit, and that the second paper extends this with a matrix multiply. That sounds both inefficient and more expensive than the additional small source needed to use the cheaper 3-input BIW extractor.
My take is that there have been two breakthroughs in extractor theory in recent years. Dodis's papers on randomness classes and extraction with cbc-mac and hmac for single input extractors and the BIW paper for three input extractors. Both useful in real world RNGs.
I may be wrong though. I've asked a real mathematician to decode the claims.
Re: (Score:2)
The contribution here is to provide a solid theoretical footing for your "just occasionally XORing", which, if done informally as you propose, is more than likely to lead to surprising statistical defects. Not that I follow the proofs either, but I understand the value of the work.
Re: (Score:2)
All I got out of it was a function of independent random variables, and that level of math was in my very first stats course.
I'll give them the benefit of the doubt and assume I missed something.
Re: (Score:2)
It could be both. Remember that computer science and mathematics very often overlap with each other, especially with cryptography. If you couldn't understand it then maybe you had the wrong focus in computer science. I knew a lot of CS grad students and profs who were essentially mathematicians who couldn't program but who focused on a theoretical computer science area.
Re: (Score:2)
Remember that computer science and mathematics very often overlap with each other
Yes, especially if you realize you can put all the computer science on top of the lambda calculus, which is mathematics. Then it turns out that computer science is a proper subset of mathematics. Which indeed is a specific form of overlap (although the common usage of the word "overlap" could create a mistaken impression that there is a part of computer science which isn't in mathematics).
Re: (Score:2)
Yes, which is why the majority of computer science programs are not very good these days. Back in the 80s I was already seeing a trend with companies insisting that faculty start teaching more job-ready skills, and that was when many important universities didn't even have a computer science department and it was still a part of the math department. Ie, they wanted to see the university teach C instead of Pascal for the introductory courses, things like that. However I rarely see that sort of pressure pu
What a talent! (Score:1)
Re: (Score:1)
This is a failure to make a good a random FullNameGenerator from two less good generators (FirstNameGenerator, SurnameGenerator). David is a common name. Zuckerman is a common name. David Zuckerman is not uncommon enough.
Why not hardware (Score:2)
Re:Why not hardware (Score:5, Informative)
Hardware random number generation based on stochastic random processes (e.g. Johnson noise across a resistor) is a real thing.
There is the minor difficulty that it requires that you actually add the hardware to do the sampling - sure HP didn't mind dropping a /dev/hwrng into my new $3500 laptop, but most people don't buy top-of-the-line hardware, and nobody likes hardware dongles.
The REAL difficulty is that you're now subject to all the usual threats to analog processes. We like to just *assume* that all the noise sources in our analog devices are uncorrelated because that makes our analyses tractable, but that's not the case. In other words, you want to believe that the 20th bit of the noise-sampling ADC is independently flapping in the breeze and providing 1/0 randomly and in equal measure, but that doesn't make it so. This is especially the case inside a box packed with high-speed digital circuitry, which will be coupling nonrandom, nonwhite signals into anything it can. Worst of all, these signals will tend to be correlated with the ADC: Its conversion clock, like almost every other clock signal in a modern computer, is probably either phase-locked to or derived directly from the same crystal in order to provide reliable, synchronous timing between circuits.
Re: (Score:2)
Re: (Score:3)
A problem with hardware RNGs is that they very often don't just give you a random number on demand. They often require a delay between starting and ending the operation, for example, and the longer you wait the more entropy is generated. Those aren't bad drawbacks for the occasional crypto input, but they could be a problem for running simulations.
Re: (Score:2)
There are things which can be done to improve the situation. Build two (or three) symmetrical noise sources and subtract one from the other to remove common mode influences; three sources allow for redundancy and cross checking. For a simple design, a complex ADC is not required; use a comparator followed by a flip-flop or latch. Use the average DC level of the noise to set the comparator's trip point. Or use an ADC instead of the comparator so that the noise source can also be fully characterized in it
Re: (Score:2)
I wonder about using something like AES or a standard encryption algorithm on the stream of numbers coming from the RNG/PRNG, with the encryption key coming from a different RNG pool. This might help with the unpredictability aspect.
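The keyed-whitening idea in the comment can be sketched. The comment suggests AES; since AES isn't in Python's standard library, HMAC-SHA256 in a CTR-like mode stands in here as the keyed function (a deliberate substitution, not what the commenter named), and the function name `whiten` is made up:

```python
import hashlib
import hmac
import os

def whiten(raw_stream: bytes, key: bytes) -> bytes:
    """Keyed whitening of a raw RNG stream: XOR each 32-byte chunk with
    HMAC-SHA256(key, counter). `key` should come from an independent
    entropy pool, per the comment's suggestion."""
    out = bytearray()
    for block, i in enumerate(range(0, len(raw_stream), 32)):
        chunk = raw_stream[i:i + 32]
        mask = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        out += bytes(a ^ b for a, b in zip(chunk, mask))
    return bytes(out)

key = os.urandom(32)            # from a *different* pool in practice
biased = bytes([0x01] * 64)     # stand-in for a heavily biased hardware stream
print(whiten(biased, key).hex())
```

Because the mask depends only on the key and counter, applying `whiten` twice with the same key recovers the input; the output looks uniform to anyone who lacks the key, but the true entropy is still bounded by input entropy plus key entropy.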
Re: (Score:2)
1- Maybe you prefer a repeatable pseudo-random sequence to real random data. Like for testing, or to make stream ciphers.
2- Hardware RNGs are often biased; they may, for example, output a "0" 70% of the time and a "1" 30% of the time. The output needs processing to remove the bias. Combining a biased but truly random generator with an unbiased PRNG can be an effective way to achieve this.
Re: (Score:2)
Why not just use analog hardware to do it; i.e. sampling of white noise?
Did you read the papers? That's what it is about.
it's all noise down here (Score:1)
heck with that. I use brown noise in my SJW random number generator. I tried using black noise, but the numbers kept moving to and 'fro. I used to use pink noise, but it was in an elevator with some XY chromosomes, and next thing I knew, the sequencer was stuck in a loop.
Re: (Score:2)
I'll take the bait-
Sources cited? It's been a while since I've done a project on this, but I'd like to see this evidence of 'non-randomness of white noise' you speak of.
the problem (Score:1)
to be sure: http://dilbert.com/strip/2001-10-25
Low reliability random vs dedicated white noise (Score:3)
I see this very salient question raised earlier-- why not use a dedicated white noise generator for random picks?
Most places where you are going to need a good random integer are places where a dedicated diode-based noise source would add cost to the system, and thus not be ideal: things like an ATM, a cellphone, a laptop, whatever.
The cost is not in the actual component cost (a diode costs what, a few cents?) but in a standardized interface to that noise source.
Conversely, there are several very low quality sample sources that have no physical correlation with each other on such systems that can be leveraged, which have well defined methods of access, which add zero additional cost to the system.
Say, for instance, microphone input + radio noise floor on a cellphone. Perhaps even data from the front and rear cameras as well. (E.g., take Foo milliseconds of audio from the microphone, get an SNR statistic from the radio, and have a pseudorandom shuffle done on data from the front and rear cameras. These sources are, independently, horrible sources of randomness. They are, however, independent of each other. If this guy has found a really good way of getting good randomness from bad sources, then a quality random sequence can be generated quickly and cheaply this way. This would allow the device to rapidly produce quality encrypted connections without a heavy computational load, vastly improving the security of the communication, and do so without any additional OS-level support or extra hardware.)
When deriving a key pair for the likes of GPG, it can take several minutes of "random" activity on the computer to generate a sufficiently high quality entropy pool for key generation. That is far too slow to have really good, rapidly changing encryption on a high security connection. Something like this could let you generate demonstrably random AES 256 keys rapidly on the fly, and have a communication channel not even the NSA can crack.
The trick here is for them to reliably demonstrate that the rapidly produced sequences are indeed quality random sequences with no discernible pattern.
Re: (Score:2)
Re: (Score:2)
This will be true of all digital sources, unless you want to export part of the key synthesis to an external device that has different local EM biases.
If you do that, then you have to trust that source implicitly not to have a subtly poisoned distribution (See the NSA's backdoored random source that caused such controversy in the past few years.)
Perfection is likely not attainable, even with hardware RNGs. This lets you get reasonably good samplings from wholly local, but less robust sources. To effectively exp
Re: (Score:2)
The point was people are talking about PRNGs (guaranteed known distribution, can mimic high-quality random data, can provide cryptographic security e.g. in a key scheduling algorithm) or noise-based RNGs (unknown distribution, often provide poor-quality random data, can be influenced by other operations inside the machine). Noise-based RNGs aren't magically high-quality, flat-distribution, unpredictable number sources; they can be the worst random number sources available.
To effectively exploit the unifying EM bias of the device, you would need to know very specific features of the device and how ambient signals interact with it to shape the distributions, THEN introduce very strong EM signals into the system to get the predictable patterns needed to assault the resulting encryption based on those distributions.
You'd only have to know about t
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The problem with hardware RNGs is that it can be difficult to detect errors and difficult to trust them. If some part of the system fails and introduces bias the only practical way of detecting that is to run expensive randomness tests on the output.
Intel does include an RNG in many of its CPUs, but how far can we trust it? It seems like the NSA has had its claws in Intel for a while, extent unknown. The RNG is an obvious target for them, along with the AES instructions found in i5 and i7 series CPUs.
In com
Re: (Score:2)
Don't look now but ... (Score:2)
"two sources that are independent and have no correlations between them"
I TOLD you not to look. Now, thanks to the observer effect, there IS something they both have in common! Correlations are now inevitable unless we can find a recipient who has nothing in common with you; anyone got a spare alien around to read the message?
To give some context (Score:1)
uhm, no (Score:2)
The importance of random numbers in Crypto is that many symmetric encryption protocols rely on efficiently generating random numbers (e.g., nonces, symmetric keys).
The quality of the random numbers you use to generate RSA primes is of course important too, but prime generation generally isn't done as frequently (because testing primes is relatively slow), so efficiently combining two low-entropy sources isn't as critical; you can combine sources of low entropy and achieve high quality random n
Re: (Score:2)
Nonces do not need to be random or unpredictable. A counter produces perfectly acceptable nonces, see, for example, the counter-mode for block ciphers. The defining characteristic of a nonce is that it gets used only once with respect to a given larger context.
Re: (Score:2)
This is not it. In crypto, you need "secrets" that an attacker cannot easily guess. These need to be "unpredictable" for the attacker, they do not need to be "random" or even "unpredictable" for the defender. (They can then, for example, be fed into a prime-number generator, but there are numerous other uses.) For simplicity, "unpredictable" is usually called "random", but that is not strictly true. Most "random" numbers for crypto are generated using CPRNGs (Cryptographic-PSEUDO-Random-Number-Generators).
As long as Apple gets it and shuffle sucks less (Score:4, Funny)
As long as Apple gets it and shuffle stops jumping back to the same area many, many times a day then I'll be happy. That shit gets on my nerves, yo!
Re: (Score:1)
You don't want a true random number generator then.
Take 23 random people, and there will be a better than 50% chance that two of them will have the same birthday.
Apple actually had to make its "random" shuffle less random when people complained about hearing the same songs too often. A bit like picking 23 people but making sure they don't have a common birthday. The more random you make it, the more people will complain about it not being random enough.
Maybe your shuffle still has the old, truly random shuf
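The 23-people birthday figure quoted above is easy to verify directly; a short calculation (function name made up):

```python
def birthday_collision_prob(k, days=365):
    """Probability that at least two of k people share a birthday,
    assuming birthdays are uniform and independent."""
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

print(round(birthday_collision_prob(23), 3))  # 0.507
```

The same effect is why a truly random shuffle "repeats" songs surprisingly often: collisions among a modest number of uniform picks are far more likely than intuition suggests.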
Re: (Score:2)
Re: (Score:2)
It *is* a shuffle. With shuffle and repeat turned on you do not hear the same song more than once. And, actually, it doesn't reshuffle when it gets to the end of the shuffled list. It plays the same shuffled order again. You have to toggle shuffle off and back on to get a new shuffle, or you have to manually select a song which also nets a new shuffle.
I usually play the same playlist at work. Currently this playlist has 4,709 songs in it, which is simply every song in my library that I have given at le
Re: (Score:2)
Shuffle plays the shuffled list from start to finish. Nothing is repeated until the whole list has been played, at which point the same order is repeated. You have to manually toggle shuffle off and back on to get a new order.
if i understand it correctly (Score:2)
Re: (Score:2)
summary (Score:3)
One way to think about this is to ask: how do you get an unbiased bit out of a biased coin? The simplistic answer is a Von Neumann extractor.
Basically, flip the coin twice, discard HH and TT, and map HT and TH to 0 and 1. This throws away a lot of flips (depending on how biased the coin is) but gets you what you want, provided the flips are independent.
Now imagine, instead of a 2x2 matrix, a really large matrix that takes in a sequence of flips whose length depends on the min-entropy of the source, so that you avoid throwing away flips. How do you fill the elements of this matrix?
You can watch here [youtube.com] and learn something...
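The pairing trick described above fits in a few lines. A sketch using the convention heads = 0 (so the pair (0,1) maps to 0 and (1,0) maps to 1); the function name is made up:

```python
import random

def von_neumann_extract(bits):
    """Von Neumann debiasing: walk non-overlapping pairs, discard
    equal pairs (HH/TT), map (0,1) -> 0 and (1,0) -> 1. Requires
    independent flips; correlated input breaks the guarantee."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(0 if (a, b) == (0, 1) else 1)
    return out

# Pairs: (0,1) -> 0, (1,0) -> 1, (1,1) and (0,0) are discarded.
print(von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0]))  # [0, 1]

# A heavily biased coin (90% ones) still yields roughly unbiased output,
# at the cost of discarding most of the flips.
rng = random.Random(7)
biased = [1 if rng.random() < 0.9 else 0 for _ in range(20_000)]
out = von_neumann_extract(biased)
print(sum(out) / len(out))  # close to 0.5 despite the input bias
```

Note the throughput cost: with bias p, only 2p(1-p) of the pairs survive, which is exactly the inefficiency the "really large matrix" in the comment aims to avoid.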
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
VN is an unbiaser, not an entropy extractor.
It's also fragile.
May be relevant for very small embedded systems (Score:2)
I.e. in situations where you are entropy-starved (so not on regular computers) and getting the, say, 256 bits of entropy to seed a CPRNG is hard to come by. This only applies to really small embedded systems that have no A/D converters or the like. (With an AD converter, just sample some noisy source or even an open input and you get something like > 0.1 bit entropy per sample. Needs careful design though.) Whether such a very small embedded system can then use the math needed here is a different question
Too many secrets (Score:1)
Fortuna? (Score:2)
Two Bees or Not Two Bees (Score:2)
So if you combine a Trump speech with a Palin speech, you get random Shakespeare?
Re: (Score:2)
Worst of all, it's a random number generator... exactly how hard would it be to test it?
Random number generators are notoriously hard to test. You can run as many tests as you like, and find no pattern. But that doesn't mean no pattern is there.
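To make that concrete, here is roughly the simplest statistical test there is, a monobit frequency check (a sketch; real suites like TestU01 or the NIST STS run dozens of far subtler tests). It shows both directions of the problem: obvious failures are caught, but passing proves nothing.

```python
def monobit_pass(bits, z_max=3.0):
    """Is the count of ones consistent with a fair source?
    (z-score of the ones count vs. the binomial expectation)"""
    n = len(bits)
    z = abs(2 * sum(bits) - n) / n ** 0.5
    return z < z_max

# A constant stream fails immediately...
print(monobit_pass([0] * 1000))    # False
# ...but a rigid alternating pattern sails straight through:
# "no pattern found by this test" != "no pattern exists".
print(monobit_pass([0, 1] * 500))  # True
```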
Re: (Score:2)
The longer people look without finding a pattern, the more confidence increases...
Re: (Score:2)
Random number generators are notoriously pointless to test.
You know they're not actually random (I'm with Einstein on this one).
"7,7,7,7,7,7,7,7,7,7,7,7,..." is just as "random" as any other sequence.
People need to shut the fuck up about random numbers because they don't want random numbers.
They want uniformly distributed numbers that don't fit any easily-identifiable pattern.
Re: (Score:2)
"7,7,7,7,7,7,7,7,7,7,7,7,..." is just as "random" as any other sequence.
But you can calculate the exact probability with which this sequence will be produced by a real random number generator, and if this probability is extremely low and nevertheless the sequence occurs, then that's reason enough to reject your pseudo-random number generator.
Re: (Score:2)
Re: (Score:2)
The probability of "7,7,7,7,7,7,7,7,7,7,7,7" is the same as that of 1,2,3,4,5,6,7,8,9,0,1,2 or any other 12-int sequence.
You are thinking of placing sequences in categories (k consecutive numbers), for which the probability indeed decreases rapidly with k.
But that requires you to define similar sequences, which is a form of finding patterns.
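The distinction between a specific sequence and a category of sequences is just arithmetic; exact fractions make it unambiguous:

```python
from fractions import Fraction

digits = 10
length = 12  # both example sequences in the thread have 12 entries

# Any one specific sequence (twelve 7s, or 1,2,3,...) is equally likely:
p_specific = Fraction(1, digits) ** length   # 1 / 10^12

# The *category* "all entries equal" contains 10 such sequences, so it
# is 10x likelier -- but defining that category is itself a form of
# pattern-finding, which is the parent comment's point.
p_all_equal = digits * p_specific            # 1 / 10^11

print(p_specific, p_all_equal)
```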
Re: (Score:3)
A few days ago I had dinner with a friend of mine who worked on this project. Curiosity got the best of me and I said: "Level with me man, did you really create a random number generator?" And he said: "More or less."
Re: (Score:2)
"More" was one of his inputs and "Less" was the other.
Re: (Score:2)
Re: (Score:2)
They do outline a construction. It's not just an existence proof.
They do? It didn't seem very explicit at all.
Lots of mathematics though.
Re: (Score:2)
Re: (Score:1)
Neither of these has a uniform distribution. In the former example, lower numbers are more likely. In the latter example, higher numbers are more likely.
Re: (Score:2)
Re: (Score:1)
Or this one [xkcd.com]
Re: (Score:1)
That is literally how security on the PS3 worked.
Re: (Score:2)
Not necessarily.
Low quality random means the distribution has biases or predictable patterns over long sequences.
Take, e.g., noise from a microphone. It has a distribution bias, and a few other problems that make it bad as a random source.
It is however, completely independent from data taken from a camera, which will have its own biases.
The idea here is pretty good- two biased sources that have uncorrelated biases can get you a less biased, more uniform distribution of values, and better entropy per sample.
Anot
Re: (Score:2)
Good summary. This really is _theoretical_ work.
In practice, you just put enough "dirty"/"weak" random bits into a good CPRNG, and that is it. As long as your estimate of the entropy contained is not too high, this works and is secure. The problem people have with random-number generation is that they either select a bad (C)PRNG or they mess up the entropy estimation in seeding, e.g. due to overlooking negative effects from virtualization. Some also re-use initial, stored seeds (e.g. by cloning a VM and n
Re: This is just what cryptographers have done for (Score:1)
Which hinges on the quality of the hash function. None of these cryptographic hash functions can be fully proven to work. Their applications seem to be leaps and bounds beyond the best theoretical work, but only by stretching their own weak theoretical guarantees very thin.
So this work is somewhat low-level on what it says it can do, but has much firmer foundations.