Papers tagged as Hashing
  1. Practical Graphs for Optimal Side-Channel Resistant Memory Hard Functions 2017 CCS Hashing Implementation eprint.iacr.org
    Joel Alwen, Jeremiah Blocki, and Ben Harsha

    A memory-hard function (MHF) f_n with parameter n can be computed in sequential time and space n, while simultaneously incurring a high amortized parallel area-time complexity (aAT) per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general-purpose device). The most prevalent examples of such sensitive functions are key derivation functions (KDFs) and password hashing algorithms, where rate limits help mitigate offline dictionary attacks. As the honest users’ inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel-resistant MHFs called iMHFs.


    Essentially all iMHFs can be viewed as some mode of operation (making n calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called “depth-robustness”) which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs have been impractically complicated, and little is known about their exact (i.e. non-asymptotic) depth-robustness, both in theory and in practice.
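
    To make the DAG view concrete, here is a minimal sketch (assuming a generic hash as the round function; the edge structure below is a placeholder, not one of the paper's depth-robust constructions) of how an iMHF derives each node's label from the labels of its parents:

```python
# Toy sketch (not the paper's construction): evaluating an iMHF defined by a
# DAG with low indegree, where node i's label is the hash of its parents' labels.
import hashlib

def eval_imhf(password: bytes, salt: bytes, n: int, parents):
    """parents(i) returns the list of predecessor indices of node i (all < i)."""
    labels = [hashlib.sha256(password + salt + b"0").digest()]
    for i in range(1, n):
        h = hashlib.sha256()
        h.update(i.to_bytes(8, "big"))
        for p in parents(i):
            h.update(labels[p])          # data-independent memory accesses
        labels.append(h.digest())
    return labels[-1]                    # output label of the sink node

# Example DAG: each node depends on its immediate predecessor and one fixed
# earlier node (a placeholder edge structure, not a depth-robust graph).
out = eval_imhf(b"hunter2", b"salt", n=1 << 10,
                parents=lambda i: [i - 1] if i < 2 else [i - 1, i // 2])
```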


    In this work we build and analyze (both formally and empirically) several exceedingly simple and efficient-to-navigate practical DAGs for use in iMHFs and other applications. For each DAG we:
    Prove that its depth-robustness is asymptotically maximal.
    Prove bounds on its exact depth-robustness that are at least three orders of magnitude better than the known bounds for other practical iMHFs.
    Implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better than Argon2i, the most widely deployed iMHF in practice.


    Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs. Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT).

  2. T/Key: Second-Factor Authentication From Secure Hash Chains 2017 CCS Hashing Implementation acmccs.github.io
    Dmitry Kogan, Nathan Manohar, and Dan Boneh

    Time-based one-time password (TOTP) systems in use today require storing secrets on both the client and the server. As a result, an attack on the server can expose all second factors for all users in the system. We present T/Key, a time-based one-time password system that requires no secrets on the server. Our work modernizes the classic S/Key system and addresses the challenges in making such a system secure and practical. At the heart of our construction is a new lower bound analyzing the hardness of inverting hash chains composed of independent random functions, which formalizes the security of this widely used primitive. Additionally, we develop a near-optimal algorithm for quickly generating the required elements in a hash chain with little memory on the client. We report on our implementation of T/Key as an Android application. T/Key can be used as a replacement for current TOTP systems, and it remains secure in the event of a server-side compromise. The cost, as with S/Key, is that one-time passwords are longer than the standard six characters used in TOTP.
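
    For intuition, a minimal S/Key-style sketch of the hash-chain mechanism follows (T/Key additionally binds each link to its time slot and uses independent, domain-separated hash functions; the chain length and names here are illustrative):

```python
# Minimal S/Key-style hash-chain sketch: the server stores only the head of the
# chain, and each one-time password is the preimage of the currently stored value.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(seed: bytes, length: int) -> bytes:
    x = seed
    for _ in range(length):
        x = H(x)
    return x

# Setup: the client keeps the seed; the server stores no secret, only H^N(seed).
seed, N = b"client-secret-seed", 1000
server_state = {"current": chain(seed, N), "remaining": N}

# Authentication at step k: the client reveals the (N-k)-th chain element.
def client_otp(k: int) -> bytes:
    return chain(seed, N - k)

def server_verify(otp: bytes) -> bool:
    if H(otp) == server_state["current"]:
        server_state["current"] = otp          # move the check value back one link
        server_state["remaining"] -= 1
        return True
    return False

assert server_verify(client_otp(1))
assert server_verify(client_otp(2))
```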

  3. From Collisions to Chosen-Prefix Collisions - Application to Full SHA-1 2019 Attacks Cryptanalysis Eurocrypt Hashing eprint.iacr.org
    Gaëtan Leurent and Thomas Peyrin

    A chosen-prefix collision attack is a stronger variant of a collision attack, where an arbitrary pair of challenge prefixes are turned into a collision. Chosen-prefix collisions are usually significantly harder to produce than (identical-prefix) collisions, but the practical impact of such an attack is much larger. While many cryptographic constructions rely on collision-resistance for their security proofs, collision attacks are hard to turn into a break of concrete protocols, because the adversary has limited control over the colliding messages. On the other hand, chosen-prefix collisions have been shown to break certificates (by creating a rogue CA) and many internet protocols (TLS, SSH, IPsec).


    In this article, we propose new techniques to turn collision attacks into chosen-prefix collision attacks. Our strategy is composed of two phases: first, a birthday search that aims at bringing the random chaining-variable difference (due to the chosen-prefix model) into a set of pre-defined target differences; then, using a multi-block approach and carefully analysing the clustering effect, we map this new chaining-variable difference to a colliding pair of states using techniques developed for collision attacks.
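
    As a toy illustration of the first (birthday) phase only, the sketch below searches for message tails that bring the difference of two truncated digests into a small set of target differences; the truncation width, target set, and use of SHA-256 are illustrative stand-ins for the SHA-1 chaining-variable setting:

```python
# Toy birthday-search phase: find tails t1, t2 such that the difference of the
# truncated digests of prefix1||t1 and prefix2||t2 lies in a small target set.
# All sizes and names are illustrative; the real attack uses SHA-1 internals.
import hashlib, itertools

TRUNC = 3                                    # 24-bit toy state for demonstration
TARGETS = {0x000000, 0x000001, 0x800000}     # hypothetical allowed differences

def state(prefix: bytes, tail: int) -> int:
    d = hashlib.sha256(prefix + tail.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:TRUNC], "big")

def birthday_phase(p1: bytes, p2: bytes):
    seen = {}                                 # value -> tail that produced it under p1
    for tail in itertools.count():
        seen[state(p1, tail)] = tail
        v2 = state(p2, tail)
        for t in TARGETS:                     # look for a stored value at a target difference
            if (v2 ^ t) in seen:
                return seen[v2 ^ t], tail     # (tail for p1, tail for p2)

print(birthday_phase(b"prefix-A", b"prefix-B"))
```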


    We apply those techniques to MD5 and SHA-1, and obtain improved attacks. In particular, we have a chosen-prefix collision attack against SHA-1 with complexity between 2^66.9 and 2^69.4 (depending on assumptions about the cost of finding near-collision blocks), while the best-known attack has complexity 2^77.1. This is within a small factor of the complexity of the classical collision attack on SHA-1 (estimated as 2^64.7). This represents yet another warning that industries and users have to move away from using SHA-1 as soon as possible.

  4. A New Distribution-Sensitive Secure Sketch and Popularity-Proportional Hashing 2017 Crypto Hashing eprint.iacr.org
    Joanne Woodage and Rahul Chatterjee and Yevgeniy Dodis and Ari Juels and Thomas Ristenpart

    Motivated by typo correction in password authentication, we investigate cryptographic error-correction of secrets in settings where the distribution of secrets is a priori (approximately) known. We refer to this as the distribution-sensitive setting.


    We design a new secure sketch called the layer-hiding hash (LHH) that offers the best security to date. Roughly speaking, we show that LHH saves an additional log H_0(W) bits of entropy compared to the recent layered sketch construction due to Fuller, Reyzin, and Smith (FRS). Here H_0(W) is the size of the support of the distribution W. When supports are large, as with passwords, our new construction offers a substantial security improvement.


    We provide two new constructions of typo-tolerant password-based authentication schemes. The first combines an LHH or FRS sketch with a standard slow-to-compute hash function, and the second avoids secure sketches entirely, correcting typos instead by checking all nearby passwords. Unlike the previous such brute-force-checking construction, due to Chatterjee et al., our new construction uses a hash function whose run-time is proportional to the popularity of the password (forcing a longer hashing time on more popular, lower-entropy passwords). We refer to this as popularity-proportional hashing (PPH). We then introduce a framework for comparing different typo-tolerant authentication approaches. We show that PPH always offers a better time/security trade-off than the LHH and FRS constructions, and for certain distributions outperforms the Chatterjee et al. construction. Elsewhere, this latter construction offers the best trade-off. In aggregate, our results suggest that the best known secure sketches are still inferior to simpler brute-force-based approaches.
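
    A hedged sketch of the popularity-proportional idea is given below; the popularity estimates, iteration budget, and use of PBKDF2 are placeholders for illustration, not the paper's construction or parameters:

```python
# Popularity-proportional hashing sketch: spend more hashing work on more
# popular (lower-entropy) passwords. The popularity model and cost schedule
# here are illustrative placeholders.
import hashlib, os

TOTAL_BUDGET = 50_000_000     # illustrative scaling constant
MIN_ITERS = 10_000            # floor for unpopular (high-entropy) passwords

def estimated_popularity(pw: str) -> float:
    # Placeholder: in practice this comes from an (approximate) model of the
    # password distribution, which the distribution-sensitive setting assumes.
    common = {"123456": 0.01, "password": 0.006, "qwerty": 0.003}
    return common.get(pw, 1e-7)

def pph_hash(pw: str, salt: bytes = None):
    salt = salt or os.urandom(16)
    # More popular passwords get proportionally more hashing work.
    iters = max(MIN_ITERS, int(TOTAL_BUDGET * estimated_popularity(pw)))
    return salt, iters, hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, iters)

def pph_verify(pw: str, salt: bytes, iters: int, digest: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, iters) == digest

salt, iters, d = pph_hash("password")       # popular password -> more iterations
assert pph_verify("password", salt, iters, d)
```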

  5. New Collision Attacks on Round-Reduced Keccak 2017 Attacks Eurocrypt Hashing eprint.iacr.org
    Kexin Qiao, Ling Song, Meicheng Liu, and Jian Guo

    In this paper, we focus on collision attacks against the Keccak hash function family and some of its variants. Following the framework developed by Dinur et al. at FSE 2012, where 4-round collisions were found by combining 3-round differential trails and 1-round connectors, we extend the connectors one round further and hence achieve collision attacks for up to 5 rounds. The extension is possible thanks to the large degree of freedom of the wide internal state. By linearizing all S-boxes of the first round, the problem of finding solutions to 2-round connectors is converted to that of solving a system of linear equations. However, due to the quick reduction of freedom caused by the linearization, the system has a solution only when the 3-round differential trails satisfy some additional conditions. We develop a dedicated differential trail search strategy and find that such special differentials indeed exist. As a result, the first practical collision attack against 5-round SHAKE128 and two 5-round instances of the Keccak collision challenges are found, with real examples. We also give the first results against 5-round Keccak-224 and the 6-round Keccak collision challenges. It is remarked that the work here is still far from threatening the security of the full 24-round Keccak family.

  6. Hashing Garbled Circuits for Free 2017 Eurocrypt GarbledCircuits Hashing MPC eprint.iacr.org
    Xiong Fan, Chaya Ganesh, and Vladimir Kolesnikov

    We introduce Free Hash, a new approach to generating a Garbled Circuit (GC) hash at no extra cost during GC generation. This is in contrast with state-of-the-art approaches, which hash GCs at a computational cost of up to 6× that of GC generation. GC hashing is at the core of the cut-and-choose technique of GC-based secure function evaluation (SFE).


    Our main idea is to intertwine hash generation/verification with GC generation and evaluation. While we allow an adversary to generate a GC ĜC whose hash collides with an honestly generated GC, such a ĜC will w.h.p. fail evaluation and cheating will be discovered. Our GC hash is simply a (slightly modified) XOR of all the gate table rows of the GC. It is compatible with Free XOR and half-gates garbling, and can be made to work with many cut-and-choose SFE protocols.
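
    A toy sketch of the accumulator structure is shown below, assuming fixed-size gate-table rows; it illustrates only the idea that the hash is an XOR folded into generation/transfer, not the paper's exact (slightly modified) XOR or its half-gates-compatible details:

```python
# Toy Free Hash accumulator: XOR all garbled gate-table rows as they are
# generated or streamed, so the "hash" costs essentially nothing extra.
import os

ROW_LEN = 16  # e.g. one 128-bit ciphertext per garbled row

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def free_hash(gate_tables):
    """gate_tables: iterable of garbled gates, each a list of fixed-size rows."""
    acc = bytes(ROW_LEN)
    for gate in gate_tables:
        for row in gate:
            acc = xor_bytes(acc, row)   # in practice folded into garbling/streaming
    return acc

# Both generator and evaluator can accumulate the same value as the circuit is
# streamed; a mismatch with the expected hash flags a substituted circuit.
gates = [[os.urandom(ROW_LEN) for _ in range(4)] for _ in range(3)]
assert free_hash(gates) == free_hash(gates)
```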


    With today’s network speeds not far behind hardware-assisted fixed-key garbling throughput, eliminating the GC hashing cost will significantly improve SFE performance. Our estimates show a substantial cost reduction in typical settings, and up to a factor of 6 in specialized applications relying on GC hashes.


    We implemented the GC hashing algorithm and report on its performance.

  7. Conditional Cube Attack on Reduced-Round Keccak Sponge Function 2017 Attacks Eurocrypt Hashing eprint.iacr.org
    Senyang Huang, Xiaoyun Wang, Guangwu Xu, Meiqin Wang, and Jingyuan Zhao

    The security analysis of Keccak, the winner of the SHA-3 competition, has attracted considerable interest. Recently, some attention has been paid to the analysis of keyed modes of the Keccak sponge function. As a notable example, the most efficient key-recovery attacks on Keccak-MAC and Keyak were reported at EUROCRYPT ’15, where cube attacks and cube-attack-like cryptanalysis have been applied. In this paper, we develop a new type of cube distinguisher, the conditional cube tester, for the Keccak sponge function. By imposing some bit conditions on certain cube variables, we are able to construct cube testers with smaller dimensions. Our conditional cube testers are used to analyse Keccak in keyed modes. For reduced-round Keccak-MAC and Keyak, our attacks greatly improve the best known key-recovery attacks in terms of the number of rounds or the complexity. Moreover, our new model can also be applied in the keyless setting to distinguish the Keccak sponge function from a random permutation. We provide a search algorithm that produces the most efficient conditional cube tester by modeling it as an MILP (mixed integer linear programming) problem. As a result, we improve the previous distinguishing attacks on the Keccak sponge function significantly. Most of our attacks have been implemented and verified on desktop computers. Finally, we remark that our attacks on reduced-round Keccak do not threaten the security margin of the Keccak sponge function.
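
    To make the underlying cube-tester principle concrete (the conditional variant and the Keccak specifics are beyond a short sketch), here is a toy example: when the output's algebraic degree in the chosen cube variables is lower than the cube dimension, XOR-summing the output over the whole cube yields 0 for every key:

```python
# Toy illustration of the plain cube-tester principle (not the conditional cube
# tester of the paper, and not Keccak): a cube of dimension 3 over a function of
# degree 2 in the public variables always sums to 0, which distinguishes it from
# a random function.
import itertools, random

def f(key_bits, pub_bits):
    k0, k1, k2 = key_bits
    x0, x1, x2, x3 = pub_bits
    # Degree 2 in the public variables x0..x3.
    return (k0 & x0 & x1) ^ (k1 & x2) ^ (k2 & x3) ^ (x1 & x3)

def cube_sum(key_bits, cube_idx, n_pub=4):
    s = 0
    for assignment in itertools.product([0, 1], repeat=len(cube_idx)):
        pub = [0] * n_pub                     # non-cube variables fixed to 0
        for i, b in zip(cube_idx, assignment):
            pub[i] = b
        s ^= f(key_bits, pub)
    return s

# A 3-dimensional cube exceeds the degree (2), so the sum is 0 for any key.
for _ in range(5):
    key = [random.randint(0, 1) for _ in range(3)]
    assert cube_sum(key, cube_idx=[0, 1, 2]) == 0
```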

  8. The first collision for full SHA-1 2017 Cryptanalysis Crypto Hashing eprint.iacr.org
    Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, Yarik Markov

    SHA-1 is a widely used cryptographic hash function, standardized by NIST in 1995 and officially deprecated by NIST in 2011 due to fundamental security weaknesses demonstrated in various analyses and theoretical attacks. Despite its deprecation, SHA-1 remains widely used in 2017 for document and TLS certificate signatures, as well as in much software, such as the Git version control system, for integrity and backup purposes.


    A key reason behind the reluctance of many industry players to replace SHA-1 with a safer alternative is the fact that finding an actual collision has seemed to be impractical for the past eleven years due to the high complexity and computational cost of the attack.


    In this paper, we demonstrate that SHA-1 collision attacks have finally become practical by providing the first known instance of a collision.


    Furthermore, the prefix of the colliding messages was carefully chosen so that it allows an attacker to forge two distinct PDF documents with the same SHA-1 hash that display different, arbitrarily chosen visual contents.


    We were able to find this collision by combining many special cryptanalytic techniques in complex ways and improving upon previous work. In total the computational effort spent is equivalent to 2^63.1 calls to SHA-1’s compression function, and took approximately 6,500 CPU years and 100 GPU years. While the computational power spent on this collision is larger than other public cryptanalytic computations, it is still more than 100,000 times faster than a brute force search.

  9. Depth-Robust Graphs and Their Cumulative Memory Complexity 2017 Eurocrypt Hashing Passwords eprint.iacr.org
    Joël Alwen, Jeremiah Blocki, and Krzysztof Pietrzak

    Data-independent memory-hard functions (iMHFs) are finding a growing number of applications in security, especially in the domain of password hashing. An important property of a concrete iMHF is specified by fixing a directed acyclic graph (DAG) G_n on n nodes. The quality of that iMHF is then captured by the following two pebbling complexities of G_n:
    The parallel cumulative pebbling complexity Π^∥_cc(G_n) must be as high as possible (to ensure that the amortized cost of computing the function on dedicated hardware is dominated by the cost of memory).
    The sequential space-time pebbling complexity Π_st(G_n) should be as close as possible to Π^∥_cc(G_n) (to ensure that using many cores in parallel and amortizing over many instances does not give much of an advantage).


    In this paper we construct a family of DAGs with the best possible parameters in an asymptotic sense, i.e., where Π^∥_cc(G_n) = Ω(n^2 / log(n)) (which matches a known upper bound) and Π_st(G_n) is within a constant factor of Π^∥_cc(G_n).


    Our analysis relies on a new connection between the pebbling complexity of a DAG and its depth-robustness (DR) – a well-studied combinatorial property. We show that high DR is sufficient for high Π^∥_cc. Alwen and Blocki (CRYPTO ’16) showed that high DR is necessary and so, together, these results fully characterize DAGs with high Π^∥_cc in terms of DR.
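
    For concreteness, (e, d)-depth-robustness means that removing any e nodes from the DAG still leaves a directed path of length at least d. The brute-force checker below makes the definition explicit on small DAGs (the example graph and exhaustive search are purely illustrative; the paper's analysis does not proceed this way):

```python
# Brute-force check of (e, d)-depth-robustness for small DAGs, for illustration.
from itertools import combinations

def longest_path(n, edges, removed):
    """Length (in edges) of the longest directed path after deleting `removed` nodes.
    Nodes are 0..n-1 and every edge (u, v) has u < v, so increasing v is a
    topological order."""
    depth = [0] * n
    for u, v in sorted(edges, key=lambda e: e[1]):
        if u not in removed and v not in removed:
            depth[v] = max(depth[v], depth[u] + 1)
    return max(depth[v] for v in range(n) if v not in removed)

def is_depth_robust(n, edges, e, d):
    """True iff every way of removing e nodes leaves a path of >= d edges."""
    return all(longest_path(n, edges, set(s)) >= d
               for s in combinations(range(n), e))

# The complete DAG (an edge i -> j for every i < j) is maximally depth-robust:
# any e removals still leave a path through the remaining n - e nodes.
n = 8
complete_dag = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(is_depth_robust(n, complete_dag, e=3, d=n - 3 - 1))   # True
```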


    Complementing these results, we provide new upper and lower bounds on the Π^∥_cc of several important candidate iMHFs from the literature. We give the first lower bounds on the memory hardness of the Catena and Balloon Hashing functions in a parallel model of computation, and we give the first lower bounds of any kind for (a version of) Argon2i.


    Finally, we describe a new class of pebbling attacks improving on those of Alwen and Blocki (CRYPTO ’16). By instantiating these attacks we upper-bound the Π^∥_cc of the Password Hashing Competition winner Argon2i and one of the Balloon Hashing functions by O(n^1.71). We also show an upper bound of O(n^1.625) for the Catena functions and the two remaining Balloon Hashing functions.

  10. Scrypt is Maximally Memory-Hard 2017 Eurocrypt Hashing Passwords eprint.iacr.org
    Joël Alwen, Binyi Chen, Krzysztof Pietrzak, Leonid Reyzin, and Stefano Tessaro

    Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost. As memory, unlike computation, costs about the same across different platforms, MHFs cannot be evaluated at significantly lower cost on dedicated hardware like ASICs. MHFs have found widespread applications including password hashing, key derivation, and proofs-of-work.


    This paper focuses on scrypt, a simple candidate MHF designed by Percival, and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known.
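
    For reference, a toy rendering of scrypt's ROMix structure (RFC 7914) is sketched below, with SHA-256 standing in for the real BlockMix/Salsa20/8 core purely for illustration; the data-dependent indexing in the second loop is what distinguishes scrypt from data-independent MHFs:

```python
# Toy sketch of scrypt's ROMix: fill a table with the iterated hash of the input,
# then perform n sequential, data-dependent lookups into it. SHA-256 replaces the
# real BlockMix/Salsa20/8 core only to keep the sketch short.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def romix_toy(block: bytes, n: int) -> bytes:
    # Phase 1: V[i] = H^i(block), storing n blocks in memory.
    V, x = [], block
    for _ in range(n):
        V.append(x)
        x = H(x)
    # Phase 2: n data-dependent reads into V.
    for _ in range(n):
        j = int.from_bytes(x[:8], "little") % n   # "Integerify" stand-in
        x = H(xor(x, V[j]))
    return x

print(romix_toy(H(b"password" + b"salt"), n=1 << 14).hex())
```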


    We prove that scrypt is optimally memory-hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is Ω(n^2·w), where w and n are the output length and number of invocations of the underlying hash function, respectively. High cmc is a strong security target for MHFs introduced by Alwen and Serbinenko (STOC ’15) which implies high memory cost even for adversaries who can amortise the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first showing optimal memory hardness for any MHF.

  11. Better Than Advertised: Improved Collision-Resistance Guarantees for MD-Based Hash Functions 2017 CCS Hashing acmccs.github.io
    Mihir Bellare, Joseph Jaeger, and Julia Len

    The MD transform that underlies the MD and SHA families iterates a compression function h to get a hash function H. The question we ask is: what property X of h guarantees collision resistance (CR) of H? The classical answer is that X itself be CR. We show that weaker conditions X, in particular forms of what we call constrained-CR, suffice. This reduces the demands on compression functions, to the benefit of security, and also, forensically, explains why collision-finding attacks on compression functions have not, historically, led to immediate breaks of the corresponding hash functions. We obtain our results via a definitional framework called RS security, and a parameterized treatment of MD, that also serve to unify prior work and variants of the transform.
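
    For readers unfamiliar with the transform, a minimal sketch of the MD iteration follows; the compression function and padding are toy stand-ins (real MD hashing uses length-strengthening padding, which matters for the stated guarantees):

```python
# Minimal MD-transform sketch: H iterates a compression function h over message
# blocks, chaining through a state that starts at a fixed IV.
import hashlib

BLOCK = 64
IV = bytes(32)

def h(chain: bytes, block: bytes) -> bytes:
    """Toy compression function: 32-byte chaining value x 64-byte block -> 32 bytes."""
    return hashlib.sha256(chain + block).digest()

def md_hash(msg: bytes) -> bytes:
    # Simplified padding: zero-pad to a block boundary, then append a length block.
    padded = msg + b"\x00" * (-len(msg) % BLOCK) + len(msg).to_bytes(BLOCK, "big")
    state = IV
    for i in range(0, len(padded), BLOCK):
        state = h(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"abc").hex())
```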

  12. Zero-Knowledge Contingent Payments Revisited: Attacks and Payments for Services 2017 Attacks Blockchains CCS Hashing Implementation ZK acmccs.github.io
    Matteo Campanelli, Rosario Gennaro, Steven Goldfeder, and Luca Nizzardo

    Zero Knowledge Contingent Payment (ZKCP) protocols allow fair exchange of sold goods and payments over the Bitcoin network. In this paper we point out two main shortcomings of current proposals for ZKCP, and propose ways to address them.
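
    As background, a heavily simplified skeleton of the ZKCP data flow is sketched below (no Bitcoin script or zero-knowledge machinery, and the toy stream cipher is only for illustration); the elided ZK proof is the component whose buyer-chosen common parameters the attack described below exploits:

```python
# Simplified ZKCP data flow: the seller encrypts the good under a key k and
# commits to y = SHA256(k); payment goes to a hash lock on y, so revealing k to
# claim the coins simultaneously lets the buyer decrypt.
import hashlib, os

def keystream(k: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def enc(k: bytes, m: bytes) -> bytes:          # toy stream cipher for illustration
    return bytes(a ^ b for a, b in zip(m, keystream(k, len(m))))

# Seller side
good = b"solved sudoku grid ..."
k = os.urandom(32)
ciphertext = enc(k, good)
hash_lock = hashlib.sha256(k).digest()
# (A ZK proof that `ciphertext` decrypts, under the preimage of `hash_lock`,
#  to a valid good would be attached here; its common parameters are the part
#  the paper shows a malicious buyer can abuse.)

# Buyer pays to a transaction spendable by revealing a preimage of hash_lock.
# Seller claims the payment by publishing k; the buyer then decrypts.
assert hashlib.sha256(k).digest() == hash_lock
assert enc(k, ciphertext) == good
```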


    First we show an attack that allows a buyer to learn partial information about the digital good being sold, without paying for it. This break in the zero-knowledge condition of ZKCP is due to the fact that in the protocols we attack, the buyer is allowed to choose common parameters that normally should be selected by a trusted third party. We implemented and tested this attack: we present code that learns, without paying, the value of a Sudoku cell in the “Pay-to-Sudoku” ZKCP implementation. We also present ways to fix this attack that do not require a trusted third party.


    Second, we show that ZKCP protocols are not suited to the purchase of digital services, as opposed to goods. Current constructions of ZKCP do not allow a seller to receive payment after proving that a certain service has been rendered, but only for the sale of a specific digital good. We define the notion of Zero-Knowledge Contingent Service Payment (ZKCSP) protocols and construct two new protocols, for either public or private verification. We implemented our ZKCSP protocols for Proofs of Retrievability, where a client pays the server for providing a proof that the client’s data is correctly stored by the server. We also implement a secure ZKCP protocol for “Pay-to-Sudoku” via our ZKCSP protocol, which does not require a trusted third party.


    A side product of our implementation effort is a new optimized circuit for SHA256 with less than a quarter of the number of AND gates of the best previously publicly available one. Our new SHA256 circuit may be of independent use for circuit-based MPC and FHE protocols that require SHA256 circuits.

  13. Speeding up detection of SHA-1 collision attacks using unavoidable attack conditions 2017 Hashing Usenix usenix.org
    Marc Stevens and Daniel Shumow

    Counter-cryptanalysis, the concept of using cryptanalytic techniques to detect cryptanalytic attacks, was introduced at CRYPTO 2013 [23] with a hash collision detection algorithm, that is, an algorithm that detects whether a given single message is part of a colliding message pair constructed using a cryptanalytic collision attack on MD5 or SHA-1.


    Unfortunately, the original collision detection algorithm is not a low-cost solution, as it costs 15 to 224 times more than a single hash computation. In this paper we present a significant performance improvement for collision detection based on the new concept of unavoidable conditions. Unavoidable conditions are conditions that are necessary for all feasible attacks in a certain attack class. As such, they can be used to quickly dismiss particular attack classes that may have been used in the construction of the message. To determine an unavoidable condition, one must rule out any feasible variant attack where this condition might not be necessary; otherwise, adversaries aware of counter-cryptanalysis could easily bypass this improved collision detection with a carefully chosen variant attack. Based on a conjecture solidly supported by the current state of the art, we show how we can determine such unavoidable conditions for SHA-1.
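
    A hedged sketch of the resulting filtering pattern follows; the condition encoding, class structure, and function names are illustrative placeholders rather than the paper's actual SHA-1 conditions or implementation:

```python
# Filtering pattern: cheap per-class "unavoidable condition" checks on the
# already-computed working state run first, and the expensive recompression-based
# detection from the original counter-cryptanalysis algorithm runs only for
# classes whose conditions hold.
def conditions_hold(state_words, conditions):
    """conditions: list of (word_index, bit_index, expected_bit) placeholders."""
    return all(((state_words[w] >> b) & 1) == v for (w, b, v) in conditions)

def detect_collision_attack(message_block, state_words, attack_classes,
                            expensive_check):
    suspicious = []
    for cls in attack_classes:
        # Cheap filter: dismiss the class unless its unavoidable conditions hold.
        if not conditions_hold(state_words, cls["unavoidable_conditions"]):
            continue
        # Expensive path (original detection): recompute with the class's
        # message difference and compare intermediate hash values.
        if expensive_check(message_block, state_words, cls):
            suspicious.append(cls["name"])
    return suspicious

# Example with dummy data: no class survives the filter, so no expensive checks run.
classes = [{"name": "example-class", "unavoidable_conditions": [(0, 31, 1)]}]
print(detect_collision_attack(b"\x00" * 64, [0] * 5, classes,
                              expensive_check=lambda *a: True))   # []
```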


    We have implemented the improved SHA-1 collision detection using such unavoidable conditions; it is more than 20 times faster than without the unavoidable-condition improvements. We have measured that, overall, our implementation of SHA-1 with collision detection is only a factor of 1.60 slower, on average, than SHA-1. Following the demonstration of a SHA-1 collision, the algorithm presented here has been deployed by Git, GitHub, Google Drive, Gmail, Microsoft OneDrive, and others, showing the effectiveness of this technique.