Papers tagged as 2020
  1. MIRAGE: Succinct Arguments for Randomized Algorithms with Applications to Universal zk-SNARKs 2020 Implementation Usenix ZK zkSNARK usenix.org
    Ahmed Kosba, Dimitrios Papadopoulos, Charalampos Papamanthou, Dawn Song

    The last few years have witnessed increasing interest in the deployment of zero-knowledge proof systems, in particular ones with succinct proofs and efficient verification (zk-SNARKs). One of the main challenges facing the wide deployment of zk-SNARKs is the requirement of a trusted key generation phase per different computation to achieve practical proving performance. Existing zero-knowledge proof systems that do not require trusted setup or have a single trusted preprocessing phase suffer from increased proof size and/or additional verification overhead. On the other hand, although universal circuit generators for zk-SNARKs (that can eliminate the need for per-computation preprocessing) have been introduced in the literature, the performance of the prover remains far from practical for real-world applications.
    In this paper, we first present a new zk-SNARK system that is well-suited for randomized algorithms—in particular it does not encode randomness generation within the arithmetic circuit allowing for more practical prover times. Then, we design a universal circuit that takes as input any arithmetic circuit of a bounded number of operations as well as a possible value assignment, and performs randomized checks to verify consistency. Our universal circuit is linear in the number of operations instead of quasi-linear like other universal circuits. By applying our new zk-SNARK system to our universal circuit, we build MIRAGE, a universal zk-SNARK with very succinct proofs—the proof contains just one additional element compared to the per-circuit preprocessing state-of-the-art zk-SNARK by Groth (Eurocrypt 2016). Finally, we implement MIRAGE and experimentally evaluate its performance for different circuits and in the context of privacy-preserving smart contracts.
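
    The "randomized checks" above are in the spirit of standard probabilistic consistency tests. The sketch below (our illustration, not MIRAGE's actual universal circuit; the function names and field choice are ours) shows the classic example of such a check: verifying that two lists agree as multisets by comparing the products $\prod_i (r - a_i)$ and $\prod_i (r - b_i)$ at a random point $r$, which errs with probability at most $n/p$ by the Schwartz-Zippel lemma.

```python
# Toy multiset-equality ("permutation") check over a prime field, in the spirit
# of the randomized consistency checks used by universal circuits. Illustrative
# sketch only, not MIRAGE's construction; all names here are hypothetical.
import secrets

P = 2**61 - 1  # a prime modulus, standing in for the SNARK field

def multiset_check(a, b, r):
    """Accept iff prod(r - x for x in a) == prod(r - y for y in b) mod P.
    Equal multisets always pass; unequal ones fail except with probability
    at most len(a)/P over the choice of r (Schwartz-Zippel)."""
    pa = pb = 1
    for x in a:
        pa = pa * ((r - x) % P) % P
    for y in b:
        pb = pb * ((r - y) % P) % P
    return pa == pb

if __name__ == "__main__":
    a = [3, 1, 4, 1, 5, 9, 2, 6]
    b = [9, 1, 6, 4, 1, 5, 2, 3]        # same multiset, different order
    c = [9, 1, 6, 4, 1, 5, 2, 7]        # differs in one element
    r = secrets.randbelow(P)            # verifier's random challenge
    assert multiset_check(a, b, r)      # accepted
    assert not multiset_check(a, c, r)  # rejected with high probability
    print("randomized multiset check behaved as expected")
```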

  2. SEAL: Attack Mitigation for Encrypted Databases via Adjustable Leakage 2020 EncryptedDatabases ORAM SearchableEncryption Usenix usenix.org
    Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou, Saurabh Shintre

    Building expressive encrypted databases that can scale to large volumes of data while enjoying formal security guarantees has been one of the holy grails of security and cryptography research. Searchable Encryption (SE) is considered to be an attractive implementation choice for this goal: It naturally supports basic database queries such as point, join, group-by and range, and is very practical at the expense of well-defined leakage such as search and access pattern. Nevertheless, recent attacks have exploited these leakages to recover the plaintext database or the posed queries, casting doubt on the usefulness of SE in encrypted systems. Defenses against such leakage-abuse attacks typically require the use of Oblivious RAM or worst-case padding—such countermeasures are however quite impractical. In order to efficiently defend against leakage-abuse attacks on SE-based systems, we propose SEAL, a family of new SE schemes with adjustable leakage. In SEAL, the amount of privacy loss is expressed in leaked bits of search or access pattern and can be defined at setup. As our experiments show, when protecting only a few bits of leakage (e.g., three to four bits of access pattern), enough for existing and even new more aggressive attacks to fail, SEAL query execution time is within the realm of practical for real-world applications (a little over one order of magnitude slowdown compared to traditional SE-based encrypted databases). Thus, SEAL could comprise a promising approach to build efficient and robust encrypted databases.
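
    To make "privacy loss expressed in leaked bits" concrete, here is a toy dial that is not SEAL's construction (SEAL builds on adjustable ORAM and padding): records are grouped into $2^\alpha$ buckets and every read fetches a whole bucket, so an observer learns only the $\alpha$-bit bucket identifier per query, at the cost of reading $n/2^\alpha$ records. All names and parameters in the sketch are ours.

```python
# Toy illustration of "adjustable leakage", not SEAL's ADJ-ORAM construction:
# an observer of queries learns only the alpha high-order bits of each accessed
# index (the bucket id); shrinking alpha leaks fewer bits but reads more records.
# All names and parameters here are hypothetical.

def make_store(records, alpha):
    """Group records into 2**alpha buckets by the top alpha bits of their index."""
    n = len(records)
    buckets = [[] for _ in range(2 ** alpha)]
    for i, rec in enumerate(records):
        bucket_id = i * (2 ** alpha) // n          # top alpha bits of i (n a power of two)
        buckets[bucket_id].append((i, rec))
    return buckets

def read(buckets, i, n, alpha, leakage_log):
    """Fetch the whole bucket holding index i; only the bucket id is observable."""
    bucket_id = i * (2 ** alpha) // n
    leakage_log.append(bucket_id)                  # alpha bits leak per query
    for j, rec in buckets[bucket_id]:              # client filters locally
        if j == i:
            return rec

if __name__ == "__main__":
    n, alpha = 1024, 3                             # leak 3 bits of access pattern
    records = [f"record-{i}" for i in range(n)]
    buckets = make_store(records, alpha)
    leaked = []
    assert read(buckets, 700, n, alpha, leaked) == "record-700"
    print("observer saw only bucket ids:", leaked)
```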

  3. Efficient Constructions for Almost-everywhere Secure Computation 2020 Eurocrypt MPC eprint.iacr.org
    Siddhartha Jayanti, Srinivasan Raghuraman and Nikhil Vyas

    The importance of efficient MPC in today’s world needs no retelling. An obvious barebones requirement to execute protocols for MPC is the ability of parties to communicate with each other. Traditionally, we solve this problem by assuming that every pair of parties in the network share a dedicated secure link that enables reliable message transmission. This assumption is clearly impractical as the number of nodes in the network grows, as it has today. In their seminal work, Dwork, Peleg, Pippenger and Upfal introduced the notion of almost-everywhere secure primitives in an effort to model the reality of large scale global networks and study the impact of limited connectivity on the properties of fundamental fault-tolerant distributed tasks. In this model, the underlying communication network is sparse and hence some nodes may not even be in a position to participate in the protocol (all their neighbors may be corrupt, for instance). A protocol for almost-everywhere reliable message transmission, which would guarantee that a large subset of the network can transmit messages to each other reliably, implies a protocol for almost-everywhere agreement where nodes are required to agree on a value despite malicious or byzantine behavior of some subset of nodes, and an almost-everywhere agreement protocol implies a protocol for almost-everywhere secure MPC that is unconditionally or information-theoretically secure. The parameters of interest are the degree d of the network, the number t of corrupted nodes that can be tolerated and the number x of nodes that the protocol may give up. Prior work achieves d=O(1) for t=O(n/log n) and d=O(log^q n) for t=O(n) for some fixed constant q>1.


    In this work, we first derive message protocols which are efficient with respect to the total number of computations done across the network. We use this result to show an abundance of networks with d=O(1) that are resilient to t=O(n) random corruptions. This randomized result helps us build networks which are resistant to worst-case adversaries. In particular, we improve the state of the art in the almost everywhere reliable message transmission problem in the worst-case adversary model by showing the existence of an abundance of networks that satisfy d=O(log n) for t=O(n), thus making progress on this question after nearly a decade. Finally, we define a new adversarial model of corruptions that is suitable for networks shared amongst a large group of corporations that: (1) do not trust each other, and (2) may collude, and construct optimal networks achieving d=O(1) for t=O(n) in this model.

  4. Formalizing Data Deletion in the Context of the Right to be Forgotten 2020 Eurocrypt eprint.iacr.org
    Sanjam Garg, Shafi Goldwasser and Prashant Nalini Vasudevan

    The right of an individual to request the deletion of their personal data by an entity that might be storing it – referred to as the right to be forgotten – has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled – of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals’ data when a request is made of it to delete some of this data. Our framework captures several, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.

  5. Friet: An Authenticated Encryption Scheme with Built-in Fault Detection 2020 AuthenticatedEncryption Eurocrypt SymmetricKey eprint.iacr.org
    Thierry Simon, Lejla Batina, Joan Daemen, Vincent Grosso, Pedro Maat Costa Massolino, Kostas Papagiannopoulos, Francesco Regazzoni, and Niels Samwel

    In this work we present a duplex-based authenticated encryption scheme Friet based on a new permutation called Friet-P. We designed Friet-P with a novel approach for cryptographic permutations and block ciphers that takes fault-attack resistance into account and that we introduce in this paper. In this method, we build a permutation f_C to be embedded in a larger one, f. First, we define f as a sequence of steps that all abide by a chosen error-correcting code C, i.e., that map C-codewords to C-codewords. Then, we embed f_C in f by first encoding its input to an element of C, applying f and then decoding back from C. This last step detects a fault when the output of f is not in C. We motivate the design of the permutation we use in Friet and report on performance in software and hardware. We evaluate the fault-detection capabilities of the software and simulated hardware implementations with attacks. Finally, we perform a leakage evaluation. Our code is available at https://github.com/thisimon/Friet.git.
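
    The encode-apply-decode idea can be illustrated with a toy code-abiding permutation (our sketch with a simple parity limb, not the actual Friet-P permutation or its code): every step XORs the same correction into a data limb and into the parity limb, so honest computation preserves the invariant while a fault injected mid-computation breaks it and is caught at decoding.

```python
# Toy version of the embed-and-detect idea: the state carries a parity limb
# d = a ^ b ^ c (a simple error-detecting code C), every step of the larger
# permutation f preserves this invariant, and decoding flags a fault whenever
# the output is not a codeword. The round function below is made up for
# illustration; it is not the actual Friet-P permutation.

MASK = (1 << 32) - 1                     # work on 32-bit limbs

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def encode(a, b, c):
    return [a, b, c, a ^ b ^ c]          # append the parity limb

def round_code_preserving(s):
    a, b, c, d = s
    t = b & c                            # some nonlinear mixing value
    a ^= t                               # XOR t into a data limb...
    d ^= t                               # ...and into the parity limb too
    t2 = rotl(a, 5) ^ a                  # replacing a by rotl(a, 5) changes parity by t2
    a ^= t2                              # i.e. a = rotl(a, 5)
    d ^= t2
    return [a, b, c, d]                  # invariant d == a ^ b ^ c still holds

def decode(s):
    a, b, c, d = s
    if d != a ^ b ^ c:
        raise ValueError("fault detected: output is not a codeword")
    return a, b, c

if __name__ == "__main__":
    state = encode(0x01234567, 0x89ABCDEF, 0xDEADBEEF)
    for _ in range(8):
        state = round_code_preserving(state)
    print("healthy run:", [hex(x) for x in decode(state)])

    state = encode(0x01234567, 0x89ABCDEF, 0xDEADBEEF)
    for i in range(8):
        state = round_code_preserving(state)
        if i == 3:
            state[1] ^= 1 << 7           # inject a single-bit fault mid-computation
    try:
        decode(state)
    except ValueError as e:
        print(e)
```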

  6. Everybody's a Target: Scalability in Public-Key Encryption 2020 EllipticCurves Eurocrypt PKI PublicKeyEncryption eprint.iacr.org
    Benedikt Auerbach, Federico Giacon and Eike Kiltz

    For 1≤m≤n, we consider a natural m-out-of-n multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given n independent instances of PKE, wins if he breaks at least m out of the n instances. In this work, we are interested in the scaling factor of PKE schemes, SF, which measures how well the difficulty of breaking m out of the n instances scales in m. That is, a scaling factor SF=ℓ indicates that breaking m out of n instances is at least ℓ times more difficult than breaking one single instance. A PKE scheme with small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters).


    For Hashed ElGamal over elliptic curves, we use the generic group model to argue that the scaling factor depends on the scheme’s granularity. In low granularity, meaning each public key contains its independent group parameter, the scheme has optimal scaling factor SF=m; in medium and high granularity, meaning all public keys share the same group parameter, the scheme still has a reasonable scaling factor SF=√m. Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario.


    As our main technical contribution, we derive new generic-group lower bounds of Ω(√(mp)) on the difficulty of solving both the m-out-of-n Gap Discrete Logarithm and the m-out-of-n Gap Computational Diffie-Hellman problem over groups of prime order p, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.
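
    Restated in symbols (our paraphrase, not the paper's exact formalization): if breaking a single instance costs time roughly $T$, then a scaling factor of $\mathrm{SF}=\ell$ means that breaking any $m$ of the $n$ instances costs at least $\ell \cdot T$, and the results above can be summarized as
    \[ \mathrm{SF} = m \ \text{(low granularity)}, \qquad \mathrm{SF} = \sqrt{m} \ \text{(medium/high granularity)}, \]
    both backed by the generic-group lower bound of $\Omega(\sqrt{mp})$ for the $m$-out-of-$n$ Gap DL and Gap CDH problems in a group of prime order $p$.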

  7. Blind Schnorr Signatures in the Algebraic Group Model 2020 Eurocrypt Signatures eprint.iacr.org
    Georg Fuchsbauer, Antoine Plouviez and Yannick Seurin

    We study the security of schemes related to Schnorr signatures in the algebraic group model (AGM) proposed by Fuchsbauer, Kiltz, and Loss (CRYPTO 2018), where the adversary can only compute new group elements by applying the group operation. Schnorr signatures can be proved secure in the random oracle model (ROM) under the discrete logarithm assumption (DL) by rewinding the adversary, but this security proof is loose. We start by giving a tight security proof under DL in the combination of the AGM and the ROM. Our main focus is blind Schnorr signatures, whose only known security proof is in the combination of the ROM and the generic group model, under the assumption that the so-called ROS problem is hard. We show that in the AGM+ROM the scheme is secure assuming hardness of the one-more discrete logarithm problem and the ROS problem. As the latter can be solved in sub-exponential time using Wagner’s algorithm, this is not entirely satisfying. Hence, we then propose a very simple modification of the scheme (which leaves key generation and signature verification unchanged) and show that, instead of ROS, its security relies on the hardness of a related problem which appears much harder than ROS. Finally, we give a tight reduction of the CCA2 security of Schnorr-signed ElGamal key encapsulation to DL, again in the AGM+ROM.
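
    For reference, here is a minimal sketch of the (unmodified) blind Schnorr signing protocol discussed above, in one common presentation; the toy group, parameter sizes and helper names are ours and are far too small to be secure.

```python
# Minimal sketch of blind Schnorr signing over a toy Schnorr group.
# Parameters are deliberately tiny and the code is illustrative only.
import hashlib
import secrets

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def toy_group():
    """Find a small Schnorr group: primes q and p = 2q + 1, generator g of order q."""
    q = 1_000_003
    while not (is_prime(q) and is_prime(2 * q + 1)):
        q += 1
    p = 2 * q + 1
    g = pow(2, 2, p)                    # squaring lands in the order-q subgroup
    return p, q, g

def H(p, q, R, msg):
    h = hashlib.sha256(f"{p}|{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

def blind_schnorr_demo(msg):
    p, q, g = toy_group()
    x = secrets.randbelow(q)            # signer's secret key
    y = pow(g, x, p)                    # public key

    r = secrets.randbelow(q)            # signer -> user: commitment R = g^r
    R = pow(g, r, p)

    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)   # user blinds
    R_prime = R * pow(g, alpha, p) * pow(y, beta, p) % p
    c_prime = H(p, q, R_prime, msg)
    c = (c_prime + beta) % q            # blinded challenge, sent to signer

    s = (r + c * x) % q                 # signer -> user: response
    s_prime = (s + alpha) % q           # user unblinds

    # anyone can verify (R', s') on msg under public key y
    assert pow(g, s_prime, p) == R_prime * pow(y, c_prime, p) % p
    return R_prime, s_prime

if __name__ == "__main__":
    blind_schnorr_demo("pay 1 coin to alice")
    print("blind Schnorr signature verified")
```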

  8. Compact NIZKs from Standard Assumptions on Bilinear Maps 2020 Eurocrypt NIZK eprint.iacr.org
    Shuichi Katsumata, Ryo Nishimaki, Shota Yamada and Takashi Yamakawa

    A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J.ACM’12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by O(|C|λ), where C is the circuit for the NP relation and λ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static q-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem.


    Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in |C|, that is, the proof size only grows by |C|+poly(λ), based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size.


    As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size |w|⋅poly(λ) where |w| is the witness length based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of |C| based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is |w|⋅poly(λ) from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO’19), which gave a similar result based on a non-static q-type assumption.


    The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.

  9. Private Aggregation from Fewer Anonymous Messages 2020 DifferentialPrivacy Eurocrypt Privacy arxiv.org
    Badih Ghazi, Pasin Manurangsi, Rasmus Pagh, Ameya Velingker

    Consider the setup where $n$ parties are each given a number $x_i \in \mathbb{F}_q$ and the goal is to compute the sum $\sum_i x_i$ in a secure fashion and with as little communication as possible. We study this problem in the anonymized model of Ishai et al. (FOCS 2006) where each party may broadcast anonymous messages on an insecure channel.
    We present a new analysis of the one-round “split and mix” protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a $\Theta(\log n)$ multiplicative factor. We complement our positive result with lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight.
    Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a constant number of messages per party. Previously, such a protocol was known only for $\Omega(\log n)$ messages per party.
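
    The "split and mix" protocol itself is simple enough to sketch: each party splits its input into $s$ additive shares modulo $q$ and submits each share as a separate anonymous message, and the analyzer just sums everything it receives. How large $s$ must be for security is exactly what the paper's analysis improves; the sketch below (names and the value of $s$ are ours) only demonstrates correctness.

```python
# Toy sketch of the one-round "split and mix" protocol: each party additively
# shares its input into s anonymous messages; the sum of all delivered messages
# equals the sum of all inputs mod Q. The choice of s below is arbitrary and
# carries no security claim; all names are ours.
import random
import secrets

Q = 2**61 - 1                    # the field F_q (here a prime modulus)

def split(x, s):
    """Split x into s random shares that sum to x mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(s - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def anonymous_channel(all_messages):
    """The anonymizer: delivers messages stripped of sender identity, shuffled."""
    mixed = list(all_messages)
    random.shuffle(mixed)
    return mixed

def aggregate(messages):
    return sum(messages) % Q

if __name__ == "__main__":
    inputs = [17, 42, 5, 99, 1000]          # one private value per party
    s = 8                                    # messages per party (illustrative)
    messages = [m for x in inputs for m in split(x, s)]
    received = anonymous_channel(messages)
    assert aggregate(received) == sum(inputs) % Q
    print("aggregate:", aggregate(received))
```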

  10. Implementing Grover oracles for quantum key search on AES and LowMC 2020 Attacks Eurocrypt QuantumAttacks SymmetricKey eprint.iacr.org
    Samuel Jaques, Michael Naehrig, Martin Roetteler and Fernando Virdia

    Grover’s search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext-ciphertext pairs. This attack uses O(√N) calls to the cipher to search a key space of size N. Previous work in the specific case of AES derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. In contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if it requires more qubits. As cases in point, we design quantum circuits for the block ciphers AES and LowMC. Our circuits give a lower overall attack cost in both the gate count and depth-times-width cost models. In NIST’s post-quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against AES. We present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post-quantum cryptography. As part of this work, we release Q# implementations of the full Grover oracle for AES-128, -192, -256 and for the three LowMC instantiations used in Picnic, including unit tests and code to reproduce our quantum resource estimates. To the best of our knowledge, these are the first two such full implementations and automatic resource estimations.
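
    As a back-of-the-envelope reading of the $O(\sqrt{N})$ bound above (our arithmetic, not the paper's detailed gate and depth estimates): Grover's algorithm uses about $\frac{\pi}{4}\sqrt{N}$ oracle calls, so key search on a $k$-bit key needs roughly
    \[ \frac{\pi}{4} \cdot 2^{k/2} \ \text{iterations}, \qquad \text{e.g. } \frac{\pi}{4} \cdot 2^{64} \approx 2^{63.65} \ \text{for AES-128}, \]
    where each iteration evaluates a reversible circuit for the cipher on the handful of known plaintext-ciphertext pairs; the paper's contribution is making that per-iteration circuit cheap under a depth budget.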

  11. Private Information Retrieval with Sublinear Online Time 2020 Eurocrypt PIR eprint.iacr.org
    Henry Corrigan-Gibbs and Dmitry Kogan

    We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client’s query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that, in this model, our protocols are optimal in terms of the trade-off they achieve between communication and running time.
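
    The offline/online structure can be sketched in a few lines. The toy version below is ours and keeps the hint sets in the clear, skips the hint refresh and the puncturable-set compression that make the real protocols private and sublinear, and uses hypothetical names throughout; it only shows why one stored (set, parity) hint plus one punctured-set query suffices to recover a bit.

```python
# Toy two-server offline/online PIR skeleton: the client fetches parities of
# random sets offline, then answers an online query from one stored hint plus a
# single punctured-set query to the second server. Correctness only; this omits
# everything that makes the real protocols private and fast.
import secrets

def xor_parity(db, subset):
    p = 0
    for j in subset:
        p ^= db[j]
    return p

def offline_phase(db, num_hints):
    """Client samples random sets; server 1 returns the parity of each."""
    n = len(db)
    hints = []
    for _ in range(num_hints):
        s = {j for j in range(n) if secrets.randbits(1)}   # dense random set
        hints.append((s, xor_parity(db, s)))               # (set, parity) hint
    return hints

def online_query(db, hints, i):
    """Client recovers db[i] using one hint containing i and one server-2 query."""
    for s, parity in hints:
        if i in s:
            punctured = s - {i}                 # sent to server 2
            answer = xor_parity(db, punctured)  # server 2's reply
            return parity ^ answer              # equals db[i]
    raise RuntimeError("no stored hint contains i; refresh hints")

if __name__ == "__main__":
    db = [secrets.randbits(1) for _ in range(1 << 10)]      # 1024-bit database
    hints = offline_phase(db, num_hints=64)                 # offline, before i is known
    i = 777
    assert online_query(db, hints, i) == db[i]
    print("recovered db[%d] = %d" % (i, db[i]))
```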

  12. SmartVerif: Push the Limit of Automation Capability of Verifying Security Protocols by Dynamic Strategies 2020 FormalVerification Usenix usenix.org
    Yan Xiong, Cheng Su, Wenchao Huang, Fuyou Miao, Wansen Wang, and Hengyi Ouyang

    Current formal approaches have been successfully used to find design flaws in many security protocols. However, it is still challenging to automatically analyze protocols due to their large or infinite state spaces. In this paper, we propose SmartVerif, a novel and general framework that pushes the limit of automation capability of state-of-the-art verification approaches. The primary technical contribution is the dynamic strategy inside SmartVerif, which can be used to smartly search proof paths. Different from the non-trivial and error-prone design of existing static strategies, the design of our dynamic strategy is simple and flexible: it can automatically optimize itself according to the security protocols without any human intervention. With the optimized strategy, SmartVerif can localize and prove supporting lemmata, which leads to higher probability of success in verification. The insight of designing the strategy is that the node representing a supporting lemma is on an incorrect proof path with lower probability, when a random strategy is given. Hence, we implement the strategy around the insight by introducing a reinforcement learning algorithm. We also propose several methods to deal with other technical problems in implementing SmartVerif. Experimental results show that SmartVerif can automatically verify all security protocols studied in this paper. The case studies also validate the efficiency of our dynamic strategy.

  13. Big Numbers - Big Troubles: Systematically Analyzing Nonce Leakage in (EC)DSA Implementations 2020 Attacks Implementation SideChannels Signatures Usenix usenix.org
    Samuel Weiser, David Schrammel, Lukas Bodner, and Raphael Spreitzer

    Side-channel attacks exploiting (EC)DSA nonce leakage easily lead to full key recovery. Although (EC)DSA implementations have already been hardened against side-channel leakage using the constant-time paradigm, the long-standing cat-and-mouse-game of attacks and patches continues. In particular, current code review is prone to miss less obvious side channels hidden deeply in the call stack. To solve this problem, a systematic study of nonce leakage is necessary. We present a systematic analysis of nonce leakage in cryptographic implementations. In particular, we expand DATA, an open-source side-channel analysis framework, to detect nonce leakage. Our analysis identified multiple unknown nonce leakage vulnerabilities across all essential computation steps involving nonces. Among others, we uncover inherent problems in Bignumber implementations that break claimed constant-time guarantees of (EC)DSA implementations if secrets are close to a word boundary. We found that lazy resizing of Bignumbers in OpenSSL and LibreSSL yields a highly accurate and easily exploitable side channel, which has been acknowledged with two CVEs. Surprisingly, we also found a tiny but expressive leakage in the constant-time scalar multiplication of OpenSSL and BoringSSL. Moreover, in the process of reporting and patching, we identified newly introduced leakage with the support of our tool, thus preventing another attack-patch cycle. We open-source our tool, together with an intuitive graphical user interface we developed.

  14. TPM-Fail: TPM meets Timing and Lattice Attacks 2020 Hardware Lattices SideChannels Signatures Usenix tpm.fail
    Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger

    Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and EC Schnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection. The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.

  15. Secure parallel computation on national scale volumes of data 2020 DifferentialPrivacy MPC Usenix usenix.org
    Sahar Mazloom, Phi Hung Le, Samuel Ranellucci, and Dov Gordon

    We revisit the problem of performing secure computation of graph-parallel algorithms, focusing on the applications of securely outsourcing matrix factorization, and histograms. Leveraging recent results in low-communication secure multi-party computation, and a security relaxation that allows the computation servers to learn some differentially private leakage about user inputs, we construct a new protocol that reduces overall runtime by 320X, reduces the number of AES calls by 750X, and reduces the total communication by 200X. Our system can securely compute histograms over 300 million items in about 4 minutes, and it can perform sparse matrix factorization, which is commonly used in recommendation systems, on 20 million records in about 6 minutes. Furthermore, in contrast to prior work, our system is secure against a malicious adversary that corrupts one of the computing servers.

  16. Pixel: Multi-signatures for Consensus 2020 Blockchains CryptocurrencyScaling ProofOfStake Signatures Usenix eprint.iacr.org
    Manu Drijvers, Sergey Gorbunov, Gregory Neven, and Hoeteck Wee

    In Proof-of-Stake (PoS) and permissioned blockchains, a committee of verifiers agrees on and signs every new block of transactions. These blocks are validated, propagated, and stored by all users in the network. However, posterior corruptions pose a common threat to these designs, because the adversary can corrupt committee verifiers after they certified a block and use their signing keys to certify a different block. Designing efficient and secure digital signatures for use in PoS blockchains can substantially reduce bandwidth, storage and computing requirements from nodes, thereby enabling more efficient applications.


    We present Pixel, a pairing-based forward-secure multi-signature scheme optimized for use in blockchains that achieves substantial savings in bandwidth, storage requirements, and verification effort. Pixel signatures consist of two group elements, regardless of the number of signers, can be verified using three pairings and one exponentiation, and support non-interactive aggregation of individual signatures into a multi-signature. Pixel signatures are also forward-secure and let signers evolve their keys over time, such that new keys cannot be used to sign on old blocks, protecting against posterior corruption attacks on blockchains. We show how to integrate Pixel into any PoS blockchain. Next, we evaluate Pixel in a real-world PoS blockchain implementation, showing that it yields notable savings in storage, bandwidth, and block verification time. In particular, Pixel signatures reduce the size of blocks with 1500 transactions by 35% and reduce block verification time by 38%.

  17. McTiny: Fast High-Confidence Post-Quantum Key Erasure for Tiny Network Servers 2020 PQC Storage Usenix eprint.iacr.org
    Daniel J. Bernstein

    Recent results have shown that some post-quantum cryptographic systems have encryption and decryption performance comparable to fast elliptic-curve cryptography (ECC) or even better. However, this performance metric considers only CPU time and ignores bandwidth and storage. High-confidence post-quantum encryption systems have much larger keys than ECC. For example, the code-based cryptosystem recommended by the PQCRYPTO project uses public keys of 1MB.


    Fast key erasure (to provide “forward secrecy”) requires new public keys to be constantly transmitted. Either the server needs to constantly generate, store, and transmit large keys, or it needs to receive, store, and use large keys from the clients. This is not necessarily a problem for overall bandwidth, but it is a problem for storage and computation time on tiny network servers. All straightforward approaches allow easy denial-of-service attacks.


    This paper describes a protocol, suitable for today’s networks and tiny servers, in which clients transmit their code-based one-time public keys to servers. Servers never store full client public keys but work on parts provided by the clients, without having to maintain any per-client state. Intermediate results are stored on the client side in the form of encrypted cookies and are eventually combined by the server to obtain the ciphertext. Requirements on the server side are very small: storage of one long-term private key, which is much smaller than a public key, and a few small symmetric cookie keys, which are updated regularly and erased after use. The protocol is highly parallel, requiring only a few round trips, and involves total bandwidth not much larger than a single public key. The total number of packets sent by each side is 971, each fitting into one IPv6 packet of less than 1280 bytes.


    The protocol makes use of the structure of encryption in code-based cryptography and benefits from small ciphertexts in code-based cryptography.
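
    As a rough consistency check on these numbers (our arithmetic, not taken from the paper): with at most 1280 bytes per IPv6 packet, of which roughly 1.1 KB remains for key material after headers and the protocol's own per-packet data, a public key of about 1 MB splits into on the order of
    \[ \frac{10^6 \ \text{bytes per key}}{\approx 1.1 \times 10^3 \ \text{bytes of payload per packet}} \approx 900 \ \text{packets}, \]
    which matches the order of magnitude of the 971 packets per side reported above.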

  18. Security under Message-Derived Keys: Signcryption in iMessage 2020 AuthenticatedEncryption Eurocrypt PublicKeyEncryption SecureMessaging SymmetricKey eprint.iacr.org
    Mihir Bellare and Igors Stepanovs

    At the core of Apple’s iMessage is a signcryption scheme that involves symmetric encryption of a message under a key that is derived from the message itself. This motivates us to formalize a primitive we call Encryption under Message-Derived Keys (EMDK). We prove security of the EMDK scheme underlying iMessage. We use this to prove security of the signcryption scheme itself, with respect to definitions of signcryption we give that enhance prior ones to cover issues peculiar to messaging protocols. Our provable-security results are quantitative, and we discuss the practical implications for iMessage.
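
    The EMDK syntax can be illustrated with a toy sketch (ours, not Apple's iMessage construction, and with no security claim): the symmetric key is derived by hashing the message together with fresh randomness, and in the full signcryption the pair (randomness, key) would additionally be signed and public-key encrypted to the recipient. The sketch needs the third-party `cryptography` package; the function names are ours.

```python
# Toy illustration of the EMDK syntax only: the symmetric key is derived from
# the message itself (plus fresh randomness). This is NOT Apple's iMessage
# scheme; in the full signcryption the pair (r, key) would additionally be
# signed and public-key encrypted to the recipient.
# Requires the third-party `cryptography` package.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def emdk_encrypt(message: bytes):
    r = os.urandom(16)                                   # fresh per-message randomness
    key = hashlib.sha256(r + message).digest()[:16]      # message-derived key
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, message, None)
    return (r, key), (nonce, ct)                         # (to be wrapped), ciphertext

def emdk_decrypt(key_material, ciphertext):
    (r, key), (nonce, ct) = key_material, ciphertext
    msg = AESGCM(key).decrypt(nonce, ct, None)
    if hashlib.sha256(r + msg).digest()[:16] != key:     # key must match the message
        raise ValueError("key is not derived from the recovered message")
    return msg

if __name__ == "__main__":
    km, ct = emdk_encrypt(b"hello from a toy EMDK")
    print(emdk_decrypt(km, ct))
```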

  19. PSI from PaXoS: Fast, Malicious Private Set Intersection 2020 Eurocrypt PSI eprint.iacr.org
    Benny Pinkas, Mike Rosulek, Ni Trieu and Avishay Yanai

    We present a 2-party private set intersection (PSI) protocol which provides security against malicious participants, yet is almost as fast as the fastest known semi-honest PSI protocol of Kolesnikov et al. (CCS 2016).


    Our protocol is based on a new approach for two-party PSI, which can be instantiated to provide security against either malicious or semi-honest adversaries. The protocol is unique in that the only difference between the semi-honest and malicious versions is an instantiation with different parameters for a linear error-correction code. It is also the first PSI protocol which is concretely efficient while having linear communication and security against malicious adversaries, running in the OT-hybrid model (assuming a non-programmable random oracle).


    State of the art semi-honest PSI protocols take advantage of cuckoo hashing, but it has proven a challenge to use cuckoo hashing for malicious security. Our protocol is the first to use cuckoo hashing for malicious-secure PSI. We do so via a new data structure, called a probe-and-XOR of strings (PaXoS), which may be of independent interest. This abstraction captures important properties of previous data structures, most notably garbled Bloom filters. While an encoding by a garbled Bloom filter is larger by a factor of $O(\lambda)$ than the original data, we describe a significantly improved PaXoS based on cuckoo hashing that achieves constant rate while being no worse in other relevant efficiency measures.
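
    The PaXoS abstraction itself is easy to demonstrate with a deliberately inefficient instantiation (ours, for illustration only): probe vectors are dense pseudorandom bitmasks and encoding solves the resulting GF(2) linear system by Gaussian elimination, whereas the paper's cuckoo-hashing-based construction is what achieves constant rate with fast encoding. All names and parameters below are ours.

```python
# Toy "probe-and-XOR of strings" (PaXoS): encode key->value pairs into a vector
# D of m values such that value(k) = XOR of the positions of D selected by a
# hash-derived probe vector v(k). Dense random probes plus Gaussian elimination
# over GF(2) are correct but slow; this is not the paper's construction.
import hashlib

def probe(seed: bytes, key: bytes, m: int) -> int:
    """Pseudorandom m-bit probe vector for `key`, returned as an int bitmask."""
    digest = hashlib.shake_256(seed + b"|" + key).digest((m + 7) // 8)
    return int.from_bytes(digest, "big") & ((1 << m) - 1)

def encode(pairs, seed, m):
    """Solve for D (length m) with XOR_{j in v(k)} D[j] == value for every pair."""
    pivots = {}                                  # pivot position -> (row mask, row value)
    for key, value in pairs:
        mask, val = probe(seed, key, m), value
        while mask:
            p = mask.bit_length() - 1            # highest set position
            if p not in pivots:
                pivots[p] = (mask, val)
                break
            pmask, pval = pivots[p]
            mask ^= pmask                        # eliminate using the existing pivot
            val ^= pval
        else:
            if val != 0:
                raise ValueError("unsolvable; retry with a new seed or larger m")
    D = [0] * m                                  # free positions stay 0
    for p in sorted(pivots):                     # back-substitute, low positions first
        pmask, pval = pivots[p]
        acc, rest = pval, pmask ^ (1 << p)
        while rest:
            j = rest.bit_length() - 1
            acc ^= D[j]
            rest ^= 1 << j
        D[p] = acc
    return D

def decode(D, seed, key, m):
    mask, acc = probe(seed, key, m), 0
    while mask:
        j = mask.bit_length() - 1
        acc ^= D[j]
        mask ^= 1 << j
    return acc

if __name__ == "__main__":
    pairs = [(f"key{i}".encode(), i * i + 7) for i in range(200)]
    m = len(pairs) + 40                          # slack so the system is solvable w.h.p.
    seed = b"paxos-demo-seed"
    D = encode(pairs, seed, m)
    assert all(decode(D, seed, k, m) == v for k, v in pairs)
    print("all", len(pairs), "values recovered by probe-and-XOR lookups")
```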

  20. Stacked Garbling for Disjunctive Zero-Knowledge Proofs 2020 Eurocrypt GarbledCircuits ZK eprint.iacr.org
    David Heath and Vladimir Kolesnikov

    Zero-knowledge (ZK) proofs (ZKP) have received wide attention, focusing on non-interactivity, short proof size, and fast verification time. We focus on the fastest total proof time, in particular for large Boolean circuits. Under this metric, Garbled Circuit (GC)-based ZKP (Jawurek et al., [JKO], CCS 2013) remained the state-of-the-art technique due to the low-constant linear scaling of computing the garbling. We improve GC-ZKP for proof statements with conditional clauses. Our communication is proportional to the longest branch rather than to the entire proof statement. This is most useful when the number m of branches is large, resulting in up to factor $m\times$ improvement over JKO. In our proof-of-concept illustrative application, prover P demonstrates knowledge of a bug in a codebase consisting of any number of snippets of actual C code. Our computation cost is linear in the size of the codebase and communication is constant in the number of snippets. That is, we require only enough communication for a single largest snippet! Our conceptual contribution is stacked garbling for ZK, a privacy-free circuit garbling scheme that can be used with the JKO GC-ZKP protocol to construct more efficient ZKP. Given a Boolean circuit C and computational security parameter $\kappa$, our garbling is $L \cdot \kappa$ bits long, where $L$ is the length of the longest execution path in C. All prior concretely efficient garbling schemes produce garblings of size $|C| \cdot \kappa$. The computational cost of our scheme is not increased over prior state-of-the-art. We implement our GC-ZKP and demonstrate significantly improved ($m\times$ over JKO) ZK performance for functions with branching factor $m$. Compared with recent ZKP (STARK, Libra, KKW, Ligero, Aurora, Bulletproofs), our scheme offers much better proof times for larger circuits ($35-1000\times$ or more, depending on circuit size and compared scheme). For our illustrative application, we consider four C code snippets, each of about 30-50 LOC; one snippet allows an invalid memory dereference. The entire proof takes 0.15 seconds and communication is 1.5 MB.

  21. The State of the Uniform: Attacks on Encrypted Databases Beyond the Uniform Query Distribution 2020 Attacks EncryptedDatabases Oakland SearchableEncryption eprint.iacr.org
    Evgenios M. Kornaropoulos, Charalampos Papamanthou and Roberto Tamassia

    Recent foundational work on leakage-abuse attacks on encrypted databases has broadened our understanding of what an adversary can accomplish with a standard leakage profile. Nevertheless, all known value reconstruction attacks succeed under strong assumptions that may not hold in the real world. The most prevalent assumption is that queries are issued uniformly at random by the client. We present the first value reconstruction attacks that succeed without any knowledge about the query or data distribution. Our approach uses the search-pattern leakage, which exists in all known structured encryption schemes but has not been fully exploited so far. At the core of our method lies a support size estimator, a technique that utilizes the repetition of search tokens with the same response to estimate distances between encrypted values without any assumptions about the underlying distribution. We develop distribution-agnostic reconstruction attacks for both range queries and k-nearest neighbor (k-NN) queries based on information extracted from the search-pattern leakage. Our new range attack follows a different algorithmic approach than state-of-the-art attacks, which are fine-tuned to succeed under the uniformly distributed queries. Instead, we reconstruct plaintext values under a variety of skewed query distributions and even outperform the accuracy of previous approaches under the uniform query distribution. Our new k-NN attack succeeds with far fewer samples than previous attacks and scales to much larger values of k. We demonstrate the effectiveness of our attacks by experimentally testing them on a wide range of query distributions and database densities, both unknown to the adversary.

  22. Protecting the 4G and 5G Cellular Paging Protocols against Security and Privacy Attacks 2020 CellularProtocols PETS petsymposium.org
    Ankush Singla, Syed Rafiul Hussain, Omar Chowdhury, Elisa Bertino, Ninghui Li

    This paper focuses on protecting the cellular paging protocol — which balances between the quality-of-service and battery consumption of a device — against security and privacy attacks. Attacks against this protocol can have severe repercussions, for instance, allowing an attacker to infer a victim’s location, leak a victim’s IMSI, and inject fabricated emergency alerts. To secure the protocol, we first identify the underlying design weaknesses enabling such attacks and then propose efficient and backward-compatible approaches to address these weaknesses. We also demonstrate the deployment feasibility of our enhanced paging protocol by implementing it on an open-source cellular protocol library and commodity hardware. Our evaluation demonstrates that the enhanced protocol can thwart attacks without incurring substantial overhead.

  23. Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocols 2020 Bluetooth PETS WirelessProtocols petsymposium.org
    Guillaume Celosia, Mathieu Cunche

    Apple Continuity protocols are the underlying network component of Apple Continuity services which allow seamless nearby applications such as activity and file transfer, device pairing and sharing a network connection. Those protocols rely on Bluetooth Low Energy (BLE) to exchange information between devices: Apple Continuity messages are embedded in the payload of BLE advertisement packets that are periodically broadcasted by devices. Recently, Martin et al. identified [1] a number of privacy issues associated with Apple Continuity protocols; we show that this was just the tip of the iceberg and that Apple Continuity protocols leak a wide range of personal information. In this work, we present a thorough reverse engineering of Apple Continuity protocols that we use to uncover a collection of privacy leaks. We introduce new artifacts, including identifiers, counters and battery levels, that can be used for passive tracking, and describe a novel active tracking attack based on Handoff messages. Beyond tracking issues, we shed light on severe privacy flaws. First, in addition to the trivial exposure of device characteristics and status, we found that HomeKit accessories betray human activities in a smarthome. Then, we demonstrate that AirDrop and Nearby Action protocols can be leveraged by passive observers to recover e-mail addresses and phone numbers of users. Finally, we exploit passive observations on the advertising traffic to infer Siri voice commands of a user.

  24. Computation on Encrypted Data using Dataflow Authentication 2020 AuthenticatedEncryption PETS petsymposium.org
    Andreas Fischer, Benny Fuhry, Florian Kerschbaum, and Eric Bodden

    Encrypting data before sending it to the cloud protects it against hackers and malicious insiders, but requires the cloud to compute on encrypted data. Trusted (hardware) modules, e.g., secure enclaves like Intel’s SGX, can very efficiently run entire programs in encrypted memory. However, it has already been demonstrated that software vulnerabilities give an attacker ample opportunity to insert arbitrary code into the program. This code can then modify the data flow of the program and leak any secret in the program to an observer in the cloud via SGX side-channels. Since any larger program is rife with software vulnerabilities, it is not a good idea to outsource entire programs to an SGX enclave. A secure alternative with a small trusted code base would be fully homomorphic encryption (FHE) – the holy grail of encrypted computation. However, due to its high computational complexity it is unlikely to be adopted in the near future. As a result, researchers have made several proposals for transforming programs to perform encrypted computations on less powerful encryption schemes. Yet, current approaches fail on programs that make control-flow decisions based on encrypted data. In this paper, we introduce the concept of data flow authentication (DFAuth). DFAuth prevents an adversary from arbitrarily deviating from the data flow of a program. Hence, an attacker cannot perform an attack as outlined before on SGX. This enables all programs, even those including operations on control-flow decision variables, to be computed on encrypted data. We implemented DFAuth using a novel authenticated homomorphic encryption scheme, a Java bytecode-to-bytecode compiler producing fully executable programs, and SGX enclaves. A transformed neural network that performs machine learning on sensitive medical data can be evaluated on encrypted inputs and encrypted weights in 0.86 seconds.

  25. HybCache: Hybrid Side-Channel-Resilient Caches for Trusted Execution Environments 2020 IntelSGX SideChannels Usenix usenix.org
    Ghada Dessouky, Tommaso Frassetto, and Ahmad-Reza Sadeghi

    Modern multi-core processors share cache resources for maximum cache utilization and performance gains. However, this leaves the cache vulnerable to side-channel attacks, where inherent timing differences in shared cache behavior are exploited to infer information on the victim’s execution patterns, ultimately leaking private information such as a secret key. The root cause for these attacks is mutually distrusting processes sharing the cache entries and accessing them in a deterministic and consistent manner. Various defenses against cache side-channel attacks have been proposed. However, they suffer from serious shortcomings: they either degrade performance significantly, impose impractical restrictions, or can only defeat certain classes of these attacks. More importantly, they assume that side-channel-resilient caches are required for the entire execution workload and do not allow the possibility to selectively enable the mitigation only for the security-critical portion of the workload.


    We present a generic mechanism for a flexible and soft partitioning of set-associative caches and propose a hybrid cache architecture, called HybCache. HybCache can be configured to selectively apply side-channel-resilient cache behavior only for isolated execution domains, while providing the non-isolated execution with conventional cache behavior, capacity and performance. An isolation domain can include one or more processes, specific portions of code, or a Trusted Execution Environment (e.g., SGX or TrustZone). We show that, with minimal hardware modifications and kernel support, HybCache can provide a side-channel-resilient cache only for isolated execution with a performance overhead of 3.5–5%, while incurring no performance overhead for the remaining execution workload. We provide a simulator-based and hardware implementation of HybCache to evaluate the performance and area overheads, and show how HybCache mitigates typical access-based and contention-based cache attacks.