Papers tagged as Censorship
  1. Proof-of-Censorship: Enabling centralized censorship-resistant content providers 2018 Censorship FinancialCryptography Privacy fc18.ifca.ai
    Ian Martiny, Ian Miers, and Eric Wustrow

    Content providers often face legal or economic pressures to censor or remove objectionable or infringing content they host. While decentralized providers can enable censorship-resistant storage, centralized content providers remain popular for performance and usability reasons. But centralized content providers can always choose not to respond to requests for a specific file, making it difficult to prevent censorship. If it is not possible to prevent, is it possible to detect and punish censorship on a centralized service?


    A natural approach is to periodically audit the service provider by downloading the file. However, failure to download a file is not a proof of censorship. First, the provider could claim benign failure. Second, the proof is non-transferable: verifying censorship requires third parties to individually request the censored file. Moreover, a content provider may only selectively deny access to particular users or only for a short time frame. As such, checking by downloading does not work even for third parties who are online and willing to make queries.


    In this paper, we introduce proof of censorship, whereby a content provider cannot delete or otherwise selectively remove content from their service without creating transferable cryptographic proof of their misdeed. Even if the provider restores the file at a later date, the proof remains valid, allowing the reputation of a content provider’s commitment to censorship resistance to be based on the existence (or absence) of such proofs.
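

    The paper's actual protocol is more involved than this, but the core idea of transferable, offline-verifiable evidence can be illustrated as follows (the message formats and use of plain signatures here are hypothetical, not the paper's construction): the provider signs a receipt when it accepts a file and signs every retrieval response, so a signed receipt together with a signed denial for the same content hash is evidence any third party can check without ever querying the provider.

    ```go
    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    // Hypothetical message formats: the provider signs a receipt when it
    // accepts a file and signs every response to a retrieval request.
    func receiptMsg(contentHash [32]byte) []byte {
        return append([]byte("stored:"), contentHash[:]...)
    }

    func denialMsg(contentHash [32]byte) []byte {
        return append([]byte("not-found:"), contentHash[:]...)
    }

    // provesCensorship verifies both signatures against the provider's
    // long-term public key; no interaction with the provider is needed.
    func provesCensorship(pub ed25519.PublicKey, contentHash [32]byte, receiptSig, denialSig []byte) bool {
        return ed25519.Verify(pub, receiptMsg(contentHash), receiptSig) &&
            ed25519.Verify(pub, denialMsg(contentHash), denialSig)
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(rand.Reader)
        content := []byte("some hosted file")
        h := sha256.Sum256(content)

        // The provider commits to storing the file...
        receipt := ed25519.Sign(priv, receiptMsg(h))
        // ...but later signs a "not found" response for the same content.
        denial := ed25519.Sign(priv, denialMsg(h))

        fmt.Println("transferable proof of censorship:", provesCensorship(pub, h, receipt, denial))
    }
    ```

    Because both pieces of evidence verify against the provider's long-term key, such a proof stays meaningful even if the file is quietly restored later.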

  2. BurnBox: Self-Revocable Encryption in a World Of Compelled Access 2018 Censorship Surveillance Usenix usenix.org
    Nirvan Tyagi, Muhammad Haris Mughees, Thomas Ristenpart, and Ian Miers

    Dissidents, journalists, and others require technical means to protect their privacy in the face of compelled access to their digital devices (smartphones, laptops, tablets, etc.). For example, authorities increasingly force disclosure of all secrets, including passwords, to search devices upon national border crossings. We therefore present the design, implementation, and evaluation of a new system to help victims of compelled searches. Our system, called BurnBox, provides self-revocable encryption: the user can temporarily disable their access to specific files stored remotely, without revealing which files were revoked during compelled searches, even if the adversary also compromises the cloud storage service. They can later restore access. We formalize the threat model and provide a construction that uses an erasable index, secure erasure of keys, and standard cryptographic tools in order to provide security supported by our formal analysis. We report on a prototype implementation, which showcases the practicality of BurnBox.
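

    As a rough sketch of the self-revocation idea (an illustrative construction, not BurnBox's actual design, and it omits how a restore copy of the key is kept out of the adversary's reach), each remotely stored file is encrypted under its own key held in an erasable local index; revoking access simply erases that key, so neither a compelled search of the device nor a compromise of the cloud copy yields the plaintext.

    ```go
    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    // vault is an illustrative erasable index: it maps a filename to the only
    // local copy of that file's key. Deleting the entry revokes the user's own
    // ability to decrypt, even under compelled access to the device.
    type vault struct {
        keys map[string][]byte
    }

    func (v *vault) encrypt(name string, plaintext []byte) []byte {
        key := make([]byte, 32)
        rand.Read(key)
        v.keys[name] = key

        block, _ := aes.NewCipher(key)
        gcm, _ := cipher.NewGCM(block)
        nonce := make([]byte, gcm.NonceSize())
        rand.Read(nonce)
        // Ciphertext is stored remotely; only the key stays in the local index.
        return gcm.Seal(nonce, nonce, plaintext, nil)
    }

    func (v *vault) decrypt(name string, ct []byte) ([]byte, error) {
        key, ok := v.keys[name]
        if !ok {
            return nil, fmt.Errorf("access to %q is revoked", name)
        }
        block, _ := aes.NewCipher(key)
        gcm, _ := cipher.NewGCM(block)
        nonce, body := ct[:gcm.NonceSize()], ct[gcm.NonceSize():]
        return gcm.Open(nil, nonce, body, nil)
    }

    // revoke securely erases the local key; the remote ciphertext becomes
    // unreadable to the user until access is restored from a surviving copy
    // of the key held outside the adversary's reach.
    func (v *vault) revoke(name string) {
        if key, ok := v.keys[name]; ok {
            for i := range key {
                key[i] = 0
            }
            delete(v.keys, name)
        }
    }

    func main() {
        v := &vault{keys: map[string][]byte{}}
        ct := v.encrypt("sources.txt", []byte("sensitive notes"))

        v.revoke("sources.txt")
        if _, err := v.decrypt("sources.txt", ct); err != nil {
            fmt.Println(err) // access to "sources.txt" is revoked
        }
    }
    ```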

  3. The use of TLS in Censorship Circumvention 2019 Censorship Measurement NDSS TLS ndss-symposium.org
    Sergey Frolov and Eric Wustrow

    TLS, the Transport Layer Security protocol, has quickly become the most popular protocol on the Internet, already used to load over 70% of web pages in Mozilla Firefox. Due to its ubiquity, TLS is also a popular protocol for censorship circumvention tools, including Tor and Signal, among others.


    However, the wide range of features supported in TLS makes it possible to distinguish implementations from one another by what set of cipher suites, elliptic curves, signature algorithms, and other extensions they support. Already, censors have used deep packet inspection (DPI) to identify and block popular circumvention tools based on the fingerprint of their TLS implementation.
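

    To make the fingerprinting idea concrete, here is a minimal sketch (not taken from the paper) that hashes the ClientHello parameters a passive observer can extract, such as the advertised cipher suites, extensions, and supported curves, into a single identifier, similar in spirit to JA3-style fingerprints; all constants below are illustrative.

    ```go
    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // clientHello holds the ClientHello fields visible to a passive observer.
    type clientHello struct {
        Version      uint16
        CipherSuites []uint16
        Extensions   []uint16
        Curves       []uint16
    }

    // fingerprint hashes the observable parameters into a stable identifier.
    func fingerprint(h clientHello) [32]byte {
        buf := binary.BigEndian.AppendUint16(nil, h.Version)
        for _, list := range [][]uint16{h.CipherSuites, h.Extensions, h.Curves} {
            for _, x := range list {
                buf = binary.BigEndian.AppendUint16(buf, x)
            }
        }
        return sha256.Sum256(buf)
    }

    func main() {
        hello := clientHello{
            Version:      0x0303,                           // legacy_version used by TLS 1.2/1.3 ClientHellos
            CipherSuites: []uint16{0x1301, 0x1302, 0xc02f}, // example suites
            Extensions:   []uint16{0x0000, 0x0010, 0x002b}, // server_name, ALPN, supported_versions
            Curves:       []uint16{0x001d, 0x0017},         // x25519, secp256r1
        }
        fmt.Printf("fingerprint: %x\n", fingerprint(hello))
    }
    ```

    Two clients that advertise exactly the same feature set collapse to the same value, which is what lets a censor allowlist the fingerprints of popular browsers and flag everything else.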


    In response, many circumvention tools have attempted to mimic popular TLS implementations such as browsers, but this technique has several challenges. First, it is burdensome to keep up with the rapidly-changing browser TLS implementations, and know what fingerprints would be good candidates to mimic. Second, TLS implementations can be difficult to mimic correctly, as they offer many features that may not be supported by the relatively lightweight libraries used in typical circumvention tools. Finally, dependency changes and updates to the underlying libraries can silently impact what an application’s TLS fingerprint looks like, making it difficult for tools to control.


    In this paper, we collect and analyze real-world TLS traffic from over 11.8 billion TLS connections over 9 months to identify a wide range of TLS client implementations actually used on the Internet. We use our data to analyze the TLS implementations of several popular censorship circumvention tools, including Lantern, Psiphon, Signal, Outline, Tapdance, and Tor (Snowflake and meek). We find that many of these tools use TLS configurations that are easily distinguishable from the real-world traffic they attempt to mimic, even when these tools have put effort into parroting popular TLS implementations.


    To address this problem, we have developed a library, uTLS, that enables tool maintainers to automatically mimic other popular TLS implementations. Using our real-world traffic dataset, we observe many popular TLS implementations that we are able to correctly mimic with uTLS, and we describe ways our tool can more flexibly adapt to the dynamic TLS ecosystem with minimal manual effort.
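

    As a rough usage sketch of uTLS (based on the public github.com/refraction-networking/utls Go package; exact identifiers may differ across versions), a circumvention tool wraps an ordinary TCP connection and asks the library to present a browser-like ClientHello instead of the fingerprint of its default TLS stack.

    ```go
    package main

    import (
        "fmt"
        "net"

        utls "github.com/refraction-networking/utls"
    )

    func main() {
        // Dial a plain TCP connection first; uTLS wraps it with a handshake
        // whose ClientHello imitates a mainstream browser fingerprint.
        raw, err := net.Dial("tcp", "example.com:443")
        if err != nil {
            panic(err)
        }
        defer raw.Close()

        // HelloChrome_Auto selects a recent Chrome-like fingerprint; the
        // library also ships Firefox, iOS, and randomized profiles.
        conn := utls.UClient(raw, &utls.Config{ServerName: "example.com"}, utls.HelloChrome_Auto)
        if err := conn.Handshake(); err != nil {
            panic(err)
        }
        defer conn.Close()

        fmt.Printf("negotiated TLS version: %#x\n", conn.ConnectionState().Version)
    }
    ```

    Pinning a named fingerprint such as a Chrome-like profile, rather than emitting whatever the underlying TLS library happens to send, also helps with the dependency-update problem described above, since library changes no longer silently alter how the tool looks on the wire.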