Paper 2024/1869
Black-box Collision Attacks on Widely Deployed Perceptual Hash Functions
Abstract
Perceptual hash functions identify multimedia content by mapping similar inputs to similar outputs. They are widely used for detecting copyright violations and illegal content but lack transparency, as their design details are typically kept secret. Governments are considering extending the application of these functions to Client-Side Scanning (CSS) for end-to-end encrypted services: multimedia content would be verified against known illegal content before applying encryption. In 2021, Apple presented a detailed proposal for CSS based on the NeuralHash perceptual hash function. After strong criticism pointing out privacy and security concerns, Apple withdrew the proposal, but the NeuralHash software is still present on Apple devices. Brute-force collisions for NeuralHash (with a 96-bit result) require $2^{48}$ evaluations. Shortly after the publication of NeuralHash, it was demonstrated that it is easy to craft two colliding inputs for NeuralHash that are perceptually dissimilar. In the context of CSS, this means that it is easy to falsely incriminate someone by sending an innocent picture with the same hash value as illegal content. This work shows a more serious weakness: when inputs are restricted to a set of human faces, random collisions are highly likely to occur in input sets of size $2^{16}$. Unlike the targeted attack, our attacks are black-box attacks: they do not require knowledge of the design of the perceptual hash functions. We also show that the false negative rate is high. We demonstrate the generality of our approach by applying a similar attack to PhotoDNA, a widely deployed perceptual hash function proposed by Microsoft with a hash result of 1152 bits. Here we show that specific small input sets result in near-collisions, with similar impact. These results imply that the current designs of perceptual hash functions are completely unsuitable for large-scale client-side scanning, as they would result in an unacceptably high false positive rate. This work underscores the need to reassess the security and feasibility of perceptual hash functions, particularly for large-scale applications where privacy risks and false positives have serious consequences.
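For context (not part of the paper), a minimal back-of-the-envelope sketch of the birthday bound for an *ideal* 96-bit hash. It illustrates why a generic brute-force collision needs roughly $2^{48}$ evaluations, and why a collision within only $2^{16}$ uniformly random 96-bit hashes would be astronomically unlikely; the function names and printed set sizes below are purely illustrative.

```python
import math

HASH_BITS = 96          # NeuralHash output length
N = 2 ** HASH_BITS      # number of possible hash values for an ideal hash

def collision_probability(num_inputs: int) -> float:
    """Standard birthday approximation: probability of at least one collision
    among num_inputs uniformly random values drawn from N possibilities."""
    return 1.0 - math.exp(-num_inputs * (num_inputs - 1) / (2.0 * N))

# For an ideal 96-bit hash, ~2^48 inputs give a ~39% collision chance,
# while 2^16 inputs give a probability on the order of 10^-20.
for k in (16, 32, 48, 49):
    p = collision_probability(2 ** k)
    print(f"2^{k} inputs: collision probability ~ {p:.3e}")
```

The contrast with the ideal-hash numbers is what makes the reported result significant: collisions observed within sets of only $2^{16}$ face images indicate that NeuralHash outputs on such inputs are far from uniformly distributed.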
Metadata
- Available format(s)
- PDF
- Category
- Attacks and cryptanalysis
- Publication info
- Preprint.
- Keywords
- Perceptual Hashing, Collisions, Client-Side Scanning, NeuralHash, PhotoDNA, CSAM detection
- Contact author(s)
- diane leblanc-albarel @ kuleuven be
- bart preneel @ esat kuleuven be
- History
- 2025-02-24: revised
- 2024-11-15: received
- Short URL
- https://ia.cr/2024/1869
- License
- CC BY-NC-ND
BibTeX
@misc{cryptoeprint:2024/1869,
      author = {Diane Leblanc-Albarel and Bart Preneel},
      title = {Black-box Collision Attacks on Widely Deployed Perceptual Hash Functions},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1869},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1869}
}