Anonymization for data sharing has become a widely used paradigm for preserving the privacy of data subjects. Since the introduction of k-anonymity, dozens of methods and enhanced privacy definitions have been proposed. However, over-eager attempts to minimize the information lost during anonymization can themselves allow private information to be inferred. A proof of concept of this “minimality attack” has been demonstrated for a variety of algorithms and definitions [?].
In this paper, we provide a comprehensive analysis of this attack and demonstrate that, with care, its effect can be almost entirely countered. The attack allows an adversary to increase his (probabilistic) belief in certain facts about individuals in the data. We show that (a) a large class of algorithms is not affected by the attack; (b) for a class of algorithms with a “symmetric” property, the attacker's belief increases by at most a small constant; and (c) even for an algorithm chosen to be highly susceptible to the attack, the attacker's belief when using the attack increases by at most a small constant factor. We also provide a series of experiments showing that, in all these cases, the confidence about any individual's sensitive value remains low in practice, while the published data remains useful for its intended purpose. From this, we conclude that the impact of such method-based attacks can be minimized.
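To make the k-anonymity condition underlying the discussion concrete, the following is a minimal sketch (not the paper's method): a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The table, attribute names, and generalized values below are hypothetical illustrations.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k rows (the k-anonymity condition)."""
    counts = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(c >= k for c in counts.values())

# Hypothetical toy table after generalization: "zip" and "age" are
# quasi-identifiers; "disease" is the sensitive attribute.
table = [
    {"zip": "130**", "age": "<30",  "disease": "flu"},
    {"zip": "130**", "age": "<30",  "disease": "cold"},
    {"zip": "148**", "age": ">=40", "disease": "flu"},
    {"zip": "148**", "age": ">=40", "disease": "cancer"},
]

print(is_k_anonymous(table, ["zip", "age"], 2))  # True: each group has 2 rows
print(is_k_anonymous(table, ["zip", "age"], 3))  # False: groups are too small
```

The minimality attack exploits the fact that an algorithm choosing the *least* generalization satisfying such a check leaks information about which raw tables could have produced the output; the paper's analysis bounds how much an attacker's belief can grow from that leakage.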