FACE RECOGNITION: PRIVACY ISSUES AND ENHANCING TECHNIQUES

AUTHOR
Alberto Cammozzo

ABSTRACT

Face recognition techniques and use

Face detection is used to automatically detect or isolate faces from the rest of the picture and, for videos, to track a given face or person across the flow of video frames. These algorithms only spot a face in a photo or video; they do not establish identity. They may even be used to enhance privacy, for instance by blurring the faces of passers-by in pictures taken in public (as Google Street View does). The activist app SecureSmartCam automatically obfuscates photos taken at protests to protect the identity of the protesters. Face detection is also used in digital signage (video billboards) to display targeted ads appropriate to the age, sex, or mood of the people watching. Billboards can also recognize returning visitors and engage in interaction with them.
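The privacy-enhancing side of detection can be sketched in a few lines of Python. The sketch below assumes OpenCV (cv2) and its bundled frontal-face Haar cascade; the file names and blur parameters are illustrative, not taken from any of the products mentioned above.

import cv2

def blur_faces(image_path, output_path):
    """Detect faces and blur them; return the number of faces found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred copy,
        # Street View-style: the face is located but never identified.
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 30)
    cv2.imwrite(output_path, img)
    return len(faces)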

Face matching automatically compares a given face with other images in an archive and selects those where the same person appears. The technology relies on several sophisticated biometric techniques to match any face, even in a video stream, against a database of already known faces. It is often used by surveillance services in courthouses, stadiums, malls, transport infrastructures, and airports, sometimes combined with iris scanning or tracking. Combined with the wealth of publicly available pictures from social networking, matching poses privacy issues: from a single picture it is possible to link together all images belonging to the same person. A face matching search engine drawing on Flickr, Picasa, YouTube, and social networks' repositories is now entirely feasible, as demonstrated by prototype software and products planned for release. The privacy issues are huge: indiscriminate face matching would allow anyone to match a picture taken with a cellphone against the wealth of pictures available online: a stalker's paradise. The "creepiness" of such a service has been acknowledged by Google executive Eric Schmidt. False positives are also worrying: what happens if one of the many law-enforcement cameras mistakes you for a fugitive criminal? Or if you enter a casino and are recognized as a problem gambler?
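The core of matching can be pictured as nearest-neighbour search over biometric feature vectors ("embeddings"). The Python sketch below leaves the embedding computation abstract; the identifiers and the cosine-similarity threshold are assumptions chosen for illustration.

import numpy as np

def match_face(probe, gallery, threshold=0.6):
    """Return the ids of all gallery faces similar enough to the probe.

    probe: embedding vector of the query face.
    gallery: dict mapping face ids to embedding vectors.
    """
    matches = []
    for face_id, emb in gallery.items():
        # Cosine similarity: 1.0 means identical direction in feature space.
        sim = float(np.dot(probe, emb)
                    / (np.linalg.norm(probe) * np.linalg.norm(emb)))
        if sim >= threshold:
            matches.append(face_id)
    return matches

Note that nothing in this loop needs a name: matching silently clusters all pictures of the same person, and the threshold governs the trade-off between missed matches and the false positives discussed above.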

Face identification identifies someone by linking pictorial data with identity data. Automatic identification requires that the matched face already be linked with identity data in a database. Manual identification happens either through voluntary enrollment or when someone else "tags" a picture: by manually tagging someone, you make her subsequent identification possible. Facebook and Picasa already implement automatic face matching of tagged faces, with significant privacy consequences.
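The mechanism is simple enough to sketch: a single manual tag propagates through a cluster of matched faces, so every untagged photo of the same person inherits the identity. All names and picture ids below are hypothetical.

tags = {"img_001": "Alice"}  # one photo, manually tagged by a third party
clusters = {"cluster_7": ["img_001", "img_042", "img_311"]}  # matched faces

def identify(cluster_id):
    """One tag anywhere in a cluster identifies every photo in it."""
    for img in clusters[cluster_id]:
        if img in tags:
            return tags[img]
    return None

print(identify("cluster_7"))  # -> "Alice", including the untagged photos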

Identity verification automatically performs matching and identification on a face that has been previously identified. Certain computer operating systems allow biometric identity verification in place of traditional credentials, and some firms and schools use face recognition in their time attendance systems. This poses serious threats to privacy if biometric identification data leaks out of the identification systems, since many systems are interoperable: standardized facial biometric "signatures" allow identification even without actual pictures. A global biometric face recognition database is conceivable.
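Verification is the simplest of the four operations: a 1:1 comparison of a live sample against the template stored at enrollment. The sketch below uses Euclidean distance and an assumed acceptance threshold, both illustrative; it also shows why the template, not the picture, is the secret.

import numpy as np

def verify(enrolled_template, live_sample, max_distance=0.4):
    """Accept only if the live face is close enough to the enrolled template."""
    distance = float(np.linalg.norm(enrolled_template - live_sample))
    return distance <= max_distance

Because standardized templates are interoperable, a leaked enrolled_template can be matched by any compatible system, which is exactly the leakage threat described above.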

Privacy issues

Major privacy issues linked to pictorial data and face recognition can be summarized as follows:

(1) unintended use: data collected for some purpose and in a given scope is used for some other purpose in a different scope, for instance surveillance cameras in malls used for marketing purposes;

(2) data retention: the retention time of pictures (or of information derived from matched faces) should be appropriate to the purpose for which they were collected, and any information has to be deleted once it expires. For instance, digital signage systems should retain data for a very limited time span, while time attendance or security systems have different needs to reach their intended goals (a minimal sketch of such expiry enforcement follows this list);

(3) context leakage: images taken in one social context of life (affective, family, workplace, in public) should not leak outside that domain. Following this principle, images taken in public places or at public events should never be matched without explicit consent, since the public social context assumes near anonymity, especially in political or religious gatherings;

(4) information asymmetry: pictorial data may be used without the explicit consent of the person depicted, or even without her knowledge that the information has been collected for some purpose. I may have no hint that pictures of me taken in public places have been uploaded to repositories; as long as those pictures remain anonymous, my privacy is largely preserved, but applying face matching breaks the privacy context. Someone may easily hold information about me that I do not know myself.
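The retention principle (2) above lends itself to a minimal sketch, assuming each stored record carries a capture timestamp and a declared purpose; the purposes and time-to-live values are illustrative assumptions, not drawn from any regulation.

import time

# Hypothetical retention windows, in seconds, per declared purpose.
TTL = {"digital_signage": 60, "time_attendance": 90 * 86400}

def purge_expired(records):
    """Keep only records still within the retention window of their purpose."""
    now = time.time()
    return [r for r in records if now - r["captured_at"] <= TTL[r["purpose"]]]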

Privacy enhancing techniques

Even though matching is the major threat, research on privacy enhancing techniques for face recognition concentrates on identification. Possible approaches to enhancing privacy are splitting the matching and identification tasks [Erkin et al., 2009], partial de-identification of faces [Newton, Sweeney, Malin, 2005], or revocation capability [Boult, 2006], all intended to reinforce people's trust. Some attempts have been made to develop opt-out techniques that protect privacy in public places: temporarily blinding CCTV cameras, wearing a pixelated hood, or applying special camouflage make-up. These and other obfuscation techniques [Brunton, Nissenbaum, 2011], such as posting "wrong" faces online, aim at re-balancing information asymmetry.
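Of these techniques, de-identification is the easiest to sketch. The simplified Python below follows the spirit of the k-Same algorithm of Newton, Sweeney, and Malin (2005): each face is replaced by the average of its k nearest faces, so a matcher cannot tell which of k people a published image came from. Real k-Same operates on aligned images and adds bookkeeping to guarantee k-anonymity; both are glossed over here.

import numpy as np

def k_same(faces, k=3):
    """De-identify a set of aligned face images, shape (n, height, width)."""
    n = len(faces)
    flat = faces.reshape(n, -1).astype(float)
    out = np.empty_like(flat)
    for i in range(n):
        # Find the k faces closest to face i (including itself)...
        dists = np.linalg.norm(flat - flat[i], axis=1)
        nearest = np.argsort(dists)[:k]
        # ...and publish their average in place of the original.
        out[i] = flat[nearest].mean(axis=0)
    return out.reshape(faces.shape)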

REFERENCES

T. Boult, "Robust Distance Measures for Face-Recognition Supporting Revocable Biometric Tokens," in Automatic Face and Gesture Recognition, IEEE International Conference on (Los Alamitos, CA, USA: IEEE Computer Society, 2006), 560-566.

Finn Brunton and Helen Nissenbaum, "Vernacular Resistance to Data Collection and Analysis: A Political Theory of Obfuscation," First Monday, May 2, 2011.

Zekeriya Erkin et al., "Privacy-Preserving Face Recognition," in Privacy Enhancing Technologies, ed. Ian Goldberg and Mikhail J. Atallah, vol. 5672 (Springer Berlin Heidelberg, 2009), 235-253.

Elaine M. Newton, Latanya Sweeney, and Bradley Malin, “Preserving Privacy by De-Identifying Face Images,” IEEE Transactions on Knowledge and Data Engineering 17, no. 2 (2005): 232-243.

Harry Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation (Springer, 2007).