Erosion of Privacy in Computer Vision Systems

Maciej Smiatacz


Over the last decade, progress in the field of computer vision has accelerated dramatically. Five years ago, pattern recognition and image processing algorithms seemed too complex for real-world applications. Today even personal computers are fast enough to perform these tasks effectively, and the first commercial computer vision systems have appeared. Although they may help to combat fraud, reduce crime, or speed the passage of people through airports, some attention must be given to the "erosion of privacy" that this technology can cause.

A biometric system is a pattern recognition system that establishes the authenticity of a specific physiological or behavioural characteristic possessed by a user. The best-known example of biometrics is fingerprint recognition, but from a computer vision point of view face recognition is far more interesting. Of course, it would be convenient to replace PINs, passwords and social security numbers with algorithms able to recognise users by simply looking at them, but this raises specific problems. First, a face recognition system must use a large database of electronic photographs of its users. These data can easily be transferred or secretly processed; for example, a picture acquired by a tourist information system at an airport could be used to gain access to confidential files. Moreover, users can never be sure that the system is not trying to classify their race or determine their psychological characteristics by analysing their facial expressions.

Then there is the question of reliability. Although researchers report recognition rates of nearly 100%, these results are often obtained on unrealistic data sets. The typical accuracy of the best face recognition systems drops to about 50% when the photographs were taken a year before the test, and PC Magazine reported that one commercial product could be fooled simply by printing a photograph of someone's face and placing it in front of the camera. The performance of a verification system is characterised by two error statistics: the false-reject rate and the false-alarm rate. A false reject occurs when the system rejects a valid identity; a false alarm occurs when it incorrectly accepts an invalid one. This means that face recognition system designers must always compromise: for a bank ATM it is more important not to irritate legitimate customers, while for a system guarding access to a secure area the false-alarm rate must be as low as possible. In either case, this is not yet a technology we can fully trust.
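The trade-off between the two error statistics can be illustrated with a short sketch. The similarity scores and thresholds below are invented for illustration; a real system would obtain such scores from a face matching engine.

```python
# Hedged sketch: computing the false-reject and false-alarm rates
# described above from match scores. All numbers are illustrative.

def error_rates(genuine_scores, impostor_scores, threshold):
    """Accept a claim when its similarity score >= threshold.

    Returns (false-reject rate, false-alarm rate):
    - false reject: a genuine user's score falls below the threshold
    - false alarm: an impostor's score reaches the threshold
    """
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Illustrative scores (higher = more similar to the claimed identity)
genuine = [0.91, 0.85, 0.78, 0.64, 0.55]   # valid users
impostor = [0.70, 0.52, 0.40, 0.33, 0.21]  # impostors

# Strict threshold (secure area): no false alarms, many false rejects
print(error_rates(genuine, impostor, 0.80))  # -> (0.6, 0.0)
# Lenient threshold (bank ATM): no false rejects, more false alarms
print(error_rates(genuine, impostor, 0.50))  # -> (0.0, 0.4)
```

Moving the threshold trades one error for the other; no single setting minimises both, which is exactly why the designer must choose according to the application.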

Three projects related to biometrics are currently under way at the Faculty of Electronics, Telecommunications and Informatics of the Technical University of Gdansk, some of them in co-operation with the Belarusian Academy of Sciences (BAS). The goal of the first project is to create a universal framework for exhaustive testing of different face recognition algorithms. The second will provide an implementation of the active shape models technique and several other methods useful for automatic face localisation and tracking. The BAS project, on the other hand, concentrates on selecting invariant features of the human face that would make recognition possible throughout a person's entire lifetime. Although the first stage of all three projects concentrates on the technical aspects of human face biometrics and the corresponding models and algorithms, the second stage, about to begin, will have to resolve visual data safety problems in order to protect privacy and make the operation of our systems more trustworthy to users.

In recent years a great deal of research has been done on automatic object localisation and tracking, gesture recognition, and the analysis of motion pictures in general. These problems are extremely complex, but as computers grow more and more powerful we may soon see systems that completely understand our actions. Again, this can help us prevent terrorist attacks or simply catch thieves in a supermarket; on the other hand, Orwell's vision of people living under permanent surveillance has never been closer to reality. The first automatic surveillance systems based on computer vision have already appeared, and while they may prove more effective than human guards, they can hardly be as intelligent as human beings. This means they will fail in atypical situations, or that it will be easy to cheat them by supplying unusual signals that force the system to operate under conditions not taken into account during the design process.

The probability that biometric data will be misused by government institutions is fairly low, but we must not forget that this technology will soon be available to everyone. A simple web cam costs less than $100, and there are face recognition programs costing about $200. Anyone can place a web cam outside a building, and anyone can use the pictures it transmits. From a technical point of view it is quite easy to run a face recognition program that continuously analyses the pictures from a remote web cam and writes a log entry every time a particular person appears in a given place.
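The logging scenario sketched above requires remarkably little code. In the following sketch the recognition step is a stub (a plain function standing in for a real face recognition engine) and the frames are placeholder strings rather than images; only the overall structure reflects the scenario in the text.

```python
# Minimal sketch of unattended appearance logging. `recognise` is a
# hypothetical stand-in for a real face recognition engine.

from datetime import datetime, timezone

def watch(frames, recognise, target, log):
    """Append a timestamped entry to `log` each time `recognise`
    identifies `target` in a frame from the camera feed."""
    for frame in frames:
        if recognise(frame) == target:
            log.append((datetime.now(timezone.utc).isoformat(), target))
    return log

# Stub: each placeholder "frame" directly names who is visible in it.
frames = ["alice", "nobody", "bob", "alice"]
log = watch(frames, recognise=lambda f: f, target="alice", log=[])
print(len(log))  # two sightings of "alice" logged
```

With a real camera feed and matching engine plugged in, the same loop would silently build a movement record of the targeted person, which is precisely the privacy concern.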

The ethical problems of computer vision have already been noticed by the International Biometric Industry Association, which has prepared several standards and recommendations for system manufacturers. The new technology will probably also require legislative reform. Even so, it would be unwise to halt the development of computer vision for fear that it might be used unfairly.