
Part 1 - Facial Recognition: A Challenge for Europe or a Threat to Human Rights?



These three articles deal with the use of facial recognition, mainly in the European Union. Their purpose is to provide a thorough and coherent analysis of its lawfulness under European legislation. Even though a concrete legal background is provided by European Directives and Regulations, many EU member states apply facial recognition without a solid legal basis. The study therefore seeks to offer valid answers to the data rights issues that arise. For this purpose, it is divided into three axes. The first chapter provides a broad analysis of the European legislation governing the use of facial recognition. In the second chapter, special emphasis is given to the application of this method at national level; a critical approach is attempted with the aim of defining the legal basis and illuminating the legal gaps that arise. The third chapter offers an alternative perspective on the issue by demonstrating the wide use of facial recognition technology at international level and the different legal regimes that govern it. The final conclusions not only reflect the research’s findings but also propose effective safeguards for the lawful application of facial recognition in order to improve the European digital strategy.



A “Europe fit for the digital age” is one of the Commission’s six top priorities for 2019-2024. It focuses on the development of a high-level digital strategy which places the use of new technologies at the forefront in order to create new perspectives for businesses, to enhance security and trust in technology and to drive social progress. Accordingly, the EU’s digital strategy aims to put new technology at the service of the social good, to produce a fair and competitive digital economy with benefits for both businesses and people and, finally, to bring about an open, democratic and sustainable society. All these goals are to be achieved through a range of actions, both at national and at European level. One of those actions is the use of artificial intelligence, which “can bring many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy. The EU’s approach to AI will give people the confidence to embrace these technologies while encouraging businesses to develop them”.


Artificial intelligence performs many functions that were traditionally carried out only by humans. Its scope extends to all levels of social life, such as health, transport, business and the economy. It should also be noted that the EU invests significant amounts to increase the benefits that artificial intelligence brings to our society and economy. In the European Commission’s White Paper, entitled «White Paper on Artificial Intelligence - A European approach to excellence and trust», it is explicitly highlighted that artificial intelligence is closely connected with European legislation on human rights, especially in relation to the protection of privacy and data rights. As the possibilities for monitoring and analyzing people's daily habits and actions increase, for instance in the workplace, it is easy to conclude that significant risks arise in relation to these issues.

Facial recognition is a representative example of the application of artificial intelligence. According to the Article 29 Data Protection Working Party’s Opinion 3/2012 on developments in biometric technologies, facial recognition is defined as «the automatic processing of digital images which contain the faces of individuals for identification, authentication/verification or categorisation of those individuals». It can be carried out through various means, such as video surveillance systems and smartphones, fingerprint readers, vein pattern readers or just a smile into a camera, which might replace cards, codes, passwords and signatures. The White Paper emphasises that facial recognition may serve two functions: identification and authentication of the person. As noted, «identification means that the template of a person’s facial image is compared to many other templates stored in a database to find out if his or her image is stored there. Authentication (or verification) on the other hand is often referred to as one-to-one matching. It enables the comparison of two biometric templates, usually assumed to belong to the same individual. Two biometric templates are compared to determine if the person shown on the two images is the same person. Such a procedure is, for example, used at Automated Border Control (ABC) gates used for border checks at airports».
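To make the distinction between the two dimensions concrete, the following minimal sketch in Python contrasts one-to-one authentication with one-to-many identification. It assumes hypothetical, pre-computed face embeddings as the "biometric templates" and an illustrative similarity threshold; it is not the procedure actually deployed at ABC gates or in any specific product.

```python
# Minimal sketch: one-to-one authentication vs one-to-many identification.
# Templates are hypothetical pre-computed embedding vectors; the threshold
# is illustrative only, not an empirically tuned value.
from __future__ import annotations

import numpy as np

THRESHOLD = 0.8  # illustrative similarity cut-off


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two biometric templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def authenticate(probe: np.ndarray, reference: np.ndarray) -> bool:
    """One-to-one matching: does the probe match the claimed identity's template?"""
    return cosine_similarity(probe, reference) >= THRESHOLD


def identify(probe: np.ndarray, database: dict[str, np.ndarray]) -> str | None:
    """One-to-many matching: is the probe's template stored anywhere in the database?"""
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None
```

The sketch makes visible why the two operations raise different concerns: authentication compares a probe against a single consented reference, whereas identification requires searching an entire database of stored templates.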


It can easily be concluded that the automated processing of biometric data involved in facial recognition carries risks for privacy and the protection of fundamental rights. Nevertheless, it is a technique that tends to be widely used in several countries, both in Europe and internationally. Therefore, there is a strong interest in examining the specific features of facial recognition, especially its regulatory framework at EU level (first chapter). Furthermore, we will examine various methods of facial recognition used in certain countries and their legal grounds (second chapter). An additional part of the study focuses on the rules that govern facial recognition at international level in order to provide a comparative study of the issues in question (third chapter). Finally, some personal thoughts will be shared regarding facial recognition’s legal challenges.


The European regulatory framework on facial recognition


In general, facial recognition is closely related to issues of private life. Privacy has a wide meaning and can take various dimensions. Notably, it includes many aspects of a person's physical and social identity, such as the name, the physical, moral and psychological make-up of the person, and the right to personal development and self-determination. In a stricter sense, taking into account the method by which facial recognition takes place, European legislation on personal data protection applies automatically. In particular, data collected by facial recognition technology is classified as biometric data, since information about facial features is collected, which constitutes “special categories of personal data” under the General Data Protection Regulation. The GDPR divides biometric data into two distinct categories: data relating to the physical and physiological characteristics of a natural person, such as weight, dactyloscopic data, eye colour, voice and ear shape, and data relating to behavioural characteristics, such as keystroke analysis, handwritten signature analysis and eye tracking. Both of these categories allow for and/or confirm the unique identification of that natural person.


Therefore, processing special categories of personal data is lawful only if one of the specific conditions of Article 9§2 of the Regulation is met. In that respect, the opinion of the European Data Protection Supervisor, who has carried out a thorough legal and ethical examination of facial recognition, should be noted. Firstly, there is the aforementioned requirement to meet one of the conditions of Article 9§2 of the GDPR. Then, special emphasis is given to the content of the consent that should be required. Article 7 of the Regulation sets out the nature of that consent: “If the data subject's consent is given in the context of a written declaration which also concerns other matters, the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language. Any part of such a declaration which constitutes an infringement of this Regulation shall not be binding”. The question arises: to what extent is it possible to obtain consent with those elements? How can we be sure that the data subject gives his or her consent freely and without any reservation?


Furthermore, accountability and transparency should be observed. As has been emphasised, “it is almost impossible to trace the origin of the input data; facial recognition systems are fed by numerous images collected by the internet and social media without our permission. Consequently, anyone could become the victim of an algorithm’s cold testimony and be categorised (and more than likely discriminated) accordingly”. In this regard, we should add the GDPR provisions on data protection by design and by default. Under these provisions, “the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects”. In other words, any organisation or natural or legal person who intends to process personal data is encouraged to adopt every necessary, appropriate and useful measure at the earliest stage of the design of the processing operations, as well as to ensure that all conditions required for the lawfulness of the processing, according to Article 6 of the GDPR, are fulfilled. Additionally, according to the guidelines on Article 25 of the GDPR, “a technical or organisational measure can be anything from the use of advanced technical solutions to the basic training of personnel, for example on how to handle customer data. Furthermore, ‘data protection by default’ refers to the choices made by a controller regarding any pre-existing configuration value or processing option that is assigned in a software application, computer program or device that has the effect of adjusting, in particular but not limited to, the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility”.
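By way of illustration only, the sketch below shows one way the quoted “by design and by default” obligations (pseudonymisation, data minimisation, restrictive defaults for storage and accessibility) might be reflected in the design of a processing operation. The field names, the key handling and the retention period are hypothetical choices, not a compliance recipe or the approach of any particular controller.

```python
# Hedged illustration of "data protection by design and by default":
# pseudonymise identifiers, collect only what the purpose requires,
# and set restrictive defaults for retention and sharing.
import hashlib
import hmac
from dataclasses import dataclass

# Assumption: in practice the key would be managed securely outside the code.
SECRET_KEY = b"replace-with-a-securely-managed-key"


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymisation)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


@dataclass
class EnrolmentRecord:
    """Stores only the data strictly needed for the stated purpose (data minimisation)."""
    subject_pseudonym: str
    template_reference: str
    # Restrictive defaults ("by default"): short retention, no third-party sharing.
    retention_days: int = 30
    shared_with_third_parties: bool = False


def enrol(national_id: str, template_reference: str) -> EnrolmentRecord:
    # The direct identifier is pseudonymised before anything is stored.
    return EnrolmentRecord(
        subject_pseudonym=pseudonymise(national_id),
        template_reference=template_reference,
    )
```

The point of the sketch is that such safeguards are decisions taken at design time and encoded as defaults, rather than options left to be switched on later.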


The European Data Protection Supervisor also seems uncertain about compliance with the principle of data minimisation. Since the facial recognition method itself is not fully accurate and clear, whether only the necessary data is collected is called into question.

Finally, facial recognition is disputable from an ethical point of view, as is its value in a democratic society. Treating the human personality as an "object" clearly violates fundamental human rights and weakens the value of the individual.

For all these reasons, the European Data Protection Supervisor appears to take a rather negative stance on the use of facial recognition technology, especially since it is often used with respect to vulnerable social groups. Furthermore, he is against automated recognition technologies in public spaces and has suggested a temporary ban on them. However, without prohibiting facial recognition outright, he places a special burden of responsibility on the national data protection supervisory authorities, which are also called upon to decide on this issue. Hence, his opinion on artificial intelligence clarifies the safeguards that artificial intelligence requires with respect to fundamental human rights and recommends that national data protection authorities issue specific guidelines on this matter.


In addition to the legal analysis performed by the European Data Protection Supervisor, the processing of personal data by means of facial recognition must meet some specific provisions of EU data protection legislation. In that sense, both Article 57§1c of the GDPR and Article 46§1c of Directive 2016/680 require the prior opinion of the national data protection supervisory authority for any measure restricting the protection of personal data. Certainly, a data protection impact assessment is needed in order to identify the dangers to fundamental human rights and freedoms as well as to suggest efficient and appropriate solutions.


In conclusion, it becomes obvious that EU legislation sets quite strict requirements for the lawful application of facial recognition. In this respect, Margrethe Vestager, the European Commission’s executive vice-president for digital affairs, has been very critical of this method insofar as it breaches GDPR provisions, especially those requiring clear consent given without any reservation. However, she did not exclude the possibility of its use in special circumstances, such as in the domain of security, and invited national data protection authorities to review the legal grounds that would allow member states to make their own domestic decisions. Moreover, responding to recommendations by some members of the European Parliament, Mr Kilian Gross of the European Commission’s DG Connect declared during a recent meeting of the European Parliament’s Internal Market Committee that a future ban on the use of facial recognition technology in Europe should not be excluded. He also highlighted the findings of the White Paper on Artificial Intelligence related to the use of facial recognition.



