“The purpose of these experiments is to accustom the population to the presence of these surveillance technologies” by Olivier Tesquet

On May 11, a working group of the Senate Law Commission presented a report advocating a three-year experiment with facial recognition in France. On the pretext of wanting to regulate the use of this technology, the three rapporteurs, Arnaud de Belenet (LREM), Jérôme Durain (PS) and Marc-Philippe Daubresse (LR), propose around thirty measures to “remove the risk of a surveillance society”, while opening the door to experimentation in public space.

In their very first proposal, the senators call for “a national survey aimed at evaluating how the French perceive biometric recognition” and for work to “identify the levers for better acceptability of this technology”. The proposal raises fears of habituating the population to facial recognition. Even though the practice remains little known to the general public, this technology has already been used by the police in France for several years. The association La Quadrature du Net notes that in 2021 the police “performed 1,600 facial recognition operations per day, outside any legal framework”, drawing on the 8 million faces in the TAJ file (Traitement des antécédents judiciaires, the criminal records database). On Tuesday, May 24, the association also filed a collective complaint against the Ministry of the Interior and the French state over the “Technopolice” and the surveillance of public space.

Olivier Tesquet, a journalist at Télérama specializing in digital issues and civil liberties and the author of “State of Technological Emergency”, answered Luc Auffret’s questions for QG on the risks of such technology.

Olivier Tesquet is a journalist at Télérama and author of “State of Technological Emergency” published by Premier Parallèle

QG: What are the dangers of facial recognition today in France?

Olivier Tesquet: The risk, and it exists today, is that this technology will be used without sufficient legislative and judicial safeguards. The general public does not know it, but facial recognition is already used in France through the criminal records database (TAJ), which holds 8 million photos and has been the subject of more than 600,000 queries since 2019. Sometimes, more or less informally and on shaky legal grounds, this technology is used during police checks. We then find ourselves in a situation we have already experienced with other technologies, such as the drones of the Global Security Act, censored by the Constitutional Council and then recycled in another text: use precedes the law, whose only remaining mission is to legalize what was illegal.

There is then an extremely significant risk of infringement of fundamental freedoms, which the report underlines. The most spectacular and dangerous use of this technology is its deployment in real time in public space for police purposes. No one should have to walk down the street subject to a permanent, generalized identity check.

A police officer equipped with a body camera, used in particular to film and record the faces of demonstrators

QG: The rapporteurs insisted on establishing red lines… Are they sufficient, or is there a real risk of opening the door to a surveillance society?

The risk is always the same: a red line is drawn on the ground, but it comes with exceptions, particularly security-related ones. By establishing these exceptions, we always run the risk of trivialization and of a ratchet effect that will make it very difficult to go back once the technology has begun to be deployed. The report nevertheless recommends real prohibitions, such as emotion detection, inherited from the racist pseudosciences of the 19th century, or social credit, much discussed in connection with China. But here too, this does not mean we are completely immune to this type of control, which can materialize in more insidious ways. In France, for example, social benefit agencies massively analyze data to identify “at-risk” recipients, resulting in an algorithmic and discretionary scoring of individuals.

QG: Is there not a risk of public acceptance of this technology if such a law were adopted?

I am always very wary of these so-called experimental frameworks because, very often, for the same reasons stated above, they make the outcome seem inevitable. I wonder: is the point to try out a technology to see if it improves society, or just to get people used to its presence? When I hear certain manufacturers say that this or that technology “cannot be disinvented”, does that mean we must necessarily use it? It takes a lot of moral strength to say no. That is difficult when the state of emergency, which has become the mode of government since 2015, has helped make the exceptional ordinary and the temporary permanent. Nor do I forget the deadlines of major sporting events – the Rugby World Cup in 2023, the Olympic Games in 2024 – which are always intense moments of normalization for these tools.

More generally, I have the feeling that we are not debating the technology itself, but only how it should be used. Yet in recent years we have seen large American cities, such as San Francisco, decide to ban facial recognition. This shows that all it takes is political will.

QG: The third point of the report proposes prohibiting real-time remote biometric surveillance during demonstrations on public roads. But isn’t there a risk that the captured images will later be analyzed by facial recognition software?

In theory, police use of video surveillance images is overseen by the judicial authority. In practice, we very often find that they are used in an uncontrolled manner. We come back to the age-old question of oversight: who accesses the data, and how? Without strong guarantees, we can fear the same abuses with the biometric processing of images.

Vigilance is all the more necessary as law enforcement officials would like to integrate facial recognition into other police files, such as the wanted persons file (RPF), but also those relating to threats to public security (CRISTINA, GIPASP), which are regularly criticized because they make it possible to monitor political, trade-union, or religious opinions.

In this context, with all these parameters, and despite the Senate’s cautious recommendations, one does not need much dystopian imagination to picture a future in which very granular surveillance of social movements is deployed on the back of this technology. A researcher like Vanessa Codaccioni has demonstrated this very well: the repressive policies born of anti-terrorism have spilled over into ordinary law, and the state today uses the same legal tools to monitor environmental activists or Yellow Vests as it does potential terrorists.

In many respects, it is unfortunately a return to basics, coupled with a change of scale. Historically, since the 19th century, the only categories of the population who had to prove their identity were those considered dangerous: the poor, foreigners, repeat offenders. Today, not only have new categories appeared, but tomorrow anyone could find themselves caught in the driftnets of indiscriminate surveillance.

Luc Auffret