
Hot potato: In a rare recent story about non-generative AI, the French National Assembly has approved the use of AI to assist with video surveillance of the 2024 Paris Olympics. The move came despite objections from rights groups, who say its use could violate civil liberties while paving the way for future use of intrusive algorithm-driven video surveillance in Europe.
According to The Reg, the French government has adopted Article 7 of the pending law for the 2024 Olympic and Paralympic Games, authorizing the automated analysis of surveillance video from fixed and drone cameras.
The system is said to detect specific suspicious events in public places, such as abnormal behavior, predetermined events, and crowd surges.
While the AI surveillance plan could still be challenged in the Constitutional Court, France is expected to be the first country in the European Union to deploy such a system.
France appears to be ignoring warnings from 38 civil society organizations that raised concerns about the technology in an open letter. They said the proposed surveillance measures violate international human rights law because they breach the principles of necessity and proportionality and pose an unacceptable risk to fundamental rights, such as the rights to privacy, freedom of assembly and association, and the right to non-discrimination.
The letter warns that if the AI system is adopted, it will set a precedent for unwarranted and disproportionate surveillance in public spaces.
"If the purpose of algorithm-driven cameras is to detect specific suspicious events in public spaces, they will have to capture and analyze the physical features and behavior of individuals present in those spaces, such as their body posture, gait, movements, gestures, or appearance," the open letter reads. "Isolating individuals from their surroundings, without which the system's objectives would be impossible to achieve, constitutes 'unique identification.'"
As is often the case with AI surveillance, there is also the fear of discrimination. "The use of algorithmic systems to fight crime has led to over-policing, structural discrimination in the criminal justice system, and the over-criminalization of racial, ethnic and religious minorities," the groups added.
Mher Hakobyan, Amnesty International's advocacy adviser on artificial intelligence regulation, said the decision puts France at risk of being permanently transformed into a dystopian surveillance state.
France's data protection watchdog, the National Commission on Informatics and Liberty (CNIL), backed the bill on the condition that no biometric data be processed, but privacy advocates say that is impossible.
"You can do two things: object detection or human behavior analysis; the latter is the processing of biometric data," said Daniel Leufer, policy advisor at digital rights group Access Now.
Masthead: Henning Schlottmann