Artificial Intelligence Act

13 March 2024

The EU Parliament approved the Artificial Intelligence Act, which ensures safety and compliance with fundamental rights while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications 

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions 

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can be deployed only if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
Transparency requirements 

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.
