First Effects of the European Union’s AI Regulation: Curbing Abusive Practices

It is by now well known that last August the European Regulation on Artificial Intelligence (hereinafter the “EU AI Act”) entered into force. What is less obvious is what happens from now on. While many companies and professionals are already familiarizing themselves with its implications, a key question remains: when will the authorities be able to enforce compliance and sanction violations?

In procedural terms, although the EU AI Act establishes a general transitional period of two years, it does not follow the usual vacatio legis, in which all provisions become applicable at once after the adaptation period. Instead, the Regulation opts for a progressive implementation schedule: the EU AI Act does not come into play all at once but is rolled out in phases, with each block of provisions having its own activation date, transforming the European digital landscape step by step.

The first of these milestones has now arrived: February 2, 2025. From this date onwards, certain practices linked to the use of artificial intelligence are completely banned within the European Union, meaning that any company engaging in one of the prohibited behaviours must cease immediately.
Which practices are those? Which uses of AI cross the red line? We tell you about them below:
 

1. Subliminal manipulation and exploitation of vulnerabilities


AI systems that manipulate people imperceptibly, leading them to make decisions they would not have made consciously, are prohibited, especially where this may harm them or adversely affect third parties. The same principle applies where AI systems exploit the vulnerability of specially protected groups, such as children, the elderly or people in situations of social exclusion.

These techniques have proven particularly dangerous in areas such as advertising and politics, where individual decisions can have a significant collective impact, especially among groups more susceptible to technological influence. Protecting citizens from these risks is essential to ensure that technological innovation is not put to abusive use.
 

2. Social evaluation and classification


Systems that evaluate or classify people on the basis of their social behaviour, personal characteristics or personality (whether drawn from actual or predicted data) are prohibited when the resulting evaluation leads to harmful treatment that is unjustified, disproportionate or unrelated to the context in which the data was originally generated.

This is the case of the well-known “social scoring”: assigning scores to people on the basis of their personal and behavioural data in order to evaluate their reliability in areas such as credit or social conduct. Such scores can end up determining access to services, employment or benefits.
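
To make the concept tangible, here is a minimal sketch in Python of the kind of logic that characterizes such a system; every data field, weight and threshold is invented purely for illustration:

    # Hypothetical illustration of a "social scoring" system of the kind
    # now prohibited: unrelated behavioural signals are aggregated into a
    # single "trustworthiness" score that then gates access to a service.
    # All signals and weights below are invented for illustration.
    WEIGHTS = {
        "late_bill_payments": -15,  # financial behaviour
        "traffic_fines": -10,       # conduct in an unrelated domain
        "volunteer_hours": 2,       # social behaviour
    }

    def social_score(profile: dict) -> int:
        """Aggregate heterogeneous behavioural data into one score."""
        score = 500  # arbitrary baseline
        for signal, weight in WEIGHTS.items():
            score += weight * profile.get(signal, 0)
        return score

    # The decisive (and prohibited) step: using the score to deny a
    # benefit in a context unrelated to where the data was collected.
    applicant = {"late_bill_payments": 2, "traffic_fines": 1, "volunteer_hours": 10}
    decision = "rejected" if social_score(applicant) < 500 else "accepted"
    print(f"Housing application {decision}")  # out-of-context, detrimental use

Note that processing each individual signal is not necessarily unlawful in itself; it is the aggregation into a general-purpose score and its use for unrelated, detrimental decisions that falls under the prohibition.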
 

3. Profiling for crime prediction


Systems designed to predict the likelihood that a person will commit a crime on the basis of personality profiles or personal characteristics are also prohibited. This restriction responds to the risks of discrimination and miscarriages of justice that such systems can generate, since they are not based on objective evidence.

Exception: systems that support judges or law enforcement in the framework of criminal investigations - or in national security matters, where the EU AI Act does not directly apply - in assessing the involvement of a specific person in a given crime remain admissible, provided they are based on verifiable facts directly related to the crime in question. In that case, the system is not predicting anything; it is helping to interpret objective data that is already known.

A practical example of this exception is the use of a forensic analysis system for mobile phones to assist a police investigation. If a suspect's phone contains text messages, emails or call logs directly related to a crime (e.g., threats of violence or evidence of contact with victims), the system can help investigators interpret that verifiable data: it is not predicting whether the person will offend in the future, but making sense of objective facts that already exist.
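
The line the Regulation draws can even be expressed in code. In the hypothetical Python sketch below (all field names and keywords are invented), the first function merely filters verifiable records already linked to the crime under investigation, while the second computes a propensity score from personality traits, which is precisely what the prohibition targets:

    # Hypothetical contrast between permitted interpretation of existing
    # evidence and prohibited propensity prediction.

    CASE_KEYWORDS = {"threat", "warehouse", "payment"}  # terms tied to this case

    def filter_case_related_messages(messages: list[str]) -> list[str]:
        """Permitted: surfaces existing, verifiable records directly
        related to the specific crime under investigation."""
        return [m for m in messages
                if any(k in m.lower() for k in CASE_KEYWORDS)]

    def crime_propensity_score(traits: dict) -> float:
        """Prohibited: predicts future offending from personality traits
        and personal characteristics rather than objective evidence."""
        return 0.6 * traits.get("impulsivity", 0.0) + 0.4 * traits.get("hostility", 0.0)

    seized_messages = ["Meet me at the warehouse at 10", "Lunch tomorrow?"]
    print(filter_case_related_messages(seized_messages))  # only the case-related message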
 

4. Databases for facial recognition


The use of AI systems that indiscriminately collect images from the internet or from video-surveillance footage in order to build databases for identifying or verifying individuals by the unique characteristics of their face is prohibited. Such databases could feed the prohibited techniques referred to in point 7 of this list.
 

5. Inference of emotions


Systems that seek to infer the emotions of a person in the workplace or in educational institutions are not allowed. This covers inferring emotions from facial expressions, gestures, movements, tone of voice and the like. The practice is particularly invasive because exposing a person's emotional state places them in a position of vulnerability.

Exception: inferring emotions in order to protect a person's health or ensure their safety remains acceptable; for example, using an AI system to detect symptoms of fatigue in an airline pilot in order to prevent accidents.
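
As a hypothetical sketch of what this safety exception might cover, the following Python fragment flags fatigue from the fraction of time a pilot's eyes are detected as closed, a simplified version of the PERCLOS measure used in drowsiness research; the threshold and all names are assumptions made for illustration:

    # Hypothetical safety-oriented state inference: flagging pilot fatigue
    # from the share of video frames in which the eyes are closed
    # (simplified PERCLOS). The 0.15 threshold is an illustrative value.

    def fatigue_alert(eye_closed_frames: list[bool]) -> bool:
        """Return True if the eyes were closed in more than 15% of frames."""
        perclos = sum(eye_closed_frames) / len(eye_closed_frames)
        return perclos > 0.15

    frames = [False] * 80 + [True] * 20  # eyes closed in 20% of frames
    if fatigue_alert(frames):
        print("Fatigue detected: alert the crew")  # health/safety purpose

The decisive factor is the purpose: the same inference techniques that are banned for monitoring employees' moods remain lawful when they serve health or safety.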
 

6. Inference of personal characteristics through biometrics


Systems that classify individuals on the basis of their biometric data (such as fingerprints or facial patterns) in order to determine or infer their race, sexual orientation, religious convictions and the like are likewise banned in the European Union. Such systems can lead to invasions of privacy and to discrimination, since they infer intimate traits without the individual's consent.

Exception: this prohibition does not apply when biometric data contained in lawfully obtained images or records is filtered or labelled for law-enforcement purposes. This may be the case in police or security investigations, where the analysis of biometric characteristics helps identify persons relevant to a criminal investigation or protects public safety; for example, comparing fingerprints from a crime scene with those stored in a database in order to identify a suspect.
 

7. Real-time biometric surveillance


Finally, systems that identify people remotely and instantly in public spaces by analysing their biometric characteristics, such as their face or the way they walk, are prohibited. This type of monitoring makes it possible to recognize specific people immediately and continuously, without needing to be anywhere near them. With this measure, Europe closes the door to the indiscriminate use of mass-surveillance technologies, protecting citizens' privacy.

Exception: these systems may be used in strictly necessary cases - such as the search for missing persons, the prevention of terrorist attacks or the identification of suspects in serious crimes - provided that there is a clear and direct threat to the life or safety of individuals and, as a rule, prior authorisation from a judicial or independent administrative authority.

This regulatory framework sends a clear message to technology companies: innovation cannot advance at the expense of human rights. As a result, as of February 2, all entities developing and marketing AI systems in Europe must align with these rules or face sanctions, which for prohibited practices can reach EUR 35 million or 7% of worldwide annual turnover, whichever is higher.

By eliminating these practices from its market, the European Union reinforces its commitment to technological ethics, setting clear boundaries to protect citizens from the most invasive and potentially harmful applications of the technology. Moreover, this preventive, risk-based approach not only protects against the immediate dangers of AI but also sets a global precedent for approaching the digital revolution responsibly. It is the first step towards a working model that will be consolidated in the coming decades.

Finally, it must be stressed that, although the EU AI Act is not yet fully applicable, artificial intelligence systems already fall within the scope of earlier, fully binding legislation, such as copyright law and the General Data Protection Regulation (GDPR). This means, for example, that any processing of personal data carried out within the framework of an AI system must comply with the GDPR.