Art. 5 of the EU Artificial Intelligence Act lists and describes explicitly prohibited practices involving the use of AI. This article sets fundamental legal boundaries for the use of AI applications, reflecting essential ethical standards as well. In this series of posts, we will examine these prohibitions one by one, starting with the prohibition in paragraph 1, point (a) of the article. This clause states that it is forbidden to sell, deploy, or use an AI system that employs subliminal, manipulative, or deceptive techniques to distort a person’s or group’s behaviour, in such a way that their ability to make informed decisions is significantly impaired, leading them to make choices they wouldn’t normally make, which causes or is likely to cause significant harm to that person, another person, or a group of persons. The following four points, in my opinion, capture the essence of the clause.
Intentionality of Design: the clause implies – as a critical aspect for determining liability or compliance – that the subliminal, manipulative, or deceptive techniques must be intentionally integrated into the AI system.
Objective: the aforementioned techniques must be specifically used to distort a person’s or group’s behaviour, by significantly impairing their ability to make informed decisions. Undermining decision-making capacities, then, is the mechanism through which these forbidden AI systems operate.
Consequences: these manipulative techniques must lead individuals to take decisions they would not have taken had the AI system not been used. This aspect underscores the cause-effect relationship between the use of the AI system and the altered decision-making process.
Requirement of Harm: the decisions influenced by these techniques must cause or be likely to cause significant harm to the user or others. This requirement connects the considerations above to tangible outcomes, emphasizing the potential for real-world impact.
The intention of this clause is to prevent the deployment of AI systems designed to become dangerous social actors, but the devil, as always, lies in the details. For instance, terms like “subliminal”, “manipulative”, “deceptive”, and “significant harm” are somewhat subjective and open to interpretation. This vagueness can lead to challenges in enforcement and might require extensive legal proceedings to clarify. Even more importantly, demonstrating that a decision would not have been made in the absence of an AI system, even a malicious one, may prove challenging. Indeed, establishing a clear causal link between the use of an AI system and a specific decision is complex. Decisions are influenced by a multitude of factors: personal, social, economic, and psychological. Disentangling the impact of AI from these factors is not straightforward. What is more, collecting evidence to prove that a decision was influenced by AI can be technically and logistically difficult. It may require, for example, access to protected or even classified data logs and information about the way an AI system was actually used within a decision-making process.
As a final thought, consider this version of the clause: it is forbidden to sell, deploy, or use an AI system that employs subliminal, manipulative, or deceptive techniques to distort a person’s or group’s behaviour, in such a way that their ability to make informed decisions is significantly impaired, leading them to make choices they wouldn’t normally make, which causes or is likely to cause significant good to that person, another person, or a group of persons. Would you include it as well in the EU AI Act? (My answer is yes: in my view, the ability to make informed decisions prevails over any good intention.)
