The EU AI Act #2: No-no AI practices – Part 2

In this post, we continue our examination of Article 5 of the EU Artificial Intelligence Act, which sets fundamental legal boundaries and ethical standards for the use of AI applications within the EU. In the previous post, we examined a prohibition aimed at preventing the deployment of AI systems designed to become dangerous social actors by impairing people’s ability to make informed decisions, thereby causing them – actually or likely – significant harm.

The clause in paragraph 1, point (b) of the article is similar to some extent, in that it forbids “the placing on the market, the putting into service, or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or the effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.” In this case, the focus is clearly on protecting individuals or groups whose vulnerabilities may be exploited by AI systems, leading to a distortion of their behaviour that causes, or is reasonably likely to cause, harm.

The prohibition considered in the preceding post emphasized the intentionality behind the design of potentially malicious AI systems; in this case, by contrast, the distortion of behaviour caused by a system may simply be an unwanted effect of its use. This seems to incentivize producers and providers of AI software to evaluate more carefully, in advance, the possible consequences of deploying such software. Furthermore, it is commendable, in my view, that the vulnerabilities referred to in the clause are related not only to disabilities, but also to age or to social or economic situations. Again, whether a system has distorted someone’s behaviour and has therefore caused, or is reasonably likely to cause, significant harm may be difficult to establish, but this is – unavoidably – a topic for theoretical discussion and for the examination of real cases, including from a legal perspective. The same applies to the question of which social or economic situations the article actually covers.

A personal consideration, however: what about AI-powered systems that, for instance, encourage a young user to spend many hours interacting with them, as seems to be happening with a lot of online games and social media (whether or not AI is involved)? Isn’t this exploiting a vulnerability (and not only of youngsters)? At the moment, there doesn’t appear to be clear evidence that, for example, online social networking disrupts normal aspects of human behaviour or causes psychiatric disorders (see for example this study). However, what if such evidence becomes stronger? Should these types of systems be prohibited? My answer is yes, unless they are modified appropriately and in a timely manner.
