This and (presumably many) future posts will focus on the recently approved EU Artificial Intelligence Act, passed by the European Parliament in March 2024. Here you can find a high-level summary of the act. To explore the full text, you can use this explorer tool, while this compliance checker tool lets you determine which parts of the text are most relevant to you. As the summary makes clear, a risk-based classification of AI systems lies at the core of the AI Act.
Systems that pose unacceptable risks are prohibited. Two examples are social scoring models, which evaluate or classify individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people, and systems that deploy subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
High-risk systems are regulated. One example is models used by public authorities to assess eligibility for benefits and services, including their allocation, reduction, revocation, or recovery.
Limited-risk AI systems, such as chatbots and deepfakes, are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI.
Systems that pose minimal risk (e.g. AI-enabled video games and spam filters) are unregulated, at least for the time being, as developments such as generative AI might change the landscape.
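To make the four-tier taxonomy above concrete, here is a minimal sketch in Python of how one might encode the tiers and the broad regulatory consequence attached to each. The `RiskTier` enum and the `OBLIGATIONS` mapping are illustrative names of my own; the Act itself defines the tiers in legal prose, not code.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative encoding)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # regulated (e.g. benefits-eligibility models)
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # unregulated for now (e.g. spam filters)


# Broad regulatory consequence per tier, as summarised in this post;
# the real Act spells these out in far greater detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "regulated: conformity assessment and ongoing obligations",
    RiskTier.LIMITED: "lighter transparency obligations",
    RiskTier.MINIMAL: "no specific obligations, for the time being",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {OBLIGATIONS[tier]}")
```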
The AI Act places the majority of obligations on providers (developers) of high-risk AI systems. Importantly, providers are those who intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or in a third country; this includes third-country providers whose high-risk AI system outputs are used in the EU. The AI Act also clarifies that users are natural or legal persons who deploy an AI system in a professional capacity, rather than affected end-users. Users (deployers) of high-risk AI systems have obligations of their own, albeit fewer than providers, and these apply both to users located in the EU and to third-country users whose AI system outputs are used in the EU.
Finally, specific provisions of the AI Act, to be discussed in later posts, concern General Purpose AI (GPAI) models. These are models that display significant generality, are capable of competently performing a wide range of distinct tasks regardless of how they are placed on the market, and can be integrated into a variety of downstream systems or applications. The AI Act reserves particular provisions for GPAI models that pose systemic risks: those whose training required a cumulative amount of compute greater than 10²⁵ floating-point operations (FLOPs). GPT-4, for instance, should fall into this category.
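As a back-of-the-envelope illustration of that compute threshold, the sketch below uses the common ~6 × N × D approximation for dense-transformer training compute (roughly six FLOPs per parameter per training token) to check whether a hypothetical model would cross the 10²⁵ FLOP line. The parameter and token counts are invented for the example; they are not figures from the Act or from any disclosed model.

```python
# The Act's threshold for presuming systemic risk in a GPAI model.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(5e11, 1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # 3.00e+25
print("Presumed systemic risk:", presumed_systemic_risk(5e11, 1e13))  # True
```

Note that the 6ND rule is only a heuristic: the Act counts the actual cumulative compute used for training, which providers would have to measure and document.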
In conclusion, the AI Act introduces a comprehensive framework for regulating AI systems in the EU, with a focus on transparency, accountability, and risk mitigation. From the obligations placed on providers and users of high-risk AI systems to the specific provisions for General Purpose AI models, the Act aims to balance innovation with responsible deployment. Upcoming posts will explore its implications in more detail.