The EU AI Act #2: No-no AI practices – Part 3 – Social scoring

Social scoring involves using AI systems to evaluate individuals or groups by analyzing various pieces of information about their behavior, often across multiple contexts. These systems can identify patterns or make predictions about a person’s characteristics, habits, or likely actions over time. This process creates a consolidated score, which is then used to make judgments or decisions about individuals or groups.
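
To make this concrete, the following is a minimal, purely illustrative Python sketch of such cross-context aggregation. Everything in it (the signal names, the weights, the linear formula) is a hypothetical stand-in, not a description of any real system:

```python
# A toy model of cross-context score aggregation. All signal names,
# weights, and the scoring formula are hypothetical.
from dataclasses import dataclass

@dataclass
class BehaviorSignal:
    context: str   # where the data comes from, e.g. "social_media"
    value: float   # an observed, inferred, or predicted behavior measure
    weight: float  # importance assigned by the (hypothetical) operator

def consolidated_score(signals: list[BehaviorSignal]) -> float:
    """Collapse heterogeneous, multi-context signals into one number.

    It is this collapse, and the reuse of the resulting score in
    unrelated contexts, that the prohibition targets.
    """
    return sum(s.value * s.weight for s in signals)

signals = [
    BehaviorSignal("social_media", value=-2.0, weight=5.0),  # inferred
    BehaviorSignal("finance", value=1.5, weight=10.0),       # observed
    BehaviorSignal("traffic", value=-1.0, weight=3.0),       # predicted
]
print(consolidated_score(signals))  # 2.0 -- one opaque number per person
```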

Art. 5(1)(c) of the EU AI Act explicitly bans AI systems that evaluate or classify individuals or groups over a certain period of time based on their:

  • Social behavior, whether observed, inferred from existing data, or predicted for the future.
  • Personal or personality characteristics, whether known or inferred.

The social scores derived from these systems are prohibited if they result in:

  • Unfavorable treatment in unrelated social contexts. For example, a person’s behavior on social media affecting their access to housing or education.
  • Detrimental treatment that is unjustified or disproportionate to the social behavior or its gravity. This ensures that, even within related contexts, punitive measures must align with the nature and seriousness of the behavior.

Such outcomes are discriminatory: they exacerbate inequalities, marginalize certain groups, and run contrary to the rights to dignity, equality, and non-discrimination. Additionally, using data from one context (e.g., shopping habits) to make decisions in an unrelated domain (e.g., employment) undermines fairness.

The prohibition thus reflects a commitment to ensuring that AI systems respect core European values of justice and non-discrimination. However, as stated in the corresponding recital, it should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.

The county-level city of Rongcheng (China) provides a real-world example of a social credit system that has sparked much debate in the media. In Rongcheng, residents receive a baseline credit score of 1,000, adjusted based on their actions: positive activities, like winning national competitions, can increase the score, while negative actions, such as spreading “harmful information” online, decrease it. China initially proposed the plan to construct a Social Credit System in 2007, with the aim of restoring market order through the financial creditworthiness of businesses and individuals, but the system has since broadened to encompass many aspects of daily life. Rongcheng’s social scoring system, however, is a local implementation that applies only within the city; it is analyzed in this article.
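
As a rough sketch of these mechanics (a baseline of 1,000 points adjusted per recorded action), consider the toy example below; the event names and point values are invented for illustration and do not reflect Rongcheng’s actual catalogue:

```python
# Toy illustration of baseline-plus-adjustments scoring. The events and
# point values below are hypothetical, not the city's actual rules.
BASELINE = 1000

ADJUSTMENTS = {
    "won_national_competition": +30,
    "volunteer_work": +5,
    "spread_harmful_information": -50,
    "traffic_violation": -5,
}

def credit_score(events: list[str]) -> int:
    """Start from the baseline and apply one adjustment per recorded event."""
    return BASELINE + sum(ADJUSTMENTS.get(event, 0) for event in events)

print(credit_score(["won_national_competition"]))    # 1030
print(credit_score(["spread_harmful_information"]))  # 950
```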

The existence of these systems has raised alarms internationally. Activists and media outlets worry about their potential misuse, the privacy violations they entail, and their chilling effect on free expression, especially if they were scaled to larger or national contexts (see, e.g., here). Meanwhile, China has announced a new social credit law, and a balanced view of its implications is offered here.

The case of SyRI (Systeem Risico Indicatie) in the Netherlands highlights another controversial use of AI systems. SyRI was an automated system adopted by the Dutch government to detect welfare fraud by cross-referencing data from various administrative bodies. While it was not strictly a social scoring system, it shares enough similarities with such systems to be worth discussing here: like social scoring, SyRI aggregated data from multiple sources to evaluate individuals, creating risk profiles that flagged those deemed more likely to commit welfare fraud (a schematic sketch of this pattern follows below). Proponents argued that the system enhanced efficiency and helped target the misuse of public funds. However, critics raised concerns about privacy violations, a lack of transparency, and the risk of discrimination. The system faced significant pushback, ultimately leading to its discontinuation in 2020 after a court ruling deemed it a violation of privacy rights (see an analysis of the case here).

These examples illustrate the delicate balance between technological utility and ethical considerations, reinforcing the importance of regulations like the EU AI Act. Nevertheless, as we will also explore in later posts, the Act of course remains open to improvement.
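
Since SyRI’s actual indicators and risk model were never disclosed (that opacity was itself a central criticism), the sketch below illustrates only the general pattern of cross-referencing administrative records into a risk flag; all data fields, thresholds, and rules are hypothetical:

```python
# Hypothetical sketch of cross-referencing records from several
# administrative sources into a risk flag. SyRI's real indicators and
# model were never made public; nothing here reflects them.
tax_records = {
    "person_1": {"declared_income": 12_000},
    "person_2": {"declared_income": 40_000},
}
benefit_records = {
    "person_1": {"receives_benefits": True},
    "person_2": {"receives_benefits": True},
}
housing_records = {
    "person_1": {"registered_residents": 6},
    "person_2": {"registered_residents": 2},
}

def risk_flag(person_id: str) -> bool:
    """Join one person's data across sources and apply an invented rule."""
    income = tax_records[person_id]["declared_income"]
    on_benefits = benefit_records[person_id]["receives_benefits"]
    residents = housing_records[person_id]["registered_residents"]
    # Invented heuristic: benefits + low income + crowded address => flag.
    return on_benefits and income < 15_000 and residents > 4

flagged = [p for p in tax_records if risk_flag(p)]
print(flagged)  # ['person_1'] -- a profile flagged for investigation
```

Note how the flag depends on data generated for entirely different purposes (taxation, housing registration), which is precisely the kind of cross-context reuse that critics objected to.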

P.S. I cannot update the blog very frequently, so please be patient.
