AI, trust and science #1

AI has shown promise in numerous scientific fields and projects, from astronomy and astrophysics to biochemistry, and from ecology to drug design. Indeed, AI systems can analyze vast amounts of data with complex relationships in a reasonable time, discovering patterns, recognizing features, and making predictions.

At first, AI might be seen as just another scientific tool, akin to a telescope, that can safely be used whenever it proves to be reliable. Yet, as suggested in this paper by Inkeri Koskinen, this view of the role of AI in science deserves a closer look. The concept of necessary trust, which Koskinen deems pivotal in collaborative scientific research, comes into play here. As science grows more complex and specialized, individual researchers cannot independently justify knowledge claims. Instead, they rely on a collective approach, where trust in the expertise and integrity of colleagues is indispensable.

This necessary trust isn’t about blind faith; it’s a rational, controlled trust cultivated through mechanisms like critical discussions, peer review, and training. These measures are designed to minimize the need for blind trust, enabling retrospective verification of work and correction of errors. Despite these controls, a certain level of trust remains essential, particularly as individual researchers often lack the expertise to fully assess each other’s specialized contributions.

Different philosophical accounts vary in how they interpret trust, ranging from ‘thick’ to ‘thin’ conceptions. ‘Thick’ accounts emphasize the moral and affective aspects of trust, suggesting that it is only appropriate towards agents capable of moral responsibility. In these views, trust involves depending on the goodwill of others and is deeply affected by moral considerations like integrity and conscientiousness. ‘Thin’ accounts, on the other hand, view trust more as a form of reliance, without strongly emphasizing the moral and affective elements, and can extend to non-agents like machines.

In the context of scientific collaboration, according to Koskinen, the necessary trust is of the ‘thick’ variety. This means scientists need to trust each other beyond simple reliance, expecting moral responsibility in their colleagues’ work. This trust is crucial because scientists often depend on each other’s work in ways that are not fully transparent to them. They must trust that their colleagues will not engage in fraudulent or careless work, that they will adhere to collectively accepted procedures, and that they will preserve objectivity.

Trust, here, is morally rich but rationally grounded; it involves managing the risks inherent in knowledge acquisition. It is a recognition of the full agency and moral responsibility of fellow scientists.

This understanding of trust presents a problem when applied to AI systems. Since necessary trust can only exist among agents accountable for their actions, it cannot extend to AI systems. Initially, this might not seem problematic, as AI could be regarded as a sort of instrument that can safely be used whenever it proves reliable, as noted above. However, Koskinen's perspective challenges this notion. In my next post, I plan to delve more thoroughly into this topic.
