In the previous post we discussed the problem of using AI in scientific research, focusing on the work of the philosopher Inkeri Koskinen and the notion of ‘necessary trust’. We continue that discussion in this second post. Koskinen points out that, according to the literature, a scientist’s reliance on instruments whose functioning she does not completely understand is acceptable if there is someone else who understands how these instruments work, has diligently followed accepted procedures, takes responsibility for them in a way that is appropriate and justified from a scientific point of view, and can be trusted.
Under the necessary trust view, then, relying in scientific practice on instruments whose functioning is not understood in every detail is treated as a typical form of dependence on other scientists: agents who can be held accountable and trusted. However, the integration of AI applications into science challenges this picture.
Paul Humphreys, over a decade ago, identified a key issue with computer simulations: their essential epistemic opacity. This concept refers to the inherent inability of human agents to fully comprehend the intricate details and processes of these computational systems, owing to their complex nature and speed. Humphreys distinguishes between ordinary (accidental) and essential epistemic opacity.
In the case of accidental opacity, there is usually someone within the research group or community who understands the instrument’s workings, allowing others to trust their expertise in line with the necessary trust view. However, the use of essentially opaque ‘black box’ AI applications changes this dynamic: these applications are so complex that no human can fully understand or be accountable for them, which eliminates the possibility of rationally grounded trust in an accountable agent.
This problem is exacerbated when AI systems incorporate decisions that are not strictly related to the knowledge-production process, such as which answer or interpretation should be prioritized when multiple options are available. The necessary trust view holds that researchers should be able to trust the individuals responsible for such decisions; with AI, however, there may be no such accountable individual.
In some cases it may be possible to probe the inner workings of an AI tool to determine, to some extent, which factors influenced its final output. In my experience, however, the result of this probing often fails to provide a form of ‘reasoning’ that is transparent or understandable to humans, especially in terms of cause-and-effect relationships. It is more akin to probing an alien mind, one that arrives at a potentially correct answer, yet through inexplicable means.
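To give a concrete sense of the kind of probing described above, here is a minimal, purely illustrative sketch (not drawn from Koskinen or Humphreys) that assumes the Python library scikit-learn. It uses permutation importance to estimate how strongly each input feature influenced an opaque model’s predictions; the synthetic data and model choice are hypothetical stand-ins for a real scientific application.

```python
# Minimal sketch: probing an opaque model with permutation importance.
# Assumes scikit-learn; the data and model are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for some scientific measurement task.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Probe: shuffle each feature and measure how much accuracy drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Even when such a probe succeeds, it only ranks inputs by their influence on the output; it does not reveal why the model maps those inputs to its predictions, which is precisely the sense in which the ‘reasoning’ remains opaque.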
How, then, can we interpret the reliance on AI tools? This question will be tackled in the next and final post on Koskinen’s work.
