AI, trust and science #3

In the two previous posts, we discussed the role of AI in scientific research, drawing on the work of the philosopher Inkeri Koskinen and the notion of necessary trust. We conclude the discussion in this post. The essential problem is whether AI software can be considered a scientific instrument like, say, the Hubble Space Telescope, or a tool like an ordinary computer algorithm that allows us to carry out calculations that would otherwise be impossible to complete in any reasonable time. Referring to the work of several authors, Koskinen concludes that this is not the case. Indeed, knowledge production with the Hubble telescope or with ordinary computer tools involves a system of humans and machines in which humans can take responsibility for the functioning of the whole and, contrary to what happens with AI, can monitor all the relevant processes.

On the other hand, Clark (2015) does not consider this monitoring ability essential, since not even our own mental processes are fully transparent to us, nor do we typically require them to be so. Why, then, should we demand that the workings of AI tools be fully transparent and monitorable? As Koskinen notes, though, even Clark observes that conscious awareness of our thought processes becomes relevant when we systematically transmit knowledge (and not simple facts) to others, for example when we write scientific papers or foster methods and practices that help students probe and test their beliefs and knowledge sources, thus deepening their understanding (Koskinen, 2023). Even more importantly, scrutinizing one’s thoughts is an essential scientific practice for supporting conclusions, since a good scientist is aware of the many biases and pitfalls that may affect human reasoning. Additionally, the justification of a claim is independent of the actual cognitive processes that led to it and must be public and scrutinizable. It is not clear how an opaque AI application can achieve all this.

Moreover, Clark notes that when we encounter a new tool or technology, we exercise extra care in learning to use it and in understanding its capabilities and limitations. Once complete familiarity with the tool or technology is achieved, attention can be lowered and replaced by what may be called ‘unreflective use’; this is compatible with the necessary trust view discussed by Koskinen. With AI applications, however, there may never be a point at which one fully understands the workings of the application at hand. Additionally, at a certain stage it may become possible to endorse the output of an ordinary scientific tool or piece of software automatically, a condition that Clark and Chalmers (1998) set for considering an object part of an individual’s cognitive system. Even this, however, may never be feasible with AI software.

In conclusion, as noted by Koskinen, the introduction of opaque AI applications to science significantly alters the role of machines in ways that we have yet to address.
