In the previous three posts we have considered some of the issues posed by using AI in scientific research. This paper by Lisa Messeri and Molly J. Crockett, published in Nature, and this interview with the authors in Scientific American add to and extend the arguments about AI's problematic status in research. The authors examine four ‘visions of AI’ that are currently being proposed and explored but are not yet fully implemented: Surrogate, Oracle, Quant, and Arbiter. The Surrogate replaces human subjects, the Oracle synthesizes existing research to produce outputs such as reviews or new hypotheses, the Quant processes large amounts of data, and the Arbiter evaluates research.
The Nature paper describes how implementing these visions of AI may actually make science less innovative, less diverse, and more vulnerable to errors, ushering in a phase of scientific enquiry in which more content is produced but less understanding is gained. At the end of their Scientific American interview, Messeri and Crockett note that scientific training places a strong emphasis on removing not only biases but also personal experiences and opinions from the scientific method. Autonomous, “self-driving” AI labs, which some researchers advocate, seem to realize this ideal. However, the authors note, it is becoming increasingly apparent that having scientists with diverse thoughts, experiences, and training is crucial for producing robust, innovative, and creative knowledge. This diversity shouldn’t be lost. To keep the quality and vitality of scientific knowledge production high, it’s important to ensure that humans remain a key part of the process.
