Professor Andrea Tannapfel, head of the Institute of Pathology; Professor Anke Reinacher-Schick, an oncologist at St. Josef Hospital of Ruhr-Universität Bochum; and Professor Klaus Gerwert, founding director of the Center for Protein Diagnostics (PRODI), collaborated with bioinformatics scientist Axel Mosig on the study. The team created an artificial neural network that can determine whether or not a tissue sample contains cancer, training it with thousands of microscopic tissue images, some of which contained tumors.
Tumor detection in tissue images is a task that an AI system can be taught. Until now, though, the basis on which such a system arrives at its decisions has remained unclear. The team at Ruhr-Universität Bochum's PRODI is working on a new approach to make AI decisions understandable and thus reliable. The method, developed under the supervision of Professor Axel Mosig, is detailed in an article published online in the journal Medical Image Analysis on August 24, 2022.
It is not clear which distinguishing features a network picks up from the training data; as Axel Mosig puts it, “Neural networks are initially black boxes.” Unlike human specialists, they cannot justify their decisions. But as bioinformatics scientist and study co-author David Schuhmacher says, “it’s important that the AI can explain itself and is, therefore, trustworthy.” This is especially true in medical applications.
Falsifiable hypotheses form the foundation of the explainable AI.
The explainable AI developed by the Bochum group is predicated on falsifiable, scientifically testable hypotheses, the only kind of statement the philosophy of science considers meaningful. If a hypothesis is false, it must be possible to refute it through an experiment. Artificial intelligence, by contrast, usually follows the principle of inductive reasoning: specific observations (the training data) are used to build a more general model, which is then used to judge new observations.
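As a loose, hypothetical illustration of this inductive principle (not code from the study, and using entirely synthetic data and made-up labels), a model can be fitted to labeled observations and then applied to observations it has never seen:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of inductive reasoning: specific observations (training data)
# are generalized into a model, which then judges new observations.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(200, 5))           # 200 observations, 5 features each
y_train = (X_train[:, 0] > 0).astype(int)     # synthetic "tumor" / "no tumor" labels

model = LogisticRegression().fit(X_train, y_train)   # induction: examples -> general model

X_new = rng.normal(size=(3, 5))               # previously unseen observations
print(model.predict(X_new))                   # the general model judges them
```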
The underlying difficulty of inductive inference was described by philosopher David Hume 250 years ago and is easily illustrated: no matter how many white swans we observe, we can never conclude from these data that all swans are white and that no black swans exist. Science therefore relies on deductive reasoning: a general hypothesis is stated first and then tested against observations. For instance, the hypothesis that all swans are white is refuted as soon as a single black swan is spotted.
The activation map shows where the tumor is detected.
Physicist Stephanie Schörner, who also contributed to the study, says, “At first glance, inductive AI and the deductive scientific method seem almost incompatible.” But the researchers found a way to reconcile the two. Their new neural network provides not only a classification of whether a tissue sample contains a tumor, but also an activation map of the microscopic tissue image.
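The paper specifies the actual architecture; purely as a hedged sketch of how one network can return both a classification and a spatial activation map, a class-activation-mapping-style model could look roughly like this (all layer sizes and names below are invented for illustration and are not the authors' design):

```python
import torch
import torch.nn as nn

class TumorClassifierWithActivationMap(nn.Module):
    """Hypothetical sketch: a CNN that returns a tumor/no-tumor score together
    with a spatial activation map, in the spirit of class activation mapping."""

    def __init__(self):
        super().__init__()
        # Small convolutional feature extractor (placeholder layers).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps the features to a single "tumor evidence" channel.
        self.evidence = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        feats = self.features(x)                 # (B, 32, H, W)
        activation_map = self.evidence(feats)    # (B, 1, H, W) spatial evidence
        # Global average pooling turns the map into a single classification logit.
        logit = activation_map.mean(dim=(2, 3))  # (B, 1)
        return logit, activation_map


# Usage: classify an image and inspect where the evidence comes from.
model = TumorClassifierWithActivationMap()
image = torch.randn(1, 3, 256, 256)              # stand-in for a microscopic tissue image
logit, activation_map = model(image)
prob_tumor = torch.sigmoid(logit)
```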
The activation map is built on a falsifiable hypothesis: the regions highlighted by the network correspond exactly to the tumor regions in the sample. This hypothesis can be tested with site-specific molecular techniques.
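One simple, hypothetical way to quantify such a comparison, assuming an independently annotated tumor mask is available for the same sample, is an overlap score between the thresholded activation map and that mask (an illustrative check only, not the validation protocol from the study):

```python
import torch

def dice_overlap(activation_map: torch.Tensor,
                 tumor_mask: torch.Tensor,
                 threshold: float = 0.5) -> float:
    """Hypothetical test of the falsifiable hypothesis: regions highlighted by
    the network should coincide with tumor regions confirmed by an independent
    reference. Low overlap would count as a refutation."""
    predicted = (torch.sigmoid(activation_map) > threshold).float()
    reference = (tumor_mask > 0.5).float()
    intersection = (predicted * reference).sum()
    union = predicted.sum() + reference.sum()
    return (2 * intersection / union.clamp(min=1e-8)).item()

# Example: compare the map from the sketch above with a reference mask obtained
# independently (e.g., by molecular annotation of the same sample):
# score = dice_overlap(activation_map, tumor_mask)
```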
“Thanks to the interdisciplinary structures at PRODI, we have the best conditions for incorporating the hypothesis-based approach into the future development of trustworthy biomarker AI, for example to distinguish between certain therapy-relevant tumor subtypes,” concludes Axel Mosig.