One Platform, Many Tools
Programming AI is a complicated process. Huge amounts of data need to be collected, reviewed, and evaluated. The Fraunhofer Institute for Digital Medicine MEVIS is developing a platform that integrates all key steps and eases collaboration between programmers and clinicians.
Artificial intelligence (AI) is becoming increasingly important in medicine. Adaptive algorithms can now recognize organs in CT or MR image data with increasing accuracy and determine whether tumors are benign or malignant. This promises new possibilities for routine clinical practice. For example, diagnostic assistants can often streamline laborious routine tasks and simplify a hospital’s workflow. In addition, treatment-planning algorithms could provide clues about a patient’s tolerance for a certain drug.
However, developing such AI systems is challenging. The software must be trained with as much high-quality data as possible. If a program is to reliably recognize and accurately measure a specific region of the liver on MR images, it needs to be trained with a vast amount of image data in which clinicians have already outlined the liver. Even though several programming tools exist for the individual tasks involved, they are often poorly integrated. Fraunhofer MEVIS is therefore developing a collaborative AI platform that unifies all essential tools and lets the main players, programmers and physicians, work together. “Our platform should integrate everything into one system,” explains MEVIS computer scientist Hans Meine. “Everyone involved can log in to the platform and complete all of their tasks.”
Integrated quality assurance
It all begins with data handling. Nowadays, exporting data from one program and importing it into another is a tiresome, manual task. The new platform is intended to automate and, thus, simplify these processes. As with professional photo management software, it will collect and catalog data sets and present them clearly.
The data can be assessed, automatically where possible, according to their quality, and poor-quality images can be filtered out. Staff can attach comments to each data set, for example noting what is visible on a CT scan and where pathological changes may appear. This incorporates important medical knowledge, allowing the computer to iteratively refine its learned skills.
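The idea of automatic quality assessment can be illustrated with a small sketch. The function names, the noise threshold, and the data representation below are purely illustrative assumptions, not the platform’s actual interface; a real system would compute far richer quality metrics on DICOM images.

```python
def quality_score(scan):
    """Toy quality heuristic: fraction of slices that carry signal.

    `scan` is a list of slices, each a list of pixel intensities;
    a slice counts as informative if any pixel exceeds a noise floor.
    Both the representation and the threshold are illustrative only.
    """
    NOISE_FLOOR = 10
    informative = sum(1 for s in scan if any(p > NOISE_FLOOR for p in s))
    return informative / len(scan)

def filter_datasets(datasets, min_score=0.5):
    """Keep only data sets whose automatic quality score passes a threshold."""
    return {name: scan for name, scan in datasets.items()
            if quality_score(scan) >= min_score}

# Example: one usable scan, one that is essentially empty.
datasets = {"good": [[50, 60], [70, 80]], "poor": [[0, 1], [2, 3]]}
kept = filter_datasets(datasets)
```

Data sets that fall below the threshold never enter training; borderline cases could instead be flagged for the manual comments described above.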
“One example is segmentation, which is, for instance, when an algorithm attempts to detect specific areas of the lung automatically,” explains Bianca Lassen-Schmidt, one of Meine’s colleagues. “If a clinician notices that the segmentation was not sufficiently successful, he or she can improve it manually.” The corrected image is then fed back into the program to train and improve the algorithm for more accurate results. “Of course, we aim to ensure that the software doesn’t become worse,” says Lassen-Schmidt. “This can be avoided by verifying it regularly using a test data set.” Only after the algorithm passes this test will it adopt the learned changes, thereby fulfilling an important quality assurance requirement.
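The quality-assurance step described here, adopting a retrained model only after it passes a check on a fixed test data set, can be sketched as follows. The Dice overlap used as the metric and all function names are assumptions for illustration; the article does not specify which metric the platform uses.

```python
def dice(pred, truth):
    """Dice overlap between two binary masks given as lists of 0/1."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def accept_update(old_model, new_model, test_set):
    """Adopt the retrained model only if its mean Dice score on a
    fixed test set is at least as good as the current model's.
    Each test case is a pair (image, ground-truth mask); a model is
    any callable that maps an image to a predicted mask."""
    def mean_dice(model):
        return sum(dice(model(img), mask) for img, mask in test_set) / len(test_set)
    return mean_dice(new_model) >= mean_dice(old_model)

# Toy check: the "old" model predicts nothing, the "new" one is perfect.
test_set = [([1, 0, 1], [1, 0, 1])]
old = lambda img: [0] * len(img)
new = lambda img: list(img)
```

Only when `accept_update` returns `True` would the learned changes be rolled out, so a manual correction can never silently make the deployed algorithm worse.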
Training at multiple clinics
Another challenge: If an algorithm is trained with data from only one specific clinic, it might only function there, because different clinics use varying equipment and imaging protocols. Even for an identical clinical picture, data sets can vary subtly, and such variations confuse an AI more easily than a human reader. For an algorithm to work at multiple clinics, it is desirable to train it using data from as many hospitals as possible. This is complicated, however, by data privacy requirements, which often prohibit patient data from leaving the hospital.
“We are investigating how to train an algorithm using data from multiple clinics without requiring that data ever leaves the premises,” explains Bianca Lassen-Schmidt. The strategy involves training the algorithm in Clinic A and then continuing in Clinic B. The algorithm then returns to Clinic A to refine the process. Because the software only “carries” learned patterns and not patient data out of each clinic, data privacy requirements are respected.
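The training scheme described here, in which only the model travels between clinics while the patient data stays in place, can be sketched with a toy linear model. The model, the learning rate, and the number of rounds are illustrative assumptions; the point is only that `cyclic_training` passes learned parameters around, never the clinics’ data.

```python
def train_round(weights, clinic_data, lr=0.1):
    """One pass of a toy linear model y ≈ w*x over one clinic's data.
    Only the weight leaves this function; `clinic_data` stays put."""
    w = weights
    for x, y in clinic_data:
        w += lr * (y - w * x) * x  # gradient step for squared error
    return w

def cyclic_training(clinics, rounds=3, w=0.0):
    """Hand the model's parameters from clinic to clinic and back again.
    Patient data never moves; only the learned weight does."""
    for _ in range(rounds):
        for data in clinics:
            w = train_round(w, data)
    return w

# Two "clinics" whose local data both follow the same rule y = 2x.
clinic_a = [(1.0, 2.0)]
clinic_b = [(2.0, 4.0)]
w = cyclic_training([clinic_a, clinic_b])
```

After a few round trips, the shared weight approaches the pattern common to both clinics, even though neither clinic ever saw the other’s data.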
“We have already developed most of the individual components of our AI platform in-house at Fraunhofer MEVIS,” says Hans Meine. “We are now in the process of combining them into a complete package to test with other partners, for instance, to perform clinical studies.” In the long term, the collaborative platform might even help different clinics investigating similar topics to work more closely together. “For medical research,” believes Bianca Lassen-Schmidt, “this could make a great impact.”