AI Collaboration Toolkit

We empower multi-center medical research collaborations by providing an AI toolkit for distributed data curation, federated training, validation, and application of AI models.

One Platform, Many Tools

Various clinical and technical researchers and developers perform different tasks in a dynamic collaboration with the goal of distilling raw medical data into robust AI models.
© Fraunhofer MEVIS
Fraunhofer MEVIS provides an integrated toolkit for worldwide collaborative development of AI solutions.

Programming AI is a complicated process. Huge amounts of data need to be collected, reviewed, and evaluated. Our research group on “Collaborative AI in Healthcare” is developing a platform that integrates all key steps and eases collaboration between programmers and clinicians.


Artificial intelligence (AI) is becoming increasingly important in medicine. Adaptive algorithms can now recognize organs in CT or MR image data with increasing accuracy and determine whether tumors are benign or malignant. This promises new possibilities for clinical routine. For example, diagnostic assistants can often streamline laborious routine tasks and simplify a hospital’s workflow. In addition, treatment planning algorithms could generate clues about a patient’s tolerance for a certain drug.

However, developing such AI systems is challenging. The software must be trained with as much high-quality data as possible. If a program is to reliably recognize and accurately measure a specific region of the liver on MR images, it needs to be trained with a vast amount of image data in which clinicians have already outlined the liver. Even though several programming tools exist to accomplish the tasks involved, they are often not well integrated. Fraunhofer MEVIS is therefore developing a collaborative AI platform that unifies all essential tools and allows the main players - programmers and physicians - to work together. “Our platform should integrate everything into one system,” explains MEVIS computer scientist Hans Meine. “Everyone involved can log in to the platform and complete all of their tasks.”


Integrated quality assurance

It all begins with data handling. Nowadays, exporting data from one program and importing it into another is a tiresome, manual task. The new platform is intended to automate and, thus, simplify these processes. As with professional photo management software, it will collect and catalog data sets and present them clearly.

The data can be assessed (automatically when possible) according to their quality, and images with poor quality can be filtered out. Staff can add comments to each data set, such as what is visible on a CT scan and how pathological changes could become noticeable. Doing so incorporates important medical knowledge, allowing the computer to iteratively refine its learned skills.
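To make this concrete, the following sketch illustrates how such curation records might look in code, with an automatic quality score and free-text comments per data set; the class and field names are purely illustrative and not the platform's actual data model.

```python
# Illustrative sketch only: hypothetical curation records with an automatic
# quality score and free-text reviewer comments attached to each data set.
from dataclasses import dataclass, field

@dataclass
class CurationRecord:
    series_uid: str                  # identifier of the image series
    quality_score: float             # e.g. 0.0 (unusable) .. 1.0 (excellent)
    comments: list[str] = field(default_factory=list)

def filter_by_quality(records, threshold=0.5):
    """Keep only data sets whose automatic quality score passes the threshold."""
    return [r for r in records if r.quality_score >= threshold]

# Example usage
records = [
    CurationRecord("1.2.840.1", 0.9, ["liver fully covered"]),
    CurationRecord("1.2.840.2", 0.3, ["severe motion artifacts"]),
]
usable = filter_by_quality(records)   # only the first record remains
```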

“One example is segmentation, which is, for instance, when an algorithm attempts to detect specific areas of the lung automatically,” explains Bianca Lassen-Schmidt, one of Meine’s colleagues. “If a clinician notices that the segmentation was not sufficiently successful, he or she can improve it manually.” The corrected image is then fed back into the program to train and improve the algorithm for more accurate results. “Of course, we aim to ensure that the software doesn’t become worse,” says Lassen-Schmidt. “This can be avoided by verifying it regularly using a test data set.” Only after the algorithm passes this test will it adopt the learned changes, thereby fulfilling an important quality assurance requirement.
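The quality gate described here can be summarized in a few lines of Python; the training and evaluation functions below are placeholders, not the platform's actual routines.

```python
# Illustrative sketch of the quality gate described above: a retrained model is
# adopted only if it does not perform worse on a fixed test data set.
def update_model(current_model, corrected_cases, test_set,
                 train_with_corrections, evaluate):
    """train_with_corrections and evaluate are placeholders for the real
    training and validation routines (e.g. Dice score on the test set)."""
    candidate = train_with_corrections(current_model, corrected_cases)
    if evaluate(candidate, test_set) >= evaluate(current_model, test_set):
        return candidate    # improved (or equal) on the test set: adopt it
    return current_model    # otherwise keep the previously validated model
```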


Training at multiple clinics

Another challenge: If an algorithm is trained with data from only one specific clinic, it might only function there, because different clinics use varying equipment and imaging protocols. Even for an identical clinical impression, data sets can vary subtly. This can be more confusing for AI than for humans. For an algorithm to work at multiple clinics, it is desirable to train it using data from as many hospitals as possible. This is complicated, however, by data privacy requirements, which often prohibit patient data from leaving the hospital.

“We are investigating how to train an algorithm using data from multiple clinics without requiring that data ever leaves the premises,” explains Bianca Lassen-Schmidt. The strategy involves training the algorithm in Clinic A and then continuing in Clinic B. The algorithm then returns to Clinic A to refine the process. Because the software only “carries” learned patterns and not patient data out of each clinic, data privacy requirements are respected.
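The following sketch illustrates this sequential scheme, in which only the model parameters travel between sites; the per-clinic training interface is hypothetical, and production federated-learning setups additionally require secure communication and coordination.

```python
# Illustrative sketch of cyclic, site-by-site training: only the model weights
# leave each clinic, never the patient data.
def train_across_clinics(model, clinics, rounds=3):
    """clinics: objects with a .train_locally(model) method that trains on data
    stored on the clinic's own premises and returns the updated model."""
    for _ in range(rounds):
        for clinic in clinics:                     # Clinic A, Clinic B, back to A ...
            model = clinic.train_locally(model)    # data never leaves the site
    return model
```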

“We have already developed most of the individual components of our AI platform in-house at Fraunhofer MEVIS,” says Hans Meine. “We are now in the process of combining them into a complete package to test with other partners, for instance, to perform clinical studies.” In the long term, the collaborative platform might even help different clinics investigating similar topics to work more closely together. “For medical research,” believes Bianca Lassen-Schmidt, “this could make a great impact.”

SATORI: A Customizable Platform for Reader Studies

Screenshot of a browser showing the SATORI application with various radiological images arranged next to a list of relevant clinical information and structures found in these images.
© Fraunhofer MEVIS
SATORI screenshot from a myocarditis study, showing features common to all studies. Depicted image data is courtesy of University Hospital Mannheim.

A core component of our AI collaboration toolkit is SATORI, our web frontend for curating medical data. It supports both image and non-image data, with particularly strong support for radiological images. Depending on the needs of a given reader study, any number of "hangings" can be defined; each hanging automatically arranges all relevant information in a suitable layout so that readers can review the data and enrich it with additional information on the subject, the respective time point, a particular image, or a structure in an image. SATORI also includes our advanced interactive contouring tool, which supports snapping and interpolation to accelerate reading.
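To illustrate the idea of a hanging, the following hypothetical configuration sketch shows how a layout and the annotation fields for subject, time point, image, and structure might be declared; the field names are invented for this example and do not reflect SATORI's actual configuration format.

```python
# Hypothetical illustration of a "hanging" definition: which images are shown
# side by side and which annotation fields the reader fills in at each level.
# The real SATORI configuration format may differ.
myocarditis_hanging = {
    "layout": [["T2_map", "LGE"], ["cine_4ch", "report_form"]],  # 2x2 viewports
    "annotations": {
        "subject":   ["clinical_history"],
        "timepoint": ["contrast_agent_given"],
        "image":     ["image_quality"],
        "structure": ["myocardial_segment", "lesion_present"],
    },
}
```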

SATORI with white matter lesions in MRI and a custom lesion tracking extension showing how lesions split and merge over time
© Fraunhofer MEVIS
SATORI screenshot showing an example of a white matter hyperintensities reading study with custom lesion tracking (here an example with five time points). Depicted image data is courtesy of Prof. Lukas, St. Josef Hospital Bochum, Institute of Neuroradiology.

SATORI can be highly customized through extensions that can be quickly developed with our rapid prototyping platform. Although it runs in the browser, the underlying client-server technology allows us to integrate any of our existing algorithms for automatic segmentation, registration, and even advanced visualization and custom analyses for various medical domains. In this collaborative scenario, the benefits of a browser-based solution are substantial.

Lung image analysis toolkit

A SATORI instance configured for lung images, showing a CT with analysis results and a 3D rendering of the lung
© Fraunhofer MEVIS
Lung image analysis with SATORI. Depicted image data is from the Cancer Imaging Archive repository: https://wiki.cancerimagingarchive.net/display/Public/COVID-19.

This advanced SATORI configuration for lung image analysis automatically segments the lungs, lung lobes, bronchi, and blood vessels, as well as findings that are typical for COVID-19. Importing a chest CT image automatically triggers the AI-based segmentation. Users can add further pathologies and findings with various interactive tools and export the quantitative results alongside screenshots.
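As an example of the kind of quantitative result such a configuration can export, the following sketch derives lobe volumes from a lung-lobe label mask; the label values and voxel-spacing convention are assumptions, not the toolkit's actual export code.

```python
# Illustrative sketch: derive lobe volumes (in ml) from a lung-lobe label mask.
# Assumes an integer label image in which 1..5 encode the five lobes and the
# voxel spacing is given in millimetres.
import numpy as np

def lobe_volumes_ml(label_mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)):
    voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> ml
    return {lobe: int(np.sum(label_mask == lobe)) * voxel_volume_ml
            for lobe in range(1, 6)}
```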

Fraunhofer interview with Bianca Lassen-Schmidt at DMEA 2022 on the AI Collaboration Toolkit, SATORI, and the RACOON project. For more information on the collaborative RACOON project, in which all university radiology departments in Germany are partners, see https://racoon.network/.

Improved clinical decision support through Deep Learning

Architecture diagram of a U-net, named after the U-shaped arrangement of its neural network layers.
© Fraunhofer MEVIS
Diagram depicting a variant of the so-called U-net architecture, named after the U-shaped arrangement of its layers. Vertically separated layers operate at different scales, which allows these models to integrate both fine-grained detail and more global information.

Deep Learning denotes a range of modern methods to automate cognitive tasks. We use this technology to support doctors with fast, quantitative analyses, reducing their workload. Besides actual diagnostic tools, we also target improved workflow support through more helpful, "intelligent" user interfaces that automatically present the most relevant information at a glance.
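The diagram above shows a member of the U-net family. The following minimal PyTorch sketch, which is generic and not the exact model used in our projects, illustrates the characteristic encoder-decoder structure with skip connections between scales.

```python
# Minimal, generic U-net sketch in PyTorch (illustrative only). Downsampling
# captures global context; skip connections re-inject fine-grained detail
# during upsampling.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)      # 32 (skip) + 32 (upsampled) channels in
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)      # 16 (skip) + 16 (upsampled) channels in
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # finest scale
        e2 = self.enc2(self.pool(e1))        # coarser scale
        b = self.bottleneck(self.pool(e2))   # most global context
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-pixel class scores

# Example: a single-channel 128x128 image yields per-pixel logits of shape (1, 2, 128, 128).
logits = TinyUNet()(torch.zeros(1, 1, 128, 128))
```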


Radiomics extracts valuable diagnostic information from images

CT image, table with clinical parameters and heatmap showing their correlation.
© Fraunhofer MEVIS
Radiomics discovers relationships between image features and clinical data. The “heatmap” groups patients with similar features and represents them in areas of the same color.

Radiomics is a second data-driven approach: it demands large amounts of well-curated data and benefits from the tools described above. As in genomics or proteomics, the goal is to provide information that helps identify subgroups of patients so that therapies with promising outcomes can be assigned more precisely. Radiomics does this by extracting non-obvious features from radiological images.
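As a small illustration of what such features can look like, the following sketch computes a handful of first-order intensity features inside a segmented region of interest; real radiomics pipelines extract hundreds of standardized shape, intensity, and texture features.

```python
# Illustrative sketch of a few first-order radiomics features computed from the
# voxels inside a segmented region of interest (e.g. a lesion mask).
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    roi = image[mask > 0].astype(float)          # intensities inside the lesion
    counts, _ = np.histogram(roi, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]                                  # ignore empty histogram bins
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-8),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram entropy
    }
```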
