RP18 - Extracting prognostic indicators of patient outcome from pre-clinical histopathology image data and additional clinical data

This project explores pre-clinical histopathology image data and its benefits for WisPerMed. It builds on previous work focusing on histopathological whole slide images (WSI) of melanoma skin cancer. We conducted a comprehensive systematic literature review highlighting the technical aspects of deep learning across various applications [1], and we explored a new approach to explaining model decisions using global concepts instead of pixel-wise heatmaps [2]. Training and testing were done on two public datasets, the Histopathology Non-Melanoma Skin Cancer Segmentation Dataset (“Queensland”) and The Cancer Genome Atlas (TCGA) Skin Cutaneous Melanoma (SKCM) cohort, while in parallel we started digitizing and identifying our own image data.

Building on these results, we will extend BRAF mutation prediction from the patch level to the WSI level by means of statistical aggregation, as proposed, for example, by Kim et al. [3].
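
To make the aggregation step concrete, the following minimal Python sketch combines hypothetical patch-level BRAF mutation probabilities into a single slide-level call. The aggregation rules shown (mean pooling and a positive-patch fraction) are generic examples under our own assumptions, not the exact scheme of Kim et al. [3].

    # Minimal sketch: aggregating per-patch BRAF mutation probabilities
    # (hypothetical output of a patch classifier) into one WSI-level call.
    import numpy as np

    def aggregate_to_slide(patch_probs, threshold=0.5):
        """Combine per-patch probabilities into slide-level statistics."""
        probs = np.asarray(patch_probs, dtype=float)
        mean_prob = probs.mean()                        # average pooling
        positive_fraction = (probs > threshold).mean()  # vote-style aggregation
        return {
            "mean_prob": float(mean_prob),
            "positive_fraction": float(positive_fraction),
            "slide_call": bool(mean_prob > threshold),
        }

    # Example: 1,000 patches from one slide
    rng = np.random.default_rng(0)
    print(aggregate_to_slide(rng.uniform(0.0, 1.0, size=1000)))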

With high-quality labeled image data available, follow-up projects will then focus on other target variables, such as automatically detecting the presence of tumor-infiltrating lymphocytes (TIL), since a dense TIL infiltrate is associated with a good prognosis. As a starting point, a dataset with fine-grained labels for lymphocyte infiltration scoring is available [4]. The mitotic rate is also of special interest, since it indicates whether there is an abnormally high rate of cell division associated with tumor activity; research has shown a correlation between mitotic rate and the odds of survival for patients with stage I melanoma. Finally, automatic assessment of tumor necrosis can serve as a significant prognostic indicator of patient outcome.
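
As a simple illustration of how patch-level TIL predictions could be turned into a slide-level score, the sketch below computes the fraction of tissue patches flagged as TIL-positive. The probability map, tissue mask, and scoring rule are hypothetical placeholders, not a validated clinical score and not the method of Abousamra et al. [4].

    # Illustrative sketch only: slide-level TIL-infiltration score from a
    # hypothetical patch-level TIL probability map and tissue mask.
    import numpy as np

    def til_infiltration_score(til_map, tissue_mask, threshold=0.5):
        """Fraction of tissue patches whose TIL probability exceeds `threshold`."""
        tissue = np.asarray(tissue_mask, dtype=bool)
        positive = np.asarray(til_map, dtype=float) > threshold
        return float(positive[tissue].mean()) if tissue.any() else 0.0

    # Example: a 50x50 grid of patch probabilities, all patches counted as tissue
    rng = np.random.default_rng(1)
    grid = rng.uniform(0.0, 1.0, size=(50, 50))
    print(til_infiltration_score(grid, np.ones_like(grid, dtype=bool)))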

A general approach across all applications is to explore the self-supervised learning paradigm: pre-training directly on our dataset to build a rich feature representation and then fine-tuning on the task at hand. Additionally, we will investigate to what extent image features learned by (self-supervised) models pre-trained on ImageNet transfer to our dataset. This approach has been shown to outperform supervised ImageNet pre-training owing to improved generalizability to the medical domain.
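
The sketch below illustrates this pre-train-then-fine-tune setup under stated assumptions: PyTorch/torchvision are available, an ImageNet-pre-trained ResNet-50 serves as the backbone (a self-supervised checkpoint could be loaded into the same backbone instead), and a dummy batch stands in for labeled WSI patches.

    # Minimal transfer-learning sketch (assumes PyTorch/torchvision): start
    # from an ImageNet-pre-trained backbone and fine-tune a new task head,
    # e.g., for BRAF mutation status. A self-supervised checkpoint could be
    # loaded into the same backbone via load_state_dict instead.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_finetune_model(num_classes=2):
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
        return model

    model = build_finetune_model()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of 224x224 RGB patches
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 2, (4,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()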


[1] Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med. 2023 May 29;163:107083. doi: 10.1016/j.compbiomed.2023.107083. Epub ahead of print. PMID: 37315382.

[2] Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology. Sensors (Basel). 2022 Jul 18;22(14):5346. doi: 10.3390/s22145346. PMID: 35891026; PMCID: PMC9319808.

[3] Kim RH, Nomikou S, Coudray N, Jour G, Dawood Z, Hong R, Esteva E, Sakellaropoulos T, Donnelly D, Moran U, Hatzimemos A, Weber JS, Razavian N, Aifantis I, Fenyo D, Snuderl M, Shapiro R, Berman RS, Osman I, Tsirigos A. Deep Learning and Pathomics Analyses Reveal Cell Nuclei as Important Features for Mutation Prediction of BRAF-Mutated Melanomas. J Invest Dermatol. 2022 Jun;142(6):1650-1658.e6. doi: 10.1016/j.jid.2021.09.034.

[4] Abousamra S, et al. Deep Learning-Based Mapping of Tumor Infiltrating Lymphocytes in Whole Slide Images of 23 Types of Cancer. Front Oncol. 2022;11:806603. doi: 10.3389/fonc.2021.806603.
