CT-based volumetric measures obtained through deep learning: Association with biomarkers of neurodegeneration published in Alzheimer’s & Dementia

Imaging-based volumetric measures. (Image by the Authors of the manuscript.)
CT-based volumetric measures obtained through deep learning: Association with biomarkers of neurodegeneration
Meera Srikrishna, Nicholas J. Ashton, Alexis Moscoso, Joana B. Pereira, Rolf A. Heckemann, Danielle van Westen, Giovanni Volpe, Joel Simrén, Anna Zettergren, Silke Kern, Lars-Olof Wahlund, Bibek Gyanwali, Saima Hilal, Joyce Chong Ruifen, Henrik Zetterberg, Kaj Blennow, Eric Westman, Christopher Chen, Ingmar Skoog, Michael Schöll
Alzheimer’s & Dementia 20, 629–640 (2024)
arXiv: 2401.06260
doi: 10.1002/alz.13445

INTRODUCTION
Cranial computed tomography (CT) is an affordable and widely available imaging modality that is used to assess structural abnormalities, but not to quantify neurodegeneration. Previously, we developed a deep learning–based model that produced accurate and robust cranial CT tissue classification.

MATERIALS AND METHODS
We analyzed 917 CT and 744 magnetic resonance (MR) scans from the Gothenburg H70 Birth Cohort, and 204 CT and 241 MR scans from participants of the Memory Clinic Cohort, Singapore. We tested associations between six CT-based volumetric measures (CTVMs) and existing clinical diagnoses, fluid and imaging biomarkers, and measures of cognition.

RESULTS
CTVMs differentiated cognitively healthy individuals from patients with dementia or prodromal dementia with high accuracy, comparable to that of MR-based measures. CTVMs were also significantly associated with measures of cognition and with biochemical markers of neurodegeneration.

DISCUSSION
These findings suggest that, after further validation, CT-based volumetric measures could serve as an informative first-line examination tool in neurodegenerative disease diagnostics.
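For readers who want a concrete picture of the two kinds of analysis summarized above, the sketch below shows, in Python, how a single volumetric measure can be scored for group discrimination (ROC AUC) and tested for association with a fluid biomarker (linear regression). All data, effect sizes, and variable names are simulated for illustration; this is not the authors' analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): how well does a CT-derived
# volumetric measure (CTVM) separate diagnostic groups, and how does it
# relate to a fluid biomarker? All values below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: gray-matter volume (mL) for controls vs. dementia,
# plus a paired neurodegeneration biomarker (e.g., a plasma marker, pg/mL).
gm_controls = rng.normal(620, 40, 100)
gm_dementia = rng.normal(560, 45, 60)

volumes = np.concatenate([gm_controls, gm_dementia])
labels = np.concatenate([np.zeros(100), np.ones(60)])  # 1 = dementia

# Group discrimination: AUC of the volume as a single predictor.
# Lower volume should indicate dementia, hence the sign flip.
auc = roc_auc_score(labels, -volumes)
print(f"AUC (volume vs. diagnosis): {auc:.2f}")

# Biomarker association: linear regression of biomarker on volume.
biomarker = 900 - 1.1 * volumes + rng.normal(0, 25, volumes.size)
res = stats.linregress(volumes, biomarker)
print(f"slope={res.slope:.2f}, r={res.rvalue:.2f}, p={res.pvalue:.1e}")
```

A real analysis would additionally adjust for covariates such as age, which a multiple-regression framework (e.g., statsmodels) supports; the simple fit above only illustrates the idea.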

Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in Three-Dimensional Brain CT published in Frontiers in Computational Neuroscience

CT is split into smaller patches. (Image by the Authors.)
Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in Three-Dimensional Brain CT
Meera Srikrishna, Rolf A. Heckemann, Joana B. Pereira, Giovanni Volpe, Anna Zettergren, Silke Kern, Eric Westman, Ingmar Skoog and Michael Schöll
Frontiers in Computational Neuroscience 15, 785244 (2022)
doi: 10.3389/fncom.2021.785244

Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For assessing brain structure and integrity, CT is non-invasive, cheaper, faster, and more widely available than MRI. However, the clinical application of CT is mostly limited to visual assessment of brain integrity and the exclusion of co-pathologies. We have previously developed two-dimensional (2D) deep learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification and to compare the performance of 2D- and 3D-based segmentation networks in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time to each other. Segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task, slice-wise-processed 2D U-Nets performed better than multi-task, patch-based 3D U-Nets in CT brain tissue classification. These findings support the use of 2D U-Nets to segment brain tissue in three-dimensional (3D) CT, which could broaden the application of CT for detecting brain abnormalities in clinical settings.
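To make the slice-wise 2D approach concrete, below is a minimal 2D U-Net sketch in PyTorch, together with a Dice score, a common spatial-similarity measure. The depth, channel counts, and the four tissue classes are invented for illustration; this is not the architecture or training setup used in the paper.

```python
# Minimal sketch of the slice-wise 2D approach (PyTorch). Depth, channel
# counts, and the four tissue classes are invented for illustration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet2D(nn.Module):
    def __init__(self, n_classes=4):  # e.g., background, GM, WM, CSF
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 skip + 32 upsampled channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def dice(pred, ref):
    # Spatial-overlap measure between two binary masks (1.0 = identical).
    inter = (pred & ref).sum().item()
    return 2.0 * inter / (pred.sum().item() + ref.sum().item())

# Slice-wise inference: each axial slice of an anisotropic volume is
# segmented independently, then the label maps are restacked.
volume = torch.randn(80, 1, 128, 128)   # 80 hypothetical CT slices
with torch.no_grad():
    labels = TinyUNet2D()(volume).argmax(dim=1)    # (80, 128, 128)
ref = torch.randint(0, 4, (80, 128, 128))          # hypothetical MR-derived reference
print(labels.shape, dice(labels == 1, ref == 1))   # Dice for tissue class 1
```

A patch-based 3D variant would replace Conv2d with Conv3d and feed sub-volumes instead of slices; the comparison in the paper found the slice-wise 2D route more accurate for anisotropic CT.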

Soft Matter Lab presentations at the SPIE Optics+Photonics Digital Forum

Seven members of the Soft Matter Lab (Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Laura Pérez-García, Daniel Midtvedt, Harshith Bachimanchi, Emiliano Gómez) were selected for oral and poster presentations at the SPIE Optics+Photonics Digital Forum, August 24-28, 2020.

The SPIE Digital Forum is a free, online-only event.
Registration for the Digital Forum includes access to all presentations and proceedings.

The Soft Matter Lab contributions are part of the SPIE Nanoscience + Engineering conferences, namely the conference on Emerging Topics in Artificial Intelligence 2020 and the conference on Optical Trapping and Optical Micromanipulation XVII.

The contributions are listed below, including presentations co-authored by Giovanni Volpe.

Note: presentation times are indicated in PDT (Pacific Daylight Time, GMT-7).

Emerging Topics in Artificial Intelligence 2020

Saga Helgadottir
Digital video microscopy with deep learning (Invited Paper)
26 August 2020, 10:30 AM
SPIE Link: here.

Aykut Argun
Calibration of force fields using recurrent neural networks
26 August 2020, 8:30 AM
SPIE Link: here.

Laura Pérez-García
Deep-learning enhanced light-sheet microscopy
25 August 2020, 9:10 AM
SPIE Link: here.

Daniel Midtvedt
Holographic characterization of subwavelength particles enhanced by deep learning
24 August 2020, 2:40 PM
SPIE Link: here.

Benjamin Midtvedt
DeepTrack: A comprehensive deep learning framework for digital microscopy
26 August 2020, 11:40 AM
SPIE Link: here.

Gorka Muñoz-Gil
The anomalous diffusion challenge: Single trajectory characterisation as a competition
26 August 2020, 12:00 PM
SPIE Link: here.

Meera Srikrishna
Brain tissue segmentation using U-Nets in cranial CT scans
25 August 2020, 2:00 PM
SPIE Link: here.

Juan S. Sierra
Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks
26 August 2020, 11:50 AM
SPIE Link: here.

Harshith Bachimanchi
Digital holographic microscopy driven by deep learning: A study on marine planktons (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Emiliano Gómez
BRAPH 2.0: Software for the analysis of brain connectivity with graph theory (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Optical Trapping and Optical Micromanipulation XVII

Laura Pérez-García
Reconstructing complex force fields with optical tweezers
24 August 2020, 5:00 PM
SPIE Link: here.

Alejandro V. Arzola
Direct visualization of the spin-orbit angular momentum conversion in optical trapping
25 August 2020, 10:40 AM
SPIE Link: here.

Isaac Lenton
Illuminating the complex behaviour of particles in optical traps with machine learning
26 August 2020, 9:10 AM
SPIE Link: here.

Fatemeh Kalantarifard
Optical trapping of microparticles and yeast cells at ultra-low intensity by intracavity nonlinear feedback forces
24 August 2020, 11:10 AM
SPIE Link: here.
