Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning on bioRxiv

Phase-contrast image before virtual staining. (Image by the Authors.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels
bioRxiv: https://doi.org/10.1101/2022.07.18.500422

Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, for dynamic studies, information about when cells initiate programmed cell death (apoptosis) is more relevant. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique (LodeSTAR). Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.

Neural Network Training with Highly Incomplete Datasets published in Machine Learning: Science and Technology

Working principles for training neural networks with highly incomplete datasets: vanilla (upper panel) vs GapNet (lower panel). (Image by Yu-Wei Chang.)
Neural Network Training with Highly Incomplete Datasets
Yu-Wei Chang, Laura Natali, Oveis Jamialahmadi, Stefano Romeo, Joana B. Pereira, Giovanni Volpe
Machine Learning: Science and Technology 3, 035001 (2022)
arXiv: 2107.00429
doi: 10.1088/2632-2153/ac7b69

Neural network training and validation rely on the availability of large high-quality datasets. However, in many cases only incomplete datasets are available, particularly in health care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Since the data to train the neural networks need to be complete, most studies discard the incomplete datapoints, which reduces the size of the training data, or impute the missing features, which can lead to artefacts. Alas, both approaches are inadequate when a large portion of the data is missing. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets. First, the dataset is split into subsets of samples containing all values for a certain cluster of features. Then, these subsets are used to train individual neural networks. Finally, this ensemble of neural networks is combined into a single neural network whose training is fine-tuned using all complete datapoints. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer’s disease pathology and of patients at risk of hospitalization due to Covid-19. By distilling the information available in incomplete datasets without having to reduce their size or to impute missing values, GapNet makes it possible to extract valuable information from a wide range of datasets, benefiting diverse fields from medicine to engineering.
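To illustrate the training strategy described above (split by feature clusters, train per-cluster models, then fine-tune a combined model on the complete rows), here is a minimal toy sketch. It uses tiny logistic regressions in place of the paper's neural networks, and the dataset, feature clusters, and all function names are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy incomplete dataset: 200 samples, 4 features, NaN marks missing values.
n = 200
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(float)
X[rng.random((n, 4)) < 0.3] = np.nan  # ~30% of entries missing

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, steps=500, lr=0.1):
    """Tiny logistic regression (stand-in for a neural network)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return sigmoid(Xb @ w)

# Step 1: split the features into clusters; for each cluster, keep only the
# samples with all of that cluster's features observed, and train a sub-model.
clusters = [[0, 1], [2, 3]]
subnets = []
for cols in clusters:
    mask = ~np.isnan(X[:, cols]).any(axis=1)
    subnets.append(train_logreg(X[mask][:, cols], y[mask]))

# Step 2: combine the ensemble into one model whose final stage is
# fine-tuned using only the fully complete datapoints.
complete = ~np.isnan(X).any(axis=1)
stacked = np.column_stack(
    [predict(w, X[complete][:, cols]) for w, cols in zip(subnets, clusters)]
)
head = train_logreg(stacked, y[complete])

acc = ((predict(head, stacked) > 0.5) == y[complete]).mean()
print(f"accuracy on complete rows: {acc:.2f}")
```

Note that every incomplete row still contributes to training some sub-model, which is the point of the approach: no datapoint is discarded and no value is imputed.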

Deep learning in light–matter interactions published in Nanophotonics

Artificial neurons can be combined in a dense neural network (DNN), where the input layer is connected to the output layer via a set of hidden layers. (Image by the Authors.)
Deep learning in light–matter interactions
Daniel Midtvedt, Vasilii Mylnikov, Alexander Stilgoe, Mikael Käll, Halina Rubinsztein-Dunlop and Giovanni Volpe
Nanophotonics, 11(14), 3189-3214 (2022)
doi: 10.1515/nanoph-2022-0197

The deep-learning revolution is providing enticing new opportunities to manipulate and harness light at all scales. By building models of light–matter interactions from large experimental or simulated datasets, deep learning has already improved the design of nanophotonic devices and the acquisition and analysis of experimental data, even in situations where the underlying theory is not sufficiently established or too complex to be of practical use. Beyond these early success stories, deep learning also poses several challenges. Most importantly, deep learning works as a black box, making it difficult to understand and interpret its results and reliability, especially when training on incomplete datasets or dealing with data generated by adversarial approaches. Here, after an overview of how deep learning is currently employed in photonics, we discuss the emerging opportunities and challenges, shining light on how deep learning advances photonics.

Label-free nanofluidic scattering microscopy of size and mass of single diffusing molecules and nanoparticles published in Nature Methods

Kymographs of DNA inside Channel II. (Image by the Authors.)
Label-free nanofluidic scattering microscopy of size and mass of single diffusing molecules and nanoparticles
Barbora Špačková, Henrik Klein Moberg, Joachim Fritzsche, Johan Tenghamn, Gustaf Sjösten, Hana Šípová-Jungová, David Albinsson, Quentin Lubart, Daniel van Leeuwen, Fredrik Westerlund, Daniel Midtvedt, Elin K. Esbjörner, Mikael Käll, Giovanni Volpe & Christoph Langhammer
Nature Methods 19, 751–758 (2022)
doi: 10.1038/s41592-022-01491-6

Label-free characterization of single biomolecules aims to complement fluorescence microscopy in situations where labeling compromises data interpretation, is technically challenging or even impossible. However, existing methods require the investigated species to bind to a surface to be visible, thereby leaving a large fraction of analytes undetected. Here, we present nanofluidic scattering microscopy (NSM), which overcomes these limitations by enabling label-free, real-time imaging of single biomolecules diffusing inside a nanofluidic channel. NSM facilitates accurate determination of molecular weight from the measured optical contrast and of the hydrodynamic radius from the measured diffusivity, from which information about the conformational state can be inferred. Furthermore, we demonstrate its applicability to the analysis of a complex biofluid, using conditioned cell culture medium containing extracellular vesicles as an example. We foresee the application of NSM to monitor conformational changes, aggregation and interactions of single biomolecules, and to analyze single-cell secretomes.
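The abstract states that NSM determines the hydrodynamic radius from the measured diffusivity. The standard route for that step (my assumption here, not a detail taken from the paper) is the Stokes-Einstein relation, R_h = k_B T / (6 pi eta D):

```python
import math

def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein relation: R_h = k_B T / (6 pi eta D).

    D   -- measured diffusion coefficient (m^2/s)
    T   -- temperature (K)
    eta -- dynamic viscosity of the medium (Pa*s); water at 25 C by default
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6 * math.pi * eta * D)

# Example: a measured diffusivity of 4.4e-11 m^2/s in water at 25 C
# corresponds to a hydrodynamic radius of a few nanometres.
R = hydrodynamic_radius(4.4e-11)
print(f"R_h = {R * 1e9:.1f} nm")
```

The example diffusivity is illustrative only; the molecular-weight determination from optical contrast is calibration-dependent and is not sketched here.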

Single-shot self-supervised particle tracking on arXiv

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised particle tracking
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
arXiv: 2202.13546

Particle tracking is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to track objects with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position, by propagating the signal in Fourier space, and to its polarizability, by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
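The translational part of the symmetry-based training signal can be illustrated with a toy example. Here an intensity-weighted centroid stands in for the neural network, and the image, shifts, and numbers are all invented; this is only a sketch of the self-supervision principle, not LodeSTAR itself.

```python
import numpy as np

# Idealized single-particle image (noise-free): a Gaussian spot at (30, 25).
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 30) ** 2 + (xx - 25) ** 2) / 18)

def detect(image):
    """Stand-in for the network: intensity-weighted centroid of the image."""
    w = image / image.sum()
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return np.array([(w * ys).sum(), (w * xs).sum()])

# Self-supervised consistency signal: translate the single training image by
# known offsets; after subtracting each offset, all predictions should agree,
# so their spread can serve as a training loss without any labels.
shifts = [(0, 0), (3, -2), (-4, 5), (6, 1)]
preds = np.array([detect(np.roll(img, s, axis=(0, 1))) - np.array(s) for s in shifts])
loss = preds.var(axis=0).sum()  # near zero when the detector is shift-equivariant
print(f"consistency loss: {loss:.2e}")
```

In the actual method a trainable network replaces the centroid, and its weights are optimized to minimize exactly this kind of symmetry-consistency loss.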

Directed Brain Connectivity Identifies Widespread Functional Network Abnormalities in Parkinson’s Disease published in Cerebral Cortex

Visual display of the nodes that show significant differences between controls and participants with PD in network measures using the anti-symmetric correlation method. (Image by the Authors.)
Directed Brain Connectivity Identifies Widespread Functional Network Abnormalities in Parkinson’s Disease
Mite Mijalkov, Giovanni Volpe, Joana B Pereira
Cerebral Cortex 32(3), 593–607 (2022)
doi: 10.1093/cercor/bhab237

Parkinson’s disease (PD) is a neurodegenerative disorder characterized by topological abnormalities in large-scale functional brain networks, which are commonly analyzed using undirected correlations in the activation signals between brain regions. This approach assumes simultaneous activation of brain regions, despite previous evidence showing that brain activation entails causality, with signals being typically generated in one region and then propagated to other ones. To address this limitation, here, we developed a new method to assess whole-brain directed functional connectivity in participants with PD and healthy controls using antisymmetric delayed correlations, which better capture this underlying causality. Our results show that whole-brain directed connectivity, computed on functional magnetic resonance imaging data, identifies widespread differences in the functional networks of PD participants compared with controls, in contrast to undirected methods. These differences are characterized by increased global efficiency, clustering, and transitivity combined with lower modularity. Moreover, directed connectivity patterns in the precuneus, thalamus, and cerebellum were associated with motor, executive, and memory deficits in PD participants. Altogether, these findings suggest that directional brain connectivity is more sensitive to functional network differences occurring in PD compared with standard methods, opening new opportunities for brain connectivity analysis and development of new markers to track PD progression.
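The core idea of an antisymmetric delayed correlation can be sketched in a few lines. This toy example (synthetic signals, invented function names, not the authors' pipeline) shows how the asymmetry between the two lag directions yields a directed measure:

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), for lag >= 1."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

def antisymmetric_corr(x, y, lag=1):
    """Antisymmetric part of the delayed correlation: positive values
    suggest signal flowing from x to y rather than from y to x."""
    return lagged_corr(x, y, lag) - lagged_corr(y, x, lag)

# Toy example: y is a noisy, one-step-delayed copy of x,
# so the x -> y direction should dominate.
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.1 * rng.normal(size=500)
print(antisymmetric_corr(x, y))  # positive: x drives y
```

An ordinary undirected correlation between x and y would be symmetric and could not distinguish which region drives which; the antisymmetric part makes that directionality explicit.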

Multiplex Connectome Changes across the Alzheimer’s Disease Spectrum Using Gray Matter and Amyloid Data published in Cerebral Cortex

Brain nodes. (Image taken from the article.)
Multiplex Connectome Changes across the Alzheimer’s Disease Spectrum Using Gray Matter and Amyloid Data
Anna Canal-Garcia, Emiliano Gómez-Ruiz, Mite Mijalkov, Yu-Wei Chang, Giovanni Volpe, Joana B Pereira, Alzheimer’s Disease Neuroimaging Initiative
Cerebral Cortex, bhab429 (2022)
doi: 10.1093/cercor/bhab429

The organization of the Alzheimer’s disease (AD) connectome has been studied with graph theory using single neuroimaging modalities such as positron emission tomography (PET) or structural magnetic resonance imaging (MRI). Although these modalities measure distinct pathological processes that occur in different stages in AD, there is evidence that they are not independent from each other. Therefore, to capture their interaction, in this study we integrated amyloid PET and gray matter MRI data into a multiplex connectome and assessed the changes across different AD stages. We included 135 cognitively normal (CN) individuals without amyloid-β pathology (Aβ−) in addition to 67 CN, 179 patients with mild cognitive impairment (MCI) and 132 patients with AD dementia who all had Aβ pathology (Aβ+) from the Alzheimer’s Disease Neuroimaging Initiative. We found widespread changes in the overlapping connectivity strength and the overlapping connections across Aβ-positive groups. Moreover, there was a reorganization of the multiplex communities in MCI Aβ+ patients and changes in multiplex brain hubs in both MCI Aβ+ and AD Aβ+ groups. These findings offer new insight into the interplay between amyloid-β pathology and brain atrophy over the course of AD that moves beyond traditional graph theory analyses based on single brain networks.
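A minimal sketch of the multiplex quantities mentioned above, using overlapping strength defined as a node's total edge weight summed across both layers. The matrices are random stand-ins for the amyloid-PET and gray-matter-MRI networks, and the variable names are mine; this is only an illustration of the construction, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_connectome(n):
    """Random symmetric weighted network as a stand-in for one modality."""
    a = np.abs(rng.normal(size=(n, n)))
    a = (a + a.T) / 2
    np.fill_diagonal(a, 0)
    return a

# Two single-modality layers over the same brain regions.
n = 6
pet_layer = random_connectome(n)
mri_layer = random_connectome(n)

# Overlapping connectivity strength of node i: total weight of its edges
# summed across both layers of the multiplex.
overlap_strength = (pet_layer + mri_layer).sum(axis=1)

# Candidate multiplex hubs: the nodes with the highest overlapping strength.
hubs = np.argsort(overlap_strength)[::-1][:2]
print(hubs)
```

The point of the multiplex construction is that a node can rank as a hub even if it is only moderately connected in each individual layer, which single-modality graph analyses would miss.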

Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT published in Frontiers in Computational Neuroscience

CT is split into smaller patches. (Image by the Authors.)
Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT
Meera Srikrishna, Rolf A. Heckemann, Joana B. Pereira, Giovanni Volpe, Anna Zettergren, Silke Kern, Eric Westman, Ingmar Skoog and Michael Schöll
Frontiers in Computational Neuroscience 15, 785244 (2022)
doi: 10.3389/fncom.2021.785244

Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For the assessment of brain structure and integrity, CT is a non-invasive, cheaper, faster, and more widely available modality than MRI. However, the clinical application of CT is mostly limited to the visual assessment of brain integrity and exclusion of copathologies. We have previously developed two-dimensional (2D) deep learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification. Furthermore, we aimed to compare the performance of 2D- and 3D-based segmentation networks on brain tissue classification in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time to each other. Segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task slice-wise processed 2D U-Nets performed better than multitask patch-based 3D U-Nets in CT brain tissue classification. These findings provide support for the use of 2D U-Nets to segment brain tissue in one-dimensional (1D) CT. This could increase the application of CT to detect brain abnormalities in clinical settings.
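The figure caption notes that the CT volume is split into smaller patches before being fed to the 3D networks. A common way to do this (a generic sketch, not the study's preprocessing code; patch size and the non-overlapping layout are my assumptions) is a reshape-and-transpose over the volume axes:

```python
import numpy as np

def extract_patches(volume, size):
    """Split a 3D volume into non-overlapping cubic patches of edge `size`
    (each volume dimension is assumed divisible by `size` for simplicity)."""
    d, h, w = (s // size for s in volume.shape)
    return (volume.reshape(d, size, h, size, w, size)
                  .transpose(0, 2, 4, 1, 3, 5)
                  .reshape(-1, size, size, size))

# Tiny example volume: a 4x4x4 array split into eight 2x2x2 patches.
vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
patches = extract_patches(vol, 2)
print(patches.shape)  # (8, 2, 2, 2)
```

Each patch is then segmented independently and the predictions are stitched back into the full volume; overlapping patches with averaged predictions are a common variant not shown here.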

Raman Tweezers for Tire and Road Wear Micro- and Nanoparticles Analysis published in Environmental Science: Nano

Optical beam focused into the liquid: the tire particles are pushed away from the laser focus.

Raman Tweezers for Tire and Road Wear Micro- and Nanoparticles Analysis
Pietro Giuseppe Gucciardi, Raymond Gillibert, Alessandro Magazzù, Agnese Callegari, David Bronte Ciriza, Antonino Foti, Maria Grazia Donato, Onofrio M. Maragò, Giovanni Volpe, Marc Lamy de La Chapelle & Fabienne Lagarde
Environmental Science: Nano 9, 145 – 161 (2022)
ChemRxiv: https://doi.org/10.33774/chemrxiv-2021-h59n1
doi: 10.1039/D1EN00553G

Tire and Road Wear Particles (TRWP) are non-exhaust particulate matter generated by road transport means during the mechanical abrasion of tires, brakes and roads. TRWP accumulate on the roadsides and are transported into the aquatic ecosystem during stormwater runoffs. Due to their size (sub-millimetric) and rubber content (elastomers), TRWP are considered microplastics (MPs). While the amount of the MPs polluting the water ecosystem with sizes from ~ 5 μm to more than 100 μm is known, the fraction of smaller particles is unknown due to the technological gap in the detection and analysis of < 5 μm MPs. Here we show that Raman Tweezers, a combination of optical tweezers and Raman spectroscopy, can be used to trap and chemically analyze individual TRWPs in a liquid environment, down to the sub-micrometric scale. Using tire particles mechanically ground from aged car tires in water solutions, we show that it is possible to optically trap individual sub-micron particles, in a so-called 2D trapping configuration, and acquire their Raman spectrum in a few tens of seconds. The analysis is then extended to samples collected from a brake test platform, where we highlight the presence of sub-micrometric agglomerates of rubber and brake debris, thanks to the presence of additional spectral features other than carbon. Our results show the potential of Raman Tweezers in environmental pollution analysis and highlight the formation of nanosized TRWP during wear.

Featured in:
University of Gothenburg > News and Events: New technology enables the detection of microplastics from road wear
Phys.org > News > Nanotechnology: New technology enables the detection of microplastics from road wear
Nonsologreen > Green: Raman tweezers in the war on the nanoplastics polluting rivers and seas

Objective comparison of methods to decode anomalous diffusion published in Nature Communications

An illustration of anomalous diffusion. (Image by Gorka Muñoz-Gil.)
Objective comparison of methods to decode anomalous diffusion
Gorka Muñoz-Gil, Giovanni Volpe, Miguel Angel Garcia-March, Erez Aghion, Aykut Argun, Chang Beom Hong, Tom Bland, Stefano Bo, J. Alberto Conejero, Nicolás Firbas, Òscar Garibo i Orts, Alessia Gentili, Zihan Huang, Jae-Hyung Jeon, Hélène Kabbech, Yeongjin Kim, Patrycja Kowalek, Diego Krapf, Hanna Loch-Olszewska, Michael A. Lomholt, Jean-Baptiste Masson, Philipp G. Meyer, Seongyu Park, Borja Requena, Ihor Smal, Taegeun Song, Janusz Szwabiński, Samudrajit Thapa, Hippolyte Verdier, Giorgio Volpe, Arthur Widera, Maciej Lewenstein, Ralf Metzler, and Carlo Manzo
Nat. Commun. 12, Article number: 6253 (2021)
doi: 10.1038/s41467-021-26320-w
arXiv: 2105.06766

Deviations from Brownian motion leading to anomalous diffusion are found in transport dynamics from quantum physics to life sciences. The characterization of anomalous diffusion from the measurement of an individual trajectory is a challenging task, which traditionally relies on calculating the trajectory mean squared displacement. However, this approach breaks down for cases of practical interest, e.g., short or noisy trajectories, heterogeneous behaviour, or non-ergodic processes. Recently, several new approaches have been proposed, mostly building on the ongoing machine-learning revolution. To perform an objective comparison of methods, we gathered the community and organized an open competition, the Anomalous Diffusion challenge (AnDi). Participating teams applied their algorithms to a commonly-defined dataset including diverse conditions. Although no single method performed best across all scenarios, machine-learning-based approaches achieved superior performance for all tasks. The discussion of the challenge results provides practical advice for users and a benchmark for developers.
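The traditional baseline mentioned in the abstract, estimating the anomalous exponent from the mean squared displacement scaling MSD(lag) ~ lag^alpha, can be sketched briefly. This toy example (simulated trajectory, my choice of lag range) is the classical approach that the challenge benchmarks against, not one of the competing methods:

```python
import numpy as np

rng = np.random.default_rng(3)

def msd(traj, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag."""
    return np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

# Simulate ordinary 1D Brownian motion, for which alpha = 1 is expected.
traj = np.cumsum(rng.normal(size=10000))

# The anomalous exponent alpha is the slope of MSD vs lag in log-log scale.
lags = np.arange(1, 21)
alpha, _ = np.polyfit(np.log(lags), np.log(msd(traj, 20)), 1)
print(f"estimated alpha = {alpha:.2f}")
```

For long, clean trajectories such as this one the fit recovers alpha well; the challenge focuses precisely on the regimes (short, noisy, heterogeneous, or non-ergodic trajectories) where this estimator breaks down.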