Multi-cohort and longitudinal Bayesian clustering study of stage and subtype in Alzheimer’s disease published in Nature Communications

Comparison of cluster-specific covariance matrices with node strength. (Image by the Authors.)
Multi-cohort and longitudinal Bayesian clustering study of stage and subtype in Alzheimer’s disease
Konstantinos Poulakis, Joana B. Pereira, J.-Sebastian Muehlboeck, Lars-Olof Wahlund, Örjan Smedby, Giovanni Volpe, Colin L. Masters, David Ames, Yoshiki Niimi, Takeshi Iwatsubo, Daniel Ferreira, Eric Westman, Japanese Alzheimer’s Disease Neuroimaging Initiative & Australian Imaging, Biomarkers and Lifestyle study
Nature Communications 13, 4566 (2022)
doi: 10.1038/s41467-022-32202-6

Characterizing Alzheimer’s disease (AD) heterogeneity is important for understanding the underlying pathophysiological mechanisms of the disease. However, AD atrophy subtypes may reflect different disease stages or biologically distinct subtypes. Here we use longitudinal magnetic resonance imaging data (891 participants with AD dementia, 305 healthy control participants) from four international cohorts, together with longitudinal clustering, to estimate differential atrophy trajectories from the age of clinical disease onset. Our findings (in amyloid-β positive AD patients) show five distinct longitudinal patterns of atrophy with different demographic and cognitive characteristics. Some previously reported atrophy subtypes may reflect disease stages rather than distinct subtypes. The heterogeneity in atrophy rates and cognitive decline within the five longitudinal atrophy patterns may express a complex combination of protective/risk factors and concomitant non-AD pathologies. By alternating between cross-sectional and longitudinal views of AD subtypes, these analyses may allow a better understanding of disease heterogeneity.
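The paper's Bayesian longitudinal clustering is beyond a short snippet, but the core idea — summarizing each subject's atrophy trajectory from disease onset and grouping similar trajectories — can be sketched in Python. Everything below is illustrative: synthetic data, a least-squares slope per subject instead of the paper's Bayesian mixture model, and a plain two-cluster k-means instead of the model-selected five patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: 60 subjects, 4 visits each, two latent
# atrophy-trajectory groups (fast vs slow decline from disease onset).
n_subjects, n_visits = 60, 4
years = np.arange(n_visits, dtype=float)            # years since onset
true_group = rng.integers(0, 2, n_subjects)         # hidden subtype
slopes = np.where(true_group == 0, -0.02, -0.08)    # atrophy rate per year
volumes = 1.0 + slopes[:, None] * years + rng.normal(0, 0.005, (n_subjects, n_visits))

# Step 1: summarize each subject's trajectory by its fitted linear slope.
# (The paper clusters full trajectories with a Bayesian model; a
# per-subject least-squares slope is a crude stand-in for that idea.)
X = np.vstack([years, np.ones_like(years)]).T       # design matrix
fitted_slopes = np.linalg.lstsq(X, volumes.T, rcond=None)[0][0]

# Step 2: cluster the slopes with plain 1D k-means (k = 2 here,
# not the model-selected k = 5 of the study).
centers = np.array([fitted_slopes.min(), fitted_slopes.max()])
for _ in range(20):
    labels = np.argmin(np.abs(fitted_slopes[:, None] - centers[None, :]), axis=1)
    centers = np.array([fitted_slopes[labels == k].mean() for k in (0, 1)])

print(np.sort(centers))  # recovered atrophy-rate cluster centers
```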

Unraveling Parkinson’s disease heterogeneity using subtypes based on multimodal data published in Parkinsonism and Related Disorders

Detail of the brain from the group-comparison analysis. (Image by the Authors.)
Unraveling Parkinson’s disease heterogeneity using subtypes based on multimodal data
Franziska Albrecht, Konstantinos Poulakis, Malin Freidle, Hanna Johansson, Urban Ekman, Giovanni Volpe, Eric Westman, Joana B. Pereira, Erika Franzén
Parkinsonism and Related Disorders 102, 19–29 (2022)
doi: 10.1016/j.parkreldis.2022.07.014


Parkinson’s disease (PD) is a clinically and neuroanatomically heterogeneous neurodegenerative disease characterized by different subtypes. To date, no studies have used multimodal data combining clinical, motor, cognitive and neuroimaging assessments to identify these subtypes, which may provide complementary, clinically relevant information. To address this limitation, we subtyped participants with mild-moderate PD based on a rich, multimodal dataset of clinical, cognitive, motor, and neuroimaging variables.


Cross-sectional data from 95 PD participants from our randomized EXPANd (EXercise in PArkinson’s disease and Neuroplasticity) controlled trial were included. Participants were subtyped using clinical, motor, and cognitive assessments as well as structural and resting-state MRI data. Subtyping was done by random forest clustering. We extracted information about the subtypes by inspecting their neuroimaging profiles and descriptive statistics.
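Random forest clustering rests on a proximity measure: two samples are similar if they often land in the same leaf across the forest's trees, and clustering is then done on one minus that proximity. A minimal numpy sketch of this idea, with a hand-made leaf-assignment matrix standing in for a fitted unsupervised random forest (in practice the assignments would come from the fitted forest, e.g. scikit-learn's `forest.apply(X)`):

```python
import numpy as np

# Toy leaf-assignment matrix: rows = samples, columns = trees; entry
# (i, t) is the leaf index that sample i falls into in tree t. The
# values are hand-made so that samples 0-2 and 3-5 form two groups.
leaves = np.array([
    [0, 1, 0, 2],
    [0, 1, 0, 2],
    [0, 1, 1, 2],
    [3, 4, 5, 6],
    [3, 4, 5, 6],
    [3, 5, 5, 6],
])

# Random-forest proximity: fraction of trees in which two samples
# share a leaf; 1 - proximity is the dissimilarity used for clustering.
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
dissim = 1.0 - prox

# Minimal single-linkage grouping: connect samples whose dissimilarity
# is below a threshold and read off the connected components.
adj = dissim < 0.5
labels = -np.ones(len(leaves), dtype=int)
cluster = 0
for i in range(len(leaves)):
    if labels[i] == -1:
        stack = [i]
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                stack.extend(np.flatnonzero(adj[j] & (labels == -1)))
        cluster += 1

print(labels)  # two connected components: [0 0 0 1 1 1]
```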


Our multimodal subtyping analysis yielded three PD subtypes: a motor-cognitive subtype characterized by widespread alterations in brain structure and function as well as impairment in motor and cognitive abilities; a cognitive dominant subtype mainly impaired in cognitive function that showed frontoparietal structural and functional changes; and a motor dominant subtype impaired in motor variables without any brain alterations. Motor variables were most important for the subtyping, followed by gray matter volume in the right medial postcentral gyrus.


Three distinct PD subtypes were identified in our multimodal dataset. The most important features to subtype PD participants were motor variables in addition to structural MRI in the sensorimotor region. These findings have the potential to improve our understanding of PD heterogeneity, which in turn can lead to personalized interventions and rehabilitation.

Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning on bioRxiv

Phase-contrast image before virtual staining. (Image by the Authors.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels

Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible with this approach. Moreover, for dynamic studies, information about when cells initiate programmed cell death (apoptosis) is more relevant. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique LodeSTAR. Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Furthermore, it can be used to extract data from dense cell samples, where manual counting is unfeasible.

Neural Network Training with Highly Incomplete Datasets published in Machine Learning: Science and Technology

Working principles for training neural networks with highly incomplete dataset: vanilla (upper panel) vs GapNet (lower panel) (Image by Yu-Wei Chang.)
Neural Network Training with Highly Incomplete Datasets
Yu-Wei Chang, Laura Natali, Oveis Jamialahmadi, Stefano Romeo, Joana B. Pereira, Giovanni Volpe
Machine Learning: Science and Technology 3, 035001 (2022)
arXiv: 2107.00429
doi: 10.1088/2632-2153/ac7b69

Neural network training and validation rely on the availability of large high-quality datasets. However, in many cases only incomplete datasets are available, particularly in health care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Since the data to train the neural networks need to be complete, most studies discard the incomplete datapoints, which reduces the size of the training data, or impute the missing features, which can lead to artefacts. Alas, both approaches are inadequate when a large portion of the data is missing. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets. First, the dataset is split into subsets of samples containing all values for a certain cluster of features. Then, these subsets are used to train individual neural networks. Finally, this ensemble of neural networks is combined into a single neural network whose training is fine-tuned using all complete datapoints. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer’s disease pathology and of patients at risk of hospitalization due to Covid-19. By distilling the information available in incomplete datasets without having to reduce their size or to impute missing values, GapNet will make it possible to extract valuable information from a wide range of datasets, benefiting diverse fields from medicine to engineering.
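The three training steps described in the abstract can be sketched in Python. This is a minimal illustration, not the published implementation: tiny logistic regressions stand in for the neural networks, and the feature clusters, observed-row indices, and synthetic data are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lr=0.5, steps=300):
    """Tiny logistic-regression 'network' trained by gradient descent."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

# Synthetic incomplete dataset: 200 samples, two feature clusters
# (think imaging features vs blood markers); part of the cohort is
# missing one cluster or the other.
n = 200
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(float)
clusters = [[0, 1], [2, 3]]                          # feature clusters
observed = [np.arange(0, 150), np.arange(50, 200)]   # rows seen per cluster

# Steps 1 + 2: train one model per feature cluster on the samples
# that have that cluster fully observed.
branch_w = [train_logreg(X[rows][:, cols], y[rows])
            for cols, rows in zip(clusters, observed)]

# Step 3: fuse the branch outputs with a head model, fine-tuned on
# the complete datapoints only (rows 50-149 here).
complete = np.arange(50, 150)
Z = np.column_stack([predict(w, X[complete][:, cols])
                     for w, cols in zip(branch_w, clusters)])
head_w = train_logreg(Z, y[complete])

acc = ((predict(head_w, Z) > 0.5) == y[complete]).mean()
print(f"accuracy on complete rows: {acc:.2f}")
```

The point of the design is that rows with only one observed feature cluster still contribute: they train the branch for the cluster they do have, instead of being discarded or imputed.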

Deep learning in light–matter interactions published in Nanophotonics

Artificial neurons can be combined in a dense neural network (DNN), where the input layer is connected to the output layer via a set of hidden layers. (Image by the Authors.)
Deep learning in light–matter interactions
Daniel Midtvedt, Vasilii Mylnikov, Alexander Stilgoe, Mikael Käll, Halina Rubinsztein-Dunlop and Giovanni Volpe
Nanophotonics 11, 3189–3214 (2022)
doi: 10.1515/nanoph-2022-0197

The deep-learning revolution is providing enticing new opportunities to manipulate and harness light at all scales. By building models of light–matter interactions from large experimental or simulated datasets, deep learning has already improved the design of nanophotonic devices and the acquisition and analysis of experimental data, even in situations where the underlying theory is not sufficiently established or too complex to be of practical use. Beyond these early success stories, deep learning also poses several challenges. Most importantly, deep learning works as a black box, making it difficult to understand and interpret its results and reliability, especially when training on incomplete datasets or dealing with data generated by adversarial approaches. Here, after an overview of how deep learning is currently employed in photonics, we discuss the emerging opportunities and challenges, shining light on how deep learning advances photonics.

Label-free nanofluidic scattering microscopy of size and mass of single diffusing molecules and nanoparticles published in Nature Methods

Kymographs of DNA inside Channel II. (Image by the Authors.)
Label-free nanofluidic scattering microscopy of size and mass of single diffusing molecules and nanoparticles
Barbora Špačková, Henrik Klein Moberg, Joachim Fritzsche, Johan Tenghamn, Gustaf Sjösten, Hana Šípová-Jungová, David Albinsson, Quentin Lubart, Daniel van Leeuwen, Fredrik Westerlund, Daniel Midtvedt, Elin K. Esbjörner, Mikael Käll, Giovanni Volpe & Christoph Langhammer
Nature Methods 19, 751–758 (2022)
doi: 10.1038/s41592-022-01491-6

Label-free characterization of single biomolecules aims to complement fluorescence microscopy in situations where labeling compromises data interpretation, is technically challenging or even impossible. However, existing methods require the investigated species to bind to a surface to be visible, thereby leaving a large fraction of analytes undetected. Here, we present nanofluidic scattering microscopy (NSM), which overcomes these limitations by enabling label-free, real-time imaging of single biomolecules diffusing inside a nanofluidic channel. NSM facilitates accurate determination of molecular weight from the measured optical contrast and of the hydrodynamic radius from the measured diffusivity, from which information about the conformational state can be inferred. Furthermore, we demonstrate its applicability to the analysis of a complex biofluid, using conditioned cell culture medium containing extracellular vesicles as an example. We foresee the application of NSM to monitor conformational changes, aggregation and interactions of single biomolecules, and to analyze single-cell secretomes.

Single-shot self-supervised particle tracking on arXiv

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors.)
Single-shot self-supervised particle tracking
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
arXiv: 2202.13546

Particle tracking is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to track objects with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position by propagating the signal in Fourier space and its polarizability by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
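The symmetry argument can be made concrete in a few lines of Python. Here an intensity-weighted centroid stands in for LodeSTAR's neural network; the point is only that translating the input must translate the prediction by the same amount, which is the consistency requirement the method turns into a label-free training loss.

```python
import numpy as np

def detect(image):
    """Stand-in 'detector': intensity-weighted centroid (row, col).
    LodeSTAR's neural network plays this role; the centroid is used
    here only to illustrate the symmetry constraint."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return np.array([(rows * image).sum(), (cols * image).sum()]) / total

# A single synthetic particle image: Gaussian blob at (12.0, 20.0).
rr, cc = np.indices((32, 32))
image = np.exp(-((rr - 12.0) ** 2 + (cc - 20.0) ** 2) / 8.0)

# Translation equivariance: shifting the image by (dy, dx) must move
# the prediction by exactly (dy, dx). LodeSTAR enforces this (and the
# rotational analogue) on transformed copies of one unlabeled image,
# so no ground-truth positions are ever needed.
dy, dx = 3, -5
shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)

p0 = detect(image)
p1 = detect(shifted)
print(p1 - p0)  # expected ≈ [ 3. -5.]
```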

Tunable critical Casimir forces counteract Casimir-Lifshitz attraction on arXiv

Gold flake suspended over a functionalized gold-coated substrate. (Image by F. Schmidt.)
Tunable critical Casimir forces counteract Casimir-Lifshitz attraction
Falko Schmidt, Agnese Callegari, Abdallah Daddi-Moussa-Ider, Battulga Munkhbat, Ruggero Verre, Timur Shegai, Mikael Käll, Hartmut Löwen, Andrea Gambassi and Giovanni Volpe
arXiv: 2202.10926

Casimir forces in quantum electrodynamics emerge between microscopic metallic objects because of the confinement of the vacuum electromagnetic fluctuations occurring even at zero temperature. Their generalizations at finite temperature and in material media are referred to as Casimir-Lifshitz forces. These forces are typically attractive, leading to the widespread problem of stiction between the metallic parts of micro- and nanodevices. Recently, repulsive Casimir forces have been experimentally realized, but their reliance on specialized materials prevents their dynamic control and thus limits their further applicability. Here, we experimentally demonstrate that repulsive critical Casimir forces, which emerge in a critical binary liquid mixture upon approaching the critical temperature, can be used to actively control microscopic and nanoscopic objects with nanometer precision. We demonstrate this by using critical Casimir forces to prevent the stiction caused by the Casimir-Lifshitz forces. We study a microscopic gold flake above a flat gold-coated substrate immersed in a critical mixture. Far from the critical temperature, stiction occurs because of dominant Casimir-Lifshitz forces. Upon approaching the critical temperature, however, we observe the emergence of repulsive critical Casimir forces that are sufficiently strong to counteract stiction. This experimental demonstration can accelerate the development of micro- and nanodevices by preventing stiction as well as providing active control and precise tunability of the forces acting between their constituent parts.

Microplankton life histories revealed by holographic microscopy and deep learning on arXiv

Tracking of microplankton by holographic optical microscopy and deep learning. (Image by H. Bachimanchi.)
Microplankton life histories revealed by holographic microscopy and deep learning
Harshith Bachimanchi, Benjamin Midtvedt, Daniel Midtvedt, Erik Selander, and Giovanni Volpe
arXiv: 2202.09046

The marine microbial food web plays a central role in the global carbon cycle. Our mechanistic understanding of the ocean, however, is biased towards its larger constituents, while rates and biomass fluxes in the microbial food web are mainly inferred from indirect measurements and ensemble averages. Yet, resolution at the level of the individual microplankton is required to advance our understanding of the oceanic food web. Here, we demonstrate that, by combining holographic microscopy with deep learning, we can follow microplanktons throughout their lifespan, continuously measuring their three-dimensional position and dry mass. The deep-learning algorithms circumvent the computationally intensive processing of holographic data and allow rapid measurements over extended time periods. This permits us to reliably estimate growth rates, both in terms of dry mass increase and cell divisions, as well as to measure trophic interactions between species such as predation events. The individual resolution provides information about selectivity, individual feeding rates and handling times for individual microplanktons. This method is particularly useful to explore the flux of carbon through micro-zooplankton, the most important and least known group of primary consumers in the global oceans. We exemplify this by detailed descriptions of micro-zooplankton feeding events, cell divisions, and long-term monitoring of single cells from division to division.
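As a sketch of the growth-rate estimation that continuous dry-mass tracking enables: under exponential growth, the specific growth rate is the slope of log(dry mass) versus time. All numbers below are made up for illustration.

```python
import numpy as np

# Illustrative dry-mass time series for one tracked cell (pg), sampled
# hourly; exponential growth m(t) = m0 * exp(mu * t) is assumed.
t = np.arange(0.0, 10.0)                  # hours since last division
m0, mu_true = 80.0, 0.05                  # pg and per-hour rate (made up)
mass = m0 * np.exp(mu_true * t)

# The specific growth rate mu is the slope of log(mass) vs time.
mu, log_m0 = np.polyfit(t, np.log(mass), 1)
print(f"growth rate: {mu:.3f} per hour, doubling time: {np.log(2)/mu:.1f} h")
```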

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion on arXiv

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
arXiv: 2202.06355

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
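A minimal sketch of the first stage of such a pipeline: building the (deliberately redundant) candidate edge set between detections in consecutive frames, which the graph neural network then prunes into trajectories and annotates with dynamic properties. The detection coordinates and linking radius below are invented for illustration.

```python
import numpy as np

# Detections as (frame, row, col). In the geometric-deep-learning
# framework, detections become graph nodes, and candidate links between
# nearby detections in consecutive frames become edges; the attention-
# based network then classifies which edges are true trajectory links.
detections = np.array([
    [0, 10.0, 10.0],
    [0, 30.0, 30.0],
    [1, 11.0, 10.5],
    [1, 29.0, 31.0],
    [1, 12.0, 11.0],
])

radius = 5.0  # linking radius: larger values give more redundant edges
edges = []
for i, (f1, y1, x1) in enumerate(detections):
    for j, (f2, y2, x2) in enumerate(detections):
        if f2 == f1 + 1 and np.hypot(y2 - y1, x2 - x1) <= radius:
            edges.append((i, j))

print(edges)  # candidate links, to be pruned by the network
```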