Presentation by J. Pineda at SPIE-ETAI, San Diego, 23 August 2023

Input graph structure including redundant edges. (Image by J. Pineda.)
MAGIK: Microscopic motion analysis through graph inductive knowledge
Jesús Pineda
Date: 23 August 2023
Time: 2:30 PM PDT

Characterizing dynamic processes in living systems provides essential information for advancing our understanding of life processes in health and disease and for developing new technologies and treatments. In the past two decades, optical microscopy has undergone significant developments, enabling us to study the motion of cells, organelles, and individual molecules with unprecedented detail at various scales in space and time. However, analyzing the dynamic processes that occur in complex and crowded environments remains a challenge. This work introduces MAGIK, a deep-learning framework for the analysis of biological system dynamics from time-lapse microscopy. MAGIK models the movement and interactions of particles through a directed graph, where nodes represent detections and edges connect spatiotemporally close nodes. The framework uses an attention-based graph neural network (GNN) to process the graph and modulate the strength of associations between its elements, enabling MAGIK to derive insights into the dynamics of the system. MAGIK provides a key enabling technology to estimate any dynamic aspect of the particles, from reconstructing their trajectories to inferring local and global dynamics. We demonstrate the flexibility and reliability of the framework by applying it to real and simulated data corresponding to a broad range of biological experiments.
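The graph construction described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the detection format, the distance radius, and the frame window are hypothetical choices made for the example.

```python
import numpy as np

def build_graph(detections, max_frame_gap=3, max_distance=20.0):
    """Connect detections that are close in space and time.

    detections: array of shape (N, 3) with columns (frame, x, y).
    Returns a list of directed edges (i, j) with frame_i < frame_j,
    i.e. edges always point forward in time.
    """
    edges = []
    frames = detections[:, 0]
    coords = detections[:, 1:]
    for i in range(len(detections)):
        for j in range(len(detections)):
            dt = frames[j] - frames[i]
            # Link only forward in time, within the allowed frame gap...
            if 0 < dt <= max_frame_gap:
                # ...and within the spatial search radius.
                if np.linalg.norm(coords[j] - coords[i]) <= max_distance:
                    edges.append((i, j))
    return edges

# Example: three detections across two frames.
dets = np.array([[0, 0.0, 0.0],
                 [1, 1.0, 0.5],
                 [1, 50.0, 50.0]])
print(build_graph(dets))  # [(0, 1)] -- the distant detection is not linked
```

The resulting edge set is deliberately redundant (as in the figure caption above); it is the GNN's job to weigh which candidate links correspond to true associations.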

Reference
Pineda, J., Midtvedt, B., Bachimanchi, H. et al. Geometric deep learning reveals the spatiotemporal features of microscopic motion. Nat Mach Intell 5, 71–82 (2023). doi: 10.1038/s42256-022-00595-0

Poster by J. Pineda at SPIE-ETAI, San Diego, 21 August 2023

The proposed method allows for robust detection, segmentation, and tracking of soft granular clusters. (Image by J. Pineda.)
Unveiling the complex dynamics of soft granular materials using deep learning
Jesús Pineda
Date: 21 August 2023
Time: 5:30 PM PDT

Soft granular materials, comprising closely packed grains held together by a thin layer of lubricating fluid, display intricate many-body dynamics resulting in complex flows and rheological behavior, including plasticity and viscoelasticity, memory effects, and avalanches. Despite their widespread presence in nature and industrial applications, the structural mechanics and microscale dynamics of soft granular clusters remain poorly understood, especially under strong confinement or when surrounded by free interfaces. This work aims to bridge the gap in understanding the internal dynamics of finite-sized soft granular media by introducing a deep learning approach to characterize the shapes and movements of deformable grains in the material. We demonstrate the reliability and versatility of the method by studying the dynamics of soft granular clusters that self-organize under external flow in various physically relevant scenarios.

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 20-24 August 2023

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 20-24 August 2023, with the presentations listed below.

Giovanni Volpe is also a co-author of the following presentations:

  • Jiawei Sun (KI): (Poster) Assessment of nonlinear changes in functional brain connectivity during aging using deep learning
    21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
  • Blanca Zufiria Gerbolés (KI): (Poster) Exploring age-related changes in anatomical brain connectivity using deep learning analysis in cognitively healthy individuals
    21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
  • Mite Mijalkov (KI): Uncovering vulnerable connections in the aging brain using reservoir computing
    22 August 2023 • 9:15 AM – 9:30 AM PDT | Conv. Ctr. Room 6C

J. Pineda was awarded the Young Investigator Poster Award at the XVII International Congress of the Spanish Biophysical Society, Castelldefels, 30 June 2023

Jesús Pineda receives the Young Investigator Poster Award. (Photo by S. Masò Orriols.)
Jesús Pineda was awarded the Young Investigator Poster Award on 30 June 2023 for his poster MAGIK: Microscopic motion analysis through graph inductive knowledge, presented at the XVII International Congress of the Spanish Biophysical Society in Castelldefels.

Here is the link to the poster.

Presentation by J. Pineda at the XVII International Congress of the Spanish Biophysical Society, Castelldefels, 30 June 2023

Input graph structure including redundant edges. (Image by J. Pineda.)
MAGIK: Microscopic motion analysis through graph inductive knowledge
Jesús Pineda

Characterizing dynamic processes in living systems provides essential information for advancing our understanding of life processes in health and disease and for developing new technologies and treatments. In the past two decades, optical microscopy has undergone significant developments, enabling us to study the motion of cells, organelles, and individual molecules with unprecedented detail at various scales in space and time. However, analyzing the dynamic processes that occur in complex and crowded environments remains a challenge. This work introduces MAGIK, a deep-learning framework for the analysis of biological system dynamics from time-lapse microscopy. MAGIK models the movement and interactions of particles through a directed graph, where nodes represent detections and edges connect spatiotemporally close nodes. The framework uses an attention-based graph neural network (GNN) to process the graph and modulate the strength of associations between its elements, enabling MAGIK to derive insights into the dynamics of the system. MAGIK provides a key enabling technology to estimate any dynamic aspect of the particles, from reconstructing their trajectories to inferring local and global dynamics. We demonstrate the flexibility and reliability of the framework by applying it to real and simulated data corresponding to a broad range of biological experiments.

Date: 30 June 2023
Time: 12:30
Event: XVII International Congress of the Spanish Biophysical Society

Presentation by J. Pineda at AI for Scientific Data Analysis, Gothenburg, 1 June 2023

Input graph structure including redundant edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal features of microscopic motion
Jesús Pineda

Characterizing dynamic processes in living systems provides essential information for advancing our understanding of life processes in health and disease and for developing new technologies and treatments. In the past two decades, optical microscopy has undergone significant developments, enabling us to study the motion of cells, organelles, and individual molecules with unprecedented detail at various scales in space and time. However, analyzing the dynamic processes that occur in complex and crowded environments remains a challenge. This work introduces MAGIK, a deep-learning framework for the analysis of biological system dynamics from time-lapse microscopy. MAGIK models the movement and interactions of particles through a directed graph, where nodes represent detections and edges connect spatiotemporally close nodes. The framework uses an attention-based graph neural network (GNN) to process the graph and modulate the strength of associations between its elements, enabling MAGIK to derive insights into the dynamics of the system. MAGIK provides a key enabling technology to estimate any dynamic aspect of the particles, from reconstructing their trajectories to inferring local and global dynamics. We demonstrate the flexibility and reliability of the framework by applying it to real and simulated data corresponding to a broad range of biological experiments.

Date: 1 June 2023
Time: 10:15
Place: MC2 Kollektorn
Event: AI for Scientific Data Analysis: Miniconference

Roadmap on Deep Learning for Microscopy on ArXiv

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Geometric deep learning reveals the spatiotemporal features of microscopic motion published in Nature Machine Intelligence

Input graph structure including redundant edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal features of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
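One of the tasks mentioned above, linking coordinates into trajectories, amounts to turning the network's per-edge association scores into a consistent set of links. A minimal sketch of such a post-processing step is shown below; the greedy one-to-one strategy, threshold value, and variable names are illustrative assumptions, not the paper's actual linking procedure.

```python
def link_trajectories(edges, scores, threshold=0.5):
    """Greedy one-to-one linking from edge association scores.

    edges: list of (i, j) candidate links (i precedes j in time);
    scores: matching confidence for each candidate edge.
    Returns accepted links; each detection is used at most once
    as a source and at most once as a destination.
    """
    # Consider the most confident candidate links first.
    order = sorted(range(len(edges)), key=lambda k: -scores[k])
    used_src, used_dst, links = set(), set(), []
    for k in order:
        if scores[k] < threshold:
            break  # remaining candidates are even less confident
        i, j = edges[k]
        if i not in used_src and j not in used_dst:
            links.append((i, j))
            used_src.add(i)
            used_dst.add(j)
    return links

# Detection 0 has two competing candidates; the stronger one wins.
print(link_trajectories([(0, 1), (0, 2), (3, 2)], [0.9, 0.8, 0.7]))
# [(0, 1), (3, 2)]
```

Chaining the accepted links across consecutive frames then yields the reconstructed trajectories.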

Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps published in Biomedical Optics Express

Example of final segmentation with the UNet-dm of the specular microscopy image of a severe case of cornea guttata. (Image by the Authors of the manuscript.)
Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps
Juan S. Sierra, Jesus Pineda, Daniela Rueda, Alejandro Tello, Angelica M. Prada, Virgilio Galvis, Giovanni Volpe, Maria S. Millan, Lenny A. Romero, Andres G. Marrugo
Biomedical Optics Express 14, 335-351 (2023)
doi: 10.1364/BOE.477495
arXiv: 2210.07102

Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs’ dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs’ dystrophy. We cast the segmentation problem as a regression task of the cell and gutta signed distance maps instead of a pixel-level classification task as typically done with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster in clinically relevant parameters. It also produces morphometric parameters that agree with the manually segmented ground-truth data, namely the average cell density difference of −41.9 cells/mm² (95% confidence interval (CI) [−306.2, 222.5]) and the average difference of mean cell area of 14.8 µm² (95% CI [−41.9, 71.5]). These results suggest a promising alternative for CE assessment.
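The regression target described above, a signed distance map, can be illustrated with a toy computation: each pixel gets its Euclidean distance to the nearest pixel of the opposite class (a common discrete proxy for boundary distance), positive inside a cell and negative outside. This is a brute-force sketch for small illustrative images, not the paper's pipeline, and the function name is hypothetical.

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance to the nearest pixel of the opposite class.

    mask: 2D boolean array (True inside a cell region).
    Positive inside, negative outside -- the kind of map the
    UNet is trained to regress instead of class labels.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.stack([ys[mask], xs[mask]], axis=1)
    outside = np.stack([ys[~mask], xs[~mask]], axis=1)
    sdm = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Distance is measured to the *other* class.
            other = outside if mask[y, x] else inside
            d = np.sqrt(((other - [y, x]) ** 2).sum(axis=1)).min()
            sdm[y, x] = d if mask[y, x] else -d
    return sdm

# A 3x3 "cell" centered in a 5x5 image.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
sdm = signed_distance_map(mask)
print(sdm[2, 2])  # 2.0: the center is two pixels from the background
```

Regressing such smooth maps, rather than hard labels, is what lets cell boundaries be recovered with simple post-processing (e.g. thresholding at zero).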

Single-shot self-supervised object detection in microscopy published in Nature Communications

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised object detection in microscopy
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Nature Communications 13, 7492 (2022)
arXiv: 2202.13546
doi: 10.1038/s41467-022-35004-y

Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
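The symmetry idea behind the abstract above can be made concrete with a toy consistency check: if the input image is translated, the predicted position must translate by the same amount. The sketch below uses an intensity-weighted centroid as a stand-in for the network's prediction; it illustrates the kind of self-supervised constraint LodeSTAR exploits, not the method's actual loss or architecture.

```python
import numpy as np

def detect(image):
    """Intensity-weighted centroid (row, col) -- a stand-in for a
    learned detector's predicted position."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    total = image.sum()
    return np.array([(ys * image).sum() / total,
                     (xs * image).sum() / total])

def equivariance_residual(image, shift=(2, 3)):
    """Translation-equivariance check: shifting the input by `shift`
    must shift the prediction by `shift`. A residual of zero means
    the detector is consistent with the symmetry; a trainable model
    can minimize such residuals without any labels."""
    shifted = np.roll(image, shift, axis=(0, 1))
    expected = detect(image) + np.array(shift)
    return np.abs(detect(shifted) - expected).sum()

# A single bright spot: the centroid detector is exactly equivariant.
img = np.zeros((10, 10))
img[4, 4] = 1.0
print(equivariance_residual(img))  # 0.0
```

Rotations and reflections give analogous constraints, which is how, per the abstract, a detector can be trained from a single unlabeled image.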