Presentation by J. Pineda at LoG Meetup Sweden, 12 February 2025

MIRO employs a recurrent graph neural network to refine SMLM point clouds by compressing clusters around their center, enhancing inter-cluster distinction and background separation for efficient clustering. (Image by J. Pineda.)
Relational Inductive Biases as a Key to Smarter Deep Learning Microscopy
Jesús Pineda
Learning on graphs and geometry meetup at Uppsala University
Date: 12 February 2025
Time: 11:15
Place: Lecture hall 4101, Ångströmlaboratoriet, Uppsala, Sweden

Geometric deep learning has revolutionized fields like social network analysis, molecular chemistry, and neuroscience, but its application to microscopy data analysis remains a significant challenge. The hurdles stem not only from the scarcity of high-quality data but also from the intrinsic complexity and variability of microscopy datasets. This presentation introduces two groundbreaking geometric deep-learning frameworks designed to overcome these barriers, advancing the integration of graph neural networks (GNNs) into microscopy and unlocking their full potential.

First, we present MAGIK, a cutting-edge framework for analyzing biological system dynamics through time-lapse microscopy. Leveraging a graph neural network augmented with attention-based mechanisms, MAGIK processes object features using geometric priors. This enables it to perform a range of tasks, from linking coordinates into trajectories to uncovering local and global dynamic properties with unprecedented precision. Remarkably, MAGIK excels under minimal data conditions, maintaining exceptional performance and robust generalization across diverse scenarios.

Next, we introduce MIRO, a novel algorithm powered by recurrent graph neural networks. MIRO pre-processes single-molecule localization microscopy (SMLM) datasets to enhance the efficiency of conventional clustering methods. Its ability to handle clusters of varying shapes and scales enables more accurate and consistent analyses across complex datasets. Furthermore, MIRO’s single- and few-shot learning capabilities allow it to generalize effortlessly across scenarios, making it an efficient, scalable, and versatile tool for microscopy data analysis.

Together, MAGIK and MIRO address critical limitations in microscopy data analysis, offering innovative solutions for multi-scale data analysis and advancing the boundaries of what is currently achievable with geometric deep learning in the field.
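As a toy illustration of the trajectory-linking task that MAGIK addresses, the sketch below links detections in consecutive frames with a greedy nearest-neighbour rule. This hand-crafted distance criterion is only a classical baseline: MAGIK instead learns the association scores with an attention-based graph neural network. All names and parameters here are hypothetical, not from the MAGIK codebase.

```python
import math

def link_trajectories(frames, max_dist=5.0):
    """Greedy frame-to-frame linking of 2D detections.

    `frames` is a list of lists of (x, y) tuples, one list per frame.
    Returns trajectories as lists of (frame_index, x, y). The fixed
    distance rule stands in for the learned association scores that
    MAGIK's attention-based GNN would provide.
    """
    trajectories = [[(0, x, y)] for (x, y) in frames[0]] if frames else []
    active = list(range(len(trajectories)))
    for t, detections in enumerate(frames[1:], start=1):
        unmatched = list(detections)
        next_active = []
        for traj_id in active:
            _, px, py = trajectories[traj_id][-1]
            # Pick the closest unmatched detection within max_dist.
            best, best_d = None, max_dist
            for d in unmatched:
                dist = math.hypot(d[0] - px, d[1] - py)
                if dist < best_d:
                    best, best_d = d, dist
            if best is not None:
                unmatched.remove(best)
                trajectories[traj_id].append((t, best[0], best[1]))
                next_active.append(traj_id)
        for (x, y) in unmatched:  # unmatched detections start new tracks
            trajectories.append([(t, x, y)])
            next_active.append(len(trajectories) - 1)
        active = next_active
    return trajectories
```

A learned linker improves on this baseline precisely where greedy matching fails: crossing trajectories, missed detections, and dense fields of similar objects.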

Reference

Pineda, Jesús, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo. Geometric deep learning reveals the spatiotemporal features of microscopic motion. Nat. Mach. Intell. 5, 71–82 (2023).

Pineda, Jesús, Sergi Masó-Orriols, Joan Bertran, Mattias Goksör, Giovanni Volpe, and Carlo Manzo. Spatial Clustering of Molecular Localizations with Graph Neural Networks. arXiv:2412.00173 (2024).

Poster by M. Granfors at the Learning on graphs and geometry meetup in Uppsala, 11 February 2025

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Global graph features unveiled by unsupervised geometric deep learning
Mirja Granfors, Jesús Pineda, Blanca Zufiria Gerbolés, Daniel Vereb, Joana B. Pereira, Carlo Manzo, and Giovanni Volpe
Learning on graphs and geometry meetup at Uppsala University
Date: 11 February 2025
Place: Uppsala University

Graphs are used to model complex relationships, such as interactions between particles or connections between brain regions. The structural complexity and variability of graphs pose challenges to their efficient analysis and classification. Here, we propose GAUDI (Graph Autoencoder Uncovering Descriptive Information), a graph autoencoder that addresses these challenges.

GAUDI’s encoder progressively reduces the size of the graph using multi-step hierarchical pooling, while its decoder incrementally increases the graph size until the original dimensions are restored, focusing on the node and edge features while preserving the graph structure through skip-connections. Training GAUDI to minimize the difference between the node and edge features of the input graph and those of the output graph compels it to capture the most critical parameters describing these features in the latent space, thereby enabling the extraction of essential parameters characterizing the graphs.

We demonstrate the performance of GAUDI across diverse graph data originating from complex systems, including the estimation of the parameters of Watts-Strogatz graphs, the classification of protein assembly structures from single-molecule localization microscopy data, the analysis of collective behaviors, and the study of correlations between brain connections and age. This approach offers a robust framework for efficiently analyzing and interpreting complex graph data, facilitating the extraction of meaningful patterns and insights across a wide range of applications.
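To make the multi-step hierarchical pooling in GAUDI's encoder concrete, the sketch below performs one coarsening step by greedily merging adjacent node pairs, so each pooling step roughly halves the graph. This greedy pairing is a simple hand-written stand-in for the learned pooling in GAUDI; the function and its data layout are hypothetical, not taken from the GAUDI implementation.

```python
def coarsen(adj):
    """One hierarchical-pooling step: greedily merge adjacent node pairs.

    `adj` is an adjacency dict {node: set(neighbours)}. Returns the
    coarsened adjacency and the mapping from old to new node ids.
    Repeating this step yields the shrinking graph hierarchy that an
    encoder like GAUDI's compresses into a latent vector.
    """
    mapping, merged = {}, set()
    new_id = 0
    for u in sorted(adj):
        if u in merged:
            continue
        # Pair u with its first still-unmerged neighbour, if any.
        partner = next((v for v in sorted(adj[u])
                        if v not in merged and v != u), None)
        mapping[u] = new_id
        merged.add(u)
        if partner is not None:
            mapping[partner] = new_id
            merged.add(partner)
        new_id += 1
    # Rebuild edges between the merged super-nodes.
    coarse = {i: set() for i in range(new_id)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if mapping[u] != mapping[v]:
                coarse[mapping[u]].add(mapping[v])
                coarse[mapping[v]].add(mapping[u])
    return coarse, mapping
```

In a full autoencoder the decoder would invert each step, with skip-connections carrying the pooling assignments so the original structure can be restored.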

Invited talk by J. Pineda at the CMCB Lab on 27 January 2025

MIRO employs a recurrent graph neural network to refine SMLM point clouds by compressing clusters around their center, enhancing inter-cluster distinction and background separation for efficient clustering. (Image by J. Pineda.)
Spatial clustering of molecular localizations with graph neural networks
Jesús Pineda
Date: 27 January 2025
Time: 10:00
Place: SciLifeLab Campus Solna, Sweden

Single-molecule localization microscopy (SMLM) generates point clouds corresponding to fluorophore localizations. Spatial cluster identification and analysis of these point clouds are crucial for extracting insights about molecular organization. However, this task becomes challenging in the presence of localization noise, high point density, or complex biological structures. Here, we introduce MIRO (Multimodal Integration through Relational Optimization), an algorithm that uses recurrent graph neural networks to transform the point clouds in order to improve clustering efficiency when applying conventional clustering techniques. We show that MIRO supports simultaneous processing of clusters of different shapes and at multiple scales, demonstrating improved performance across varied datasets. Our comprehensive evaluation demonstrates MIRO’s transformative potential for single-molecule localization applications, showcasing its capability to revolutionize cluster analysis and provide accurate, reliable details of molecular architecture. In addition, MIRO’s robust clustering capabilities hold promise for applications in various fields such as neuroscience, for the analysis of neural connectivity patterns, and environmental science, for studying spatial distributions of ecological data.
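The pipeline described above can be sketched in two stages: a transformation that contracts each cluster around its centre, followed by a conventional clusterer on the transformed points. In the toy code below the contraction is a hand-crafted k-nearest-neighbour averaging standing in for MIRO's learned recurrent-GNN displacement, and a simple single-linkage grouping stands in for a conventional method such as DBSCAN; all names and parameters are hypothetical.

```python
import math

def knn_compress(points, k=5, steps=3):
    """Pull each localization toward the centroid of its k nearest
    neighbours, repeated `steps` times. This hand-crafted contraction
    mimics the effect of MIRO's learned transformation: it tightens
    clusters so a radius-based clusterer separates them more easily."""
    pts = [tuple(p) for p in points]
    for _ in range(steps):
        new_pts = []
        for (x, y) in pts:
            nbrs = sorted(pts, key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)[:k]
            cx = sum(q[0] for q in nbrs) / len(nbrs)
            cy = sum(q[1] for q in nbrs) / len(nbrs)
            new_pts.append((cx, cy))
        pts = new_pts
    return pts

def radius_cluster(points, eps=0.5):
    """Single-linkage clustering: points within `eps` share a label."""
    labels = [-1] * len(points)
    cur = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], cur
        while stack:
            j = stack.pop()
            for m in range(len(points)):
                if labels[m] == -1 and math.dist(points[j], points[m]) <= eps:
                    labels[m] = cur
                    stack.append(m)
            # All points reachable through eps-chains get the same label.
        cur += 1
    return labels
```

The point of the learned version is that a single network handles clusters of different shapes and scales simultaneously, which no fixed contraction rule like the one above can do.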

Reference
Pineda, Jesús, Sergi Masó-Orriols, Joan Bertran, Mattias Goksör, Giovanni Volpe, and Carlo Manzo. Spatial Clustering of Molecular Localizations with Graph Neural Networks. arXiv:2412.00173 (2024).

Talk by Ivo Sbalzarini, 9 January 2025

Ivo Sbalzarini during his talk. (Photo by Y.-W. Chang.)
Content-adaptive deep learning for large-scale fluorescence microscopy imaging

Ivo Sbalzarini
Max Planck Institute of Molecular Cell Biology and Genetics
Center for Systems Biology Dresden
https://sbalzarini-lab.org/

Date: 9 January 2025
Time: 11:00
Place: Nexus

Invited Seminar by G. Volpe at FEMTO-ST, 26 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
FEMTO-ST’s Internal Seminar 2024
Date: 26 November 2024
Time: 15:00
Place: Besançon, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced DeepTrack 2.1, a software platform to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.

Invited Talk by G. Volpe at SPAOM, Toledo, Spain, 22 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
SPAOM 2024
Date: 22 November 2024
Time: 10:15-10:45
Place: Toledo, Spain

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced DeepTrack 2.1, a software platform to design, train, and validate deep-learning solutions for digital microscopy.

Harshith Bachimanchi is shortlisted as one of the RMS early-career award speakers at the RMS Annual General Meeting 2024, London, UK, on 2 October 2024

The three RMS Early Career Award speakers (l to r) Harshith Bachimanchi, Akaash Kumar and Liam Rooney. (Image by RMS.)
Harshith Bachimanchi is shortlisted as one of the RMS (Royal Microscopical Society) early-career award speakers at the RMS AGM 2024 (RMS Annual General Meeting 2024) held in London, UK, on 2 October 2024.

In this meeting, Harshith presented his work on leveraging deep learning as a powerful tool to enhance microscopic data analysis pipelines and study microorganisms in unprecedented detail. Taking holographic microscopy as an example, he demonstrated that combining holography with deep learning makes it possible to follow marine microorganisms throughout their lifespan, continuously measuring their three-dimensional positions and dry mass. He also presented recent results on using deep learning to transform microscopy images from one modality to another (e.g., from holography to bright-field and vice versa).

The articles related to his presentation can be found at the following links:
1. Microplankton life histories revealed by holographic microscopy and deep learning.
2. Deep-learning-powered data analysis in plankton ecology.

The annual Early Career Award, for which Harshith is shortlisted as one of the potential candidates, recognises the achievements of an outstanding early-career imaging scientist in their contribution to microscopy, image analysis, or cytometry.


Invited Talk by A. Ciarlo at Italy-Sweden bilateral workshop on smart sensor technologies and applications, 1 October 2024

Representation of DNA stretching experiment with the miniTweezer. (Image by A. Ciarlo)
miniTweezers2.0: smart optical tweezers for health and life sciences
Antonio Ciarlo
Italy-Sweden bilateral workshop on smart sensor technologies and applications
Date: 1 October 2024
Time: 14:40-15:05
Place: Meeting Room Kronan, Studenthuset, Linköping University, Campus Valla

Optical tweezers have become indispensable tools in various scientific fields such as biology, physics, chemistry, and materials science. Their wide range of applications has attracted the interest of scientists with limited expertise in optics and physics. Therefore, it is crucial to have a system that is accessible to non-experts. In this study, we present miniTweezers2.0, a highly versatile and user-friendly instrument enhanced by artificial intelligence. We demonstrate the capabilities of the system through three autonomous case study experiments. The first is DNA stretching, a fundamental experiment in single-molecule force spectroscopy. The second experiment focuses on stretching red blood cells, providing insight into their membrane stiffness. The final experiment examines the electrostatic interactions between microparticles in different environments. Our results highlight the potential of automated, versatile optical tweezers to advance our understanding of nanoscale and microscale systems by enabling high-throughput, unbiased measurements. The miniTweezers2.0 system successfully demonstrates the integration of artificial intelligence and automation to make optical tweezers more accessible and versatile, especially for health and life sciences. The adaptability of miniTweezers2.0 underscores its potential as a powerful tool for future scientific exploration across multiple disciplines.

Invited Talk by G. Volpe at Gothenburg Lise Meitner Award 2024 Symposium, 27 September 2024

(Image created by G. Volpe with the assistance of DALL·E 2)
What is a physicist to do in the age of AI?
Giovanni Volpe
Gothenburg Lise Meitner Award 2024 Symposium
Date: 27 September 2024
Time: 15:00-15:30
Place: PJ Salen

In recent years, the rapid growth of artificial intelligence, particularly deep learning, has transformed fields from natural sciences to technology. While deep learning is often viewed as a glorified form of curve fitting, its advancement to multi-layered, deep neural networks has resulted in unprecedented performance improvements, often surprising experts. As AI models grow larger and more complex, many wonder whether AI will eventually take over the world and what role remains for physicists and, more broadly, humans.

A critical, yet underappreciated fact is that these AI systems rely heavily on vast amounts of training data, most of which are generated and annotated by humans. This dependency raises an intriguing issue: what happens when human-generated data is no longer available, or when AI begins to train on AI-generated data? The phenomenon of AI poisoning, where the quality of AI outputs declines due to self-referencing, demonstrates the limitations of current AI models. For example, in image recognition tasks, such as those involving the MNIST dataset, AI tends to gravitate towards ‘safe’ or average outputs, diminishing originality and accuracy.

In this context, the unique role of humans becomes clear. Physicists, with their capacity for originality, deep understanding of physical phenomena, and the ability to exploit fundamental symmetries in nature, bring invaluable perspectives to the development of AI. By incorporating physics-informed training architectures and embracing the human drive for meaning and discovery, we can guide the future of AI in truly innovative directions. The message is clear: physicists must remain original, pursue their passions, and continue searching for the hidden laws that govern the world and society.

Seminar by G. Volpe at ESPCI/Sorbonne, Paris, 26 September 2024

(Image by A. Argun)
Deep Learning for Microscopy
Giovanni Volpe
Date: 26 September 2024
Place: ESPCI/Sorbonne, Paris, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced DeepTrack 2.1, a software platform to design, train, and validate deep-learning solutions for digital microscopy.