Diffusion models for super-resolution microscopy: a tutorial published in Journal of Physics: Photonics

Super-resolution by diffusion models: low-resolution images of microtubules (left) are transformed into high-resolution images (right) by a diffusion model. Dataset courtesy: BioSR dataset. (Image by H. Bachimanchi.)
Diffusion models for super-resolution microscopy: a tutorial
Harshith Bachimanchi, Giovanni Volpe
Journal of Physics: Photonics 7, 013001 (2025)
doi: 10.1088/2515-7647/ada101
arXiv: 2409.16488

Diffusion models have emerged as a prominent technique in generative modeling with neural networks, making their mark in tasks like text-to-image translation and super-resolution. In this tutorial, we provide a comprehensive guide to building denoising diffusion probabilistic models from scratch, with a specific focus on transforming low-resolution microscopy images into their corresponding high-resolution versions in the context of super-resolution microscopy. We provide the necessary theoretical background, the essential mathematical derivations, and a detailed Python code implementation using PyTorch. We discuss the metrics to quantitatively evaluate the model, illustrate the model performance at different noise levels of the input low-resolution images, and briefly discuss how to adapt the tutorial for other applications. The code provided in this tutorial is also available as a Python notebook in the supplementary information.
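At the heart of a denoising diffusion probabilistic model is the closed-form forward (noising) process, which the tutorial derives before building the denoiser. As a minimal illustrative sketch (in NumPy rather than the tutorial's PyTorch, and with an arbitrary toy "image" and standard schedule parameters, not values from the paper):

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_t and cumulative product alpha-bar_t."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    with eps drawn from a standard normal distribution."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# Toy stand-in for a (unit-variance) low-resolution image.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64))
_, alpha_bars = make_schedule()
# At the final timestep, alpha-bar is tiny and x_t is essentially pure noise,
# which is the starting point for the reverse (denoising) process.
xt, eps = forward_diffuse(x0, 999, alpha_bars, rng)
```

A trained network then predicts `eps` from `xt` and `t`; iterating that prediction backwards from pure noise, conditioned on the low-resolution input, yields the super-resolved image.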

Roadmap on machine learning glassy dynamics published in Nature Reviews Physics

Visual summary of the scope of the review. (Image by the Authors.)
Roadmap on machine learning glassy dynamics
Gerhard Jung, Rinske M. Alkemade, Victor Bapst, Daniele Coslovich, Laura Filion, François P. Landes, Andrea J. Liu, Francesco Saverio Pezzicoli, Hayato Shiba, Giovanni Volpe, Francesco Zamponi, Ludovic Berthier & Giulio Biroli
Nature Reviews Physics (2025)
doi: 10.1038/s42254-024-00791-4
arXiv: 2311.14752

Unravelling the connections between microscopic structure, emergent physical properties and slow dynamics has long been a challenge when studying the glass transition. The absence of clear visible structural order in amorphous configurations complicates the identification of the key physical mechanisms underpinning slow dynamics. The difficulty in sampling equilibrated configurations at low temperatures hampers thorough numerical and theoretical investigations. We explore the potential of machine learning (ML) techniques to face these challenges, building on the algorithms that have revolutionized computer vision and image recognition. We present both successful ML applications and open problems for the future, such as transferability and interpretability of ML approaches. To foster a collaborative community effort, we also highlight the ‘GlassBench’ dataset, which provides simulation data and benchmarks for both 2D and 3D glass formers. We compare the performance of emerging ML methodologies, in line with benchmarking practices in image and text recognition. Our goal is to provide guidelines for the development of ML techniques in systems displaying slow dynamics and inspire new directions to improve our theoretical understanding of glassy liquids.

Connecting genomic results for psychiatric disorders to human brain cell types and regions reveals convergence with functional connectivity published in Nature Communications

Brain region connectivity. (Image by the Authors of the manuscript.)
Connecting genomic results for psychiatric disorders to human brain cell types and regions reveals convergence with functional connectivity
Shuyang Yao, Arvid Harder, Fahimeh Darki, Yu-Wei Chang, Ang Li, Kasra Nikouei, Giovanni Volpe, Johan N Lundström, Jian Zeng, Naomi Wray, Yi Lu, Patrick F Sullivan, Jens Hjerling-Leffler
Nature Communications 16, 395 (2025)
doi: 10.1038/s41467-024-55611-1
medRxiv: 10.1101/2024.01.18.24301478

Identifying cell types and brain regions critical for psychiatric disorders and brain traits is essential for targeted neurobiological research. By integrating genomic insights from genome-wide association studies with a comprehensive single-cell transcriptomic atlas of the adult human brain, we prioritized specific neuronal clusters significantly enriched for the SNP-heritabilities for schizophrenia, bipolar disorder, and major depressive disorder along with intelligence, education, and neuroticism. Extrapolation of cell-type results to brain regions reveals the whole-brain impact of schizophrenia genetic risk, with subregions in the hippocampus and amygdala exhibiting the most significant enrichment of SNP-heritability. Using functional MRI connectivity, we further confirmed the significance of the central and lateral amygdala, hippocampal body, and prefrontal cortex in distinguishing schizophrenia cases from controls. Our findings underscore the value of single-cell transcriptomics in understanding the polygenicity of psychiatric disorders and suggest a promising alignment of genomic, transcriptomic, and brain imaging modalities for identifying common biological targets.

Spatial clustering of molecular localizations with graph neural networks on ArXiv

MIRO employs a recurrent graph neural network to refine SMLM point clouds by compressing clusters around their center, enhancing inter-cluster distinction and background separation for efficient clustering. (Image by J. Pineda.)
Spatial clustering of molecular localizations with graph neural networks
Jesús Pineda, Sergi Masó-Orriols, Joan Bertran, Mattias Goksör, Giovanni Volpe and Carlo Manzo
arXiv: 2412.00173

Single-molecule localization microscopy (SMLM) generates point clouds corresponding to fluorophore localizations. Spatial cluster identification and analysis of these point clouds are crucial for extracting insights about molecular organization. However, this task becomes challenging in the presence of localization noise, high point density, or complex biological structures. Here, we introduce MIRO (Multimodal Integration through Relational Optimization), an algorithm that uses recurrent graph neural networks to transform the point clouds in order to improve clustering efficiency when applying conventional clustering techniques. We show that MIRO supports simultaneous processing of clusters of different shapes and at multiple scales, demonstrating improved performance across varied datasets. Our comprehensive evaluation demonstrates MIRO’s transformative potential for single-molecule localization applications, showcasing its capability to revolutionize cluster analysis and provide accurate, reliable details of molecular architecture. In addition, MIRO’s robust clustering capabilities hold promise for applications in various fields such as neuroscience, for the analysis of neural connectivity patterns, and environmental science, for studying spatial distributions of ecological data.
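The core idea can be illustrated with a toy stand-in: if a transform tightens each cluster around its center, even a simple linkage-based clustering separates clusters that previously overlapped. The sketch below is hypothetical throughout; the hand-coded shrink step replaces MIRO's learned recurrent-GNN transform, and the friends-of-friends linkage replaces the conventional clustering algorithms evaluated in the paper:

```python
import numpy as np

def friends_of_friends(points, radius):
    """Toy single-linkage clustering: points within `radius` of each other
    are flood-filled into the same cluster."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d < radius) & (labels < 0))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

# Two overlapping blobs of synthetic "localizations".
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [3.0, 0.0]])
pts = np.concatenate([c + rng.normal(scale=1.0, size=(100, 2)) for c in centers])

# Stand-in for the learned transform: shrink each point toward its
# (here, known) cluster center, tightening clusters before clustering.
true_centers = np.repeat(centers, 100, axis=0)
shrunk = true_centers + 0.2 * (pts - true_centers)

shrunk_labels = friends_of_friends(shrunk, radius=0.8)
```

On the raw, overlapping blobs the same linkage step tends to merge everything into one cluster; after compression the two clusters are cleanly recovered. MIRO's contribution is learning such a transform directly from the point cloud, without knowing the centers.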

Cross-modality transformations in biological microscopy enabled by deep learning published in Advanced Photonics

Cross-modality transformation and segmentation. (Image by the Authors of the manuscript.)
Cross-modality transformations in biological microscopy enabled by deep learning
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels
Advanced Photonics 6, 064001 (2024)
doi: 10.1117/1.AP.6.6.064001

Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analyses that were hitherto impossible. Despite these strides, the integration of such algorithms into scientists’ daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancements. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate the realization of imaging with properties that would typically require costly or complex physical modifications, such as achieving superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.

Invited Seminar by G. Volpe at FEMTO-ST, 26 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
FEMTO-ST’s Internal Seminar 2024
Date: 26 November 2024
Time: 15:00
Place: Besançon, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.

Invited Talk by G. Volpe at SPAOM, Toledo, Spain, 22 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
SPAOM 2024
Date: 22 November 2024
Time: 10:15-10:45
Place: Toledo, Spain

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy.

Playing with Active Matter featured in Scilight

The article Playing with active matter, published in the American Journal of Physics, has been featured in a Scilight news story titled “Using Hexbugs to model active matter”.

The news story highlights that the approach used in the featured paper will make it possible for students in primary and secondary schools to demonstrate complex active-motion principles in the classroom on an affordable budget.
In fact, experiments at the microscale often require very expensive equipment. The commercially available toys called Hexbugs used in the publication provide a macroscopic analogue of microscale active matter and have the advantage of being affordable for classroom experimentation.

About Scilight:
Scilight showcases the most interesting research across the physical sciences published in AIP Publishing journals.

Reference:
Hannah Daniel, Using Hexbugs to model active matter, Scilight 2024, 431101 (2024)
doi: 10.1063/10.0032401

Playing with Active Matter published in American Journal of Physics

One of the Hexbugs used in the experiments. (Image by the Authors of the manuscript.)
Playing with Active Matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
American Journal of Physics 92, 847–858 (2024)
doi: 10.1119/5.0125111
arXiv: 2209.04168

In the past 20 years, active matter has been a very successful research field, bridging the fundamental physics of nonequilibrium thermodynamics with applications in robotics, biology, and medicine. Active particles, contrary to Brownian particles, can harness energy to generate complex motions and emerging behaviors. Most active-matter experiments are performed with microscopic particles and require advanced microfabrication and microscopy techniques. Here, we propose some macroscopic experiments with active matter employing commercially available toy robots (the Hexbugs). We show how they can be easily modified to perform regular and chiral active Brownian motion and demonstrate through experiments fundamental signatures of active systems such as how energy and momentum are harvested from an active bath, how obstacles can sort active particles by chirality, and how active fluctuations induce attraction between planar objects (a Casimir-like effect). These demonstrations enable hands-on experimentation with active matter and showcase widely used analysis methods.
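The regular and chiral active Brownian motion that the Hexbugs reproduce macroscopically can be sketched numerically with the standard overdamped active-Brownian-particle model; the parameters below are illustrative choices, not values fitted to the experiments:

```python
import numpy as np

def simulate_chiral_abp(n_steps=5000, dt=0.01, v=1.0, omega=2.0,
                        D_T=0.01, D_R=0.1, seed=0):
    """Euler integration of an overdamped chiral active Brownian particle:
        dx     = v cos(theta) dt + sqrt(2 D_T) dW_x
        dy     = v sin(theta) dt + sqrt(2 D_T) dW_y
        dtheta = omega dt        + sqrt(2 D_R) dW_theta
    With omega = 0 this reduces to regular active Brownian motion;
    omega != 0 biases the orientation, producing chiral (circular) motion.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        step = v * np.array([np.cos(theta), np.sin(theta)]) * dt
        step += np.sqrt(2.0 * D_T * dt) * rng.standard_normal(2)
        pos[i] = pos[i - 1] + step
        theta += omega * dt + np.sqrt(2.0 * D_R * dt) * rng.standard_normal()
    return pos

traj = simulate_chiral_abp()
```

The modified Hexbugs play the role of this particle at the macroscale: the self-propulsion speed `v` and the chirality `omega` correspond to the robots' forward drive and the asymmetry introduced by the modifications.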

Invited Talk by G. Volpe at Gothenburg Lise Meitner Award 2024 Symposium, 27 September 2024

(Image created by G. Volpe with the assistance of DALL·E 2)
What is a physicist to do in the age of AI?
Giovanni Volpe
Gothenburg Lise Meitner Award 2024 Symposium
Date: 27 September 2024
Time: 15:00-15:30
Place: PJ Salen

In recent years, the rapid growth of artificial intelligence, particularly deep learning, has transformed fields from natural sciences to technology. While deep learning is often viewed as a glorified form of curve fitting, its advancement to multi-layered, deep neural networks has resulted in unprecedented performance improvements, often surprising experts. As AI models grow larger and more complex, many wonder whether AI will eventually take over the world and what role remains for physicists and, more broadly, humans.

A critical, yet underappreciated fact is that these AI systems rely heavily on vast amounts of training data, most of which are generated and annotated by humans. This dependency raises an intriguing issue: what happens when human-generated data is no longer available, or when AI begins to train on AI-generated data? The phenomenon of AI poisoning, where the quality of AI outputs declines due to self-referencing, demonstrates the limitations of current AI models. For example, in image recognition tasks, such as those involving the MNIST dataset, AI tends to gravitate towards ‘safe’ or average outputs, diminishing originality and accuracy.

In this context, the unique role of humans becomes clear. Physicists, with their capacity for originality, deep understanding of physical phenomena, and the ability to exploit fundamental symmetries in nature, bring invaluable perspectives to the development of AI. By incorporating physics-informed training architectures and embracing the human drive for meaning and discovery, we can guide the future of AI in truly innovative directions. The message is clear: physicists must remain original, pursue their passions, and continue searching for the hidden laws that govern the world and society.