Content-adaptive deep learning for large-scale
fluorescence microscopy imaging
Ivo Sbalzarini
Max Planck Institute of Molecular Cell Biology and Genetics
Center for Systems Biology Dresden https://sbalzarini-lab.org/
Benjamin Midtvedt defended his PhD thesis on 9 January 2025. The defense took place in PJ, Institutionen för fysik, Origovägen 6b, Göteborg, at 13:00. Congrats!
Title: Annotation-free deep learning for quantitative microscopy
Abstract: Quantitative microscopy is an essential tool for studying and understanding microscopic structures. However, analyzing the large and complex datasets generated by modern microscopes presents significant challenges. Manual analysis is time-intensive and subjective, rendering it impractical for large datasets. While automated algorithms offer faster and more consistent results, they often require careful parameter tuning to achieve acceptable performance, and struggle to interpret the more complex data produced by modern microscopes. As such, there is a pressing need to develop new, scalable analysis methods for quantitative microscopy. In recent years, deep learning has transformed the field of computer vision, achieving superhuman performance in tasks ranging from image classification to object detection. However, this success depends on large, annotated datasets, which are often unavailable in microscopy. As such, to successfully and efficiently apply deep learning to microscopy, new strategies that bypass the dependency on extensive annotations are required. In this dissertation, I aim to lower the barrier for applying deep learning in microscopy by developing methods that do not rely on manual annotations and by providing resources to assist researchers in using deep learning to analyze their own microscopy data. First, I present two cases where training annotations are generated through alternative means that bypass the need for human effort. Second, I introduce a deep learning method that leverages symmetries in both the data and the task structure to train a statistically optimal model for object detection without any annotations. Third, I propose a method based on contrastive learning to estimate nanoparticle sizes in diffraction-limited microscopy images, without requiring annotations or prior knowledge of the optical system. Finally, I deliver a suite of resources that empower researchers in applying deep learning to microscopy. 
Through these developments, I aim to demonstrate that deep learning is not merely a “black box” tool. Instead, effective deep learning models should be designed with careful consideration of the data, assumptions, task structure, and model architecture, encoding as much prior knowledge as possible. By structuring these interactions with care, we can develop models that are more efficient, interpretable, and generalizable, enabling them to tackle a wider range of microscopy tasks.
Supervisor: Giovanni Volpe
Examiner: Dag Hanstorp
Opponent: Ivo Sbalzarini
Committee: Susan Cox, Maria Arrate Munoz Barrutia, Ignacio Arganda-Carreras
Alternate board member: Måns Henningson
Roadmap on machine learning glassy dynamics
Gerhard Jung, Rinske M. Alkemade, Victor Bapst, Daniele Coslovich, Laura Filion, François P. Landes, Andrea J. Liu, Francesco Saverio Pezzicoli, Hayato Shiba, Giovanni Volpe, Francesco Zamponi, Ludovic Berthier & Giulio Biroli
Nature Reviews Physics (2025)
doi: 10.1038/s42254-024-00791-4
arXiv: 2311.14752
Unravelling the connections between microscopic structure, emergent physical properties and slow dynamics has long been a challenge when studying the glass transition. The absence of clear visible structural order in amorphous configurations complicates the identification of the key physical mechanisms underpinning slow dynamics. The difficulty in sampling equilibrated configurations at low temperatures hampers thorough numerical and theoretical investigations. We explore the potential of machine learning (ML) techniques to face these challenges, building on the algorithms that have revolutionized computer vision and image recognition. We present both successful ML applications and open problems for the future, such as transferability and interpretability of ML approaches. To foster a collaborative community effort, we also highlight the ‘GlassBench’ dataset, which provides simulation data and benchmarks for both 2D and 3D glass formers. We compare the performance of emerging ML methodologies, in line with benchmarking practices in image and text recognition. Our goal is to provide guidelines for the development of ML techniques in systems displaying slow dynamics and inspire new directions to improve our theoretical understanding of glassy liquids.
Connecting genomic results for psychiatric disorders to human brain cell types and regions reveals convergence with functional connectivity
Shuyang Yao, Arvid Harder, Fahimeh Darki, Yu-Wei Chang, Ang Li, Kasra Nikouei, Giovanni Volpe, Johan N Lundström, Jian Zeng, Naomi Wray, Yi Lu, Patrick F Sullivan, Jens Hjerling-Leffler
Nature Communications 16, 395 (2025)
doi: 10.1038/s41467-024-55611-1
medRxiv: 10.1101/2024.01.18.24301478
Identifying cell types and brain regions critical for psychiatric disorders and brain traits is essential for targeted neurobiological research. By integrating genomic insights from genome-wide association studies with a comprehensive single-cell transcriptomic atlas of the adult human brain, we prioritized specific neuronal clusters significantly enriched for the SNP-heritabilities for schizophrenia, bipolar disorder, and major depressive disorder along with intelligence, education, and neuroticism. Extrapolation of cell-type results to brain regions reveals the whole-brain impact of schizophrenia genetic risk, with subregions in the hippocampus and amygdala exhibiting the most significant enrichment of SNP-heritability. Using functional MRI connectivity, we further confirmed the significance of the central and lateral amygdala, hippocampal body, and prefrontal cortex in distinguishing schizophrenia cases from controls. Our findings underscore the value of single-cell transcriptomics in understanding the polygenicity of psychiatric disorders and suggest a promising alignment of genomic, transcriptomic, and brain imaging modalities for identifying common biological targets.
Sreekanth K. Manikandan began working as a researcher at the Physics Department of the University of Gothenburg on December 9, 2024.
He received his Ph.D. in Theoretical Physics in 2020 from Stockholm University under the supervision of Supriya Krishnamurthy. His thesis, titled “Nonequilibrium Thermodynamics at the Microscopic Scales,” focused on finite and short-time fluctuations in non-equilibrium systems, as opposed to the large-time asymptotic properties studied within the framework of large deviation theory. One of the key outcomes of his Ph.D. research was the development of a method to infer entropy production rates directly from experimentally accessible trajectories in a model-independent manner.
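One model-free route to this kind of inference uses the short-time thermodynamic uncertainty relation, which for overdamped dynamics becomes tight as the sampling interval shrinks. The sketch below illustrates the general idea on simulated drift-diffusion data; it is an illustrative toy example, not a reimplementation of his actual method, and all parameter values are invented.

```python
import numpy as np

def estimate_entropy_production(dx, dt):
    """Short-time TUR estimate of the entropy production rate (k_B = 1).

    dx : array of trajectory increments, each taken over a time step dt.
    For overdamped dynamics, the short-time thermodynamic uncertainty
    relation saturates as dt -> 0, so 2 <J>^2 / (Var(J) * dt) with the
    current J = dx recovers the entropy production rate.
    """
    J = np.asarray(dx)
    return 2 * J.mean() ** 2 / (J.var() * dt)

# Toy data: 1D overdamped particle with constant drift v and diffusion D.
# The true entropy production rate is v**2 / D (here, 4.0).
rng = np.random.default_rng(1)
v, D, dt, n = 2.0, 1.0, 1e-2, 100_000
dx = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)

sigma_est = estimate_entropy_production(dx, dt)
```

The appeal of this style of estimator is that it needs only measured increments, with no model of the underlying forces.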
Following his PhD, Sreekanth received the NORDITA postdoctoral fellowship for independent research. During this time, he expanded on his earlier work by developing generalizations of the inference scheme for entropy production and integrating it with machine-learning tools for practical inference of dissipative forces and entropy production from experimental data. Later, in 2022, he was awarded the Wallenberg Scholarship for postdoctoral research at Stanford, where he developed machine-learning-based non-equilibrium control techniques for targeted self-assembly and transport of biomolecular systems.
Currently, he is interested in combining methods from nonequilibrium physics and machine learning to quantitatively characterize and control nanoscale biophysical processes.
Cross-modality transformations in biological microscopy enabled by deep learning
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels
Advanced Photonics 6, 064001 (2024)
doi: 10.1117/1.AP.6.6.064001
Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analyses that were hitherto impossible. Despite these strides, the integration of such algorithms into scientists’ daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancements. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate the realization of imaging with properties that would typically require costly or complex physical modifications, such as achieving superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
How can deep learning enhance microscopy?
Giovanni Volpe FEMTO-ST’s Internal Seminar 2024 Date: 26 November 2024 Time: 15:00 Place: Besançon, France
Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions.
To overcome this issue, we have introduced DeepTrack (currently at version 2.1), a software platform to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source, object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
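Pipelines of this kind typically sidestep manual annotation by training on synthetic images, where the ground-truth labels come for free from the simulation. Below is a minimal sketch of that idea in plain NumPy, with a Gaussian spot standing in for a full optics simulation; this is an illustrative stand-in, not DeepTrack’s actual API.

```python
import numpy as np

def synthetic_particle_image(size=64, sigma=2.0, noise=0.05, rng=None):
    """Generate one (image, position) training pair.

    A particle is rendered as a Gaussian intensity spot (a crude
    stand-in for a microscope point-spread function) at a random
    sub-pixel position, plus Gaussian background noise. Pairs like
    this can train a localization network without manual annotation.
    """
    if rng is None:
        rng = np.random.default_rng()
    pos = rng.uniform(size * 0.25, size * 0.75, size=2)  # (row, col)
    rows, cols = np.mgrid[0:size, 0:size]
    image = np.exp(-((rows - pos[0]) ** 2 + (cols - pos[1]) ** 2)
                   / (2 * sigma ** 2))
    image += noise * rng.standard_normal((size, size))
    return image.astype(np.float32), pos
```

Because the generator returns the exact particle position alongside each image, arbitrarily large labeled training sets can be produced on the fly.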
How can deep learning enhance microscopy?
Giovanni Volpe SPAOM 2024 Date: 22 November 2024 Time: 10:15-10:45 Place: Toledo, Spain
Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced DeepTrack (currently at version 2.1), a software platform to design, train, and validate deep-learning solutions for digital microscopy.
The news highlights that the approach used in the featured paper will make it possible for students in primary and secondary schools to demonstrate complex active-motion principles in the classroom on an affordable budget.
In fact, experiments at the microscale often require very expensive equipment. The commercially available toys called Hexbugs used in the publication provide a macroscopic analogue of microscale active matter and have the advantage of being affordable for classroom experimentation.
About Scilight: Scilight showcases the most interesting research across the physical sciences published in AIP Publishing journals.
Reference:
Hannah Daniel, Using Hexbugs to model active matter, Scilight 2024, 431101 (2024)
doi: 10.1063/10.0032401
Playing with Active Matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
American Journal of Physics 92, 847–858 (2024)
doi: 10.1119/5.0125111
arXiv: 2209.04168
In the past 20 years, active matter has been a very successful research field, bridging the fundamental physics of nonequilibrium thermodynamics with applications in robotics, biology, and medicine. Active particles, contrary to Brownian particles, can harness energy to generate complex motions and emergent behaviors. Most active-matter experiments are performed with microscopic particles and require advanced microfabrication and microscopy techniques. Here, we propose some macroscopic experiments with active matter employing commercially available toy robots (the Hexbugs). We show how they can be easily modified to perform regular and chiral active Brownian motion and demonstrate through experiments fundamental signatures of active systems, such as how energy and momentum are harvested from an active bath, how obstacles can sort active particles by chirality, and how active fluctuations induce attraction between planar objects (a Casimir-like effect). These demonstrations enable hands-on experimentation with active matter and showcase widely used analysis methods.
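The regular and chiral active Brownian motion realized with the Hexbugs is commonly modeled by overdamped Langevin equations. A minimal numerical sketch of that standard model follows; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_active_particle(n_steps=5000, dt=0.01, v=1.0, omega=0.0,
                             D_t=0.01, D_r=0.1, seed=0):
    """Euler-Maruyama simulation of a 2D (chiral) active Brownian particle.

    v     : self-propulsion speed (v = 0 recovers a passive Brownian particle)
    omega : angular drift; omega != 0 yields chiral (circular) motion
    D_t   : translational diffusion coefficient
    D_r   : rotational diffusion coefficient
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        # Orientation diffuses (and drifts, if chiral).
        theta += omega * dt + np.sqrt(2 * D_r * dt) * rng.standard_normal()
        # Position follows self-propulsion plus translational noise.
        step = v * dt * np.array([np.cos(theta), np.sin(theta)])
        step += np.sqrt(2 * D_t * dt) * rng.standard_normal(2)
        pos[i] = pos[i - 1] + step
    return pos
```

Comparing the mean squared displacement of an active run (v > 0) with a passive one (v = 0) reproduces the ballistic-to-diffusive crossover that is one of the fundamental signatures of active systems discussed in the paper.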