Benjamin Midtvedt defended his PhD thesis on 9 January 2025. Congrats!

Benjamin Midtvedt, PhD defense. (Photo by H. P. Thanabalan.)
Benjamin Midtvedt defended his PhD thesis on 9 January 2025. The defense took place in PJ, Institutionen för fysik, Origovägen 6b, Göteborg, at 13:00. Congrats!

Title: Annotation-free deep learning for quantitative microscopy

Abstract: Quantitative microscopy is an essential tool for studying and understanding microscopic structures. However, analyzing the large and complex datasets generated by modern microscopes presents significant challenges. Manual analysis is time-intensive and subjective, rendering it impractical for large datasets. While automated algorithms offer faster and more consistent results, they often require careful parameter tuning to achieve acceptable performance, and struggle to interpret the more complex data produced by modern microscopes. As such, there is a pressing need to develop new, scalable analysis methods for quantitative microscopy. In recent years, deep learning has transformed the field of computer vision, achieving superhuman performance in tasks ranging from image classification to object detection. However, this success depends on large, annotated datasets, which are often unavailable in microscopy. As such, to successfully and efficiently apply deep learning to microscopy, new strategies that bypass the dependency on extensive annotations are required. In this dissertation, I aim to lower the barrier for applying deep learning in microscopy by developing methods that do not rely on manual annotations and by providing resources to assist researchers in using deep learning to analyze their own microscopy data. First, I present two cases where training annotations are generated through alternative means that bypass the need for human effort. Second, I introduce a deep learning method that leverages symmetries in both the data and the task structure to train a statistically optimal model for object detection without any annotations. Third, I propose a method based on contrastive learning to estimate nanoparticle sizes in diffraction-limited microscopy images, without requiring annotations or prior knowledge of the optical system. Finally, I deliver a suite of resources that empower researchers in applying deep learning to microscopy. 
Through these developments, I aim to demonstrate that deep learning is not merely a “black box” tool. Instead, effective deep learning models should be designed with careful consideration of the data, assumptions, task structure, and model architecture, encoding as much prior knowledge as possible. By structuring these interactions with care, we can develop models that are more efficient, interpretable, and generalizable, enabling them to tackle a wider range of microscopy tasks.
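The contrastive-learning approach for nanoparticle sizing mentioned in the abstract is not detailed here, but the loss at the heart of such methods is standard. As a minimal sketch — a generic NT-Xent-style contrastive loss in NumPy, not the thesis's actual model or training pipeline — two batches of embeddings from matched views are pulled together while all other pairings act as negatives:

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """Simplified NT-Xent-style contrastive loss for two embedding batches.

    z1[i] and z2[i] are embeddings of two views of the same sample; every
    other pairing in the batch is treated as a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # cosine similarities as logits
    # Cross-entropy with the matching pair (the diagonal) as the target.
    logsumexp = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(logits)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Correctly matched views score a lower loss than deliberately mismatched ones.
loss_matched = ntxent_loss(z, z + 0.01 * rng.normal(size=z.shape))
loss_mismatched = ntxent_loss(z, np.roll(z, 1, axis=0))
```

In practice the embeddings come from a neural network applied to augmented views of the same particle image; minimizing this loss yields representations from which size can be regressed without annotations.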

Thesis: https://hdl.handle.net/2077/84178

Supervisor: Giovanni Volpe
Examiner: Dag Hanstorp
Opponent: Ivo Sbalzarini
Committee: Susan Cox, Maria Arrate Munoz Barrutia, Ignacio Arganda-Carreras
Alternate board member: Måns Henningson

Ivo Sbalzarini (left) and Benjamin Midtvedt (right). (Photo by H. P. Thanabalan.)
Benjamin Midtvedt (left), Giovanni Volpe (right), announcement. (Photo by H. P. Thanabalan.)
From left to right: Ignacio Arganda, Arrate Muñoz Barrutia, Susan Cox, Benjamin Midtvedt, Giovanni Volpe, Ivo Sbalzarini. (Photo by H. P. Thanabalan.)

Cross-modality transformations in biological microscopy enabled by deep learning published in Advanced Photonics

Cross-modality transformation and segmentation. (Image by the Authors of the manuscript.)
Cross-modality transformations in biological microscopy enabled by deep learning
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels
Advanced Photonics 6, 064001 (2024)
doi: 10.1117/1.AP.6.6.064001

Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analyses that were hitherto impossible. Despite these strides, the integration of such algorithms into scientists’ daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancement. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate the realization of imaging with properties that would typically require costly or complex physical modifications, such as achieving superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
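The transformations surveyed in the review are learned by deep networks (typically U-Net- or GAN-based) trained on paired acquisitions of the same sample in two modalities. As a deliberately minimal, hypothetical illustration of that paired-supervision idea — not any method from the review — one can fit a per-pixel linear channel map from one modality to another by least squares on synthetic paired images:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired acquisition: a 3-channel brightfield stack and a
# fluorescence target that is (here, by construction) a linear mix of it.
brightfield = rng.uniform(size=(64, 64, 3))
true_mix = np.array([0.2, 0.7, 0.1])
fluorescence = brightfield @ true_mix

# "Training": fit the channel mixing on flattened pixels by least squares.
X = brightfield.reshape(-1, 3)
y = fluorescence.reshape(-1)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Inference": apply the learned map to a new image of the same modality.
new_image = rng.uniform(size=(64, 64, 3))
predicted = new_image @ coef
```

Real cross-modality transformation is highly nonlinear and spatially contextual, which is why deep networks replace this per-pixel linear map; the training logic — paired inputs and targets, a fitted mapping, then inference on unseen images — is the same.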

Book “Deep Learning Crash Course” published at No Starch Press

The book Deep Learning Crash Course, authored by Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo, was published online by No Starch Press in July 2024.

Preorder Discount
A preorder discount is available: preorders with coupon code PREORDER receive 25% off at No Starch Press.


Citation 
Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo. Deep Learning Crash Course. No Starch Press.
ISBN-13: 9781718503922

Nanoalignment by Critical Casimir Torques featured in the Editors’ Highlights of Nature Communications

Artist rendition of a disk-shaped microparticle trapped above a circular uncoated pattern within a thin gold layer coated on a glass surface. (Image by the Authors of the manuscript.)
Our article, entitled Nanoalignment by Critical Casimir Torques, has been selected as a featured article by the editors of Nature Communications. This recognition highlights the significance of our research within the field of applied physics and mathematics.

The editors have included our work in their Editors’ Highlights webpage, which showcases the 50 best papers recently published in this area. You can view the feature on the Editors’ Highlights page (https://www.nature.com/ncomms/editorshighlights) as well as on the journal homepage (https://www.nature.com/ncomms/).

Screenshot from the Editors’ Highlights page of Nature Communications, dated 2 July 2024.

Nanoalignment by Critical Casimir Torques published in Nature Communications

Artist rendition of a disk-shaped microparticle trapped above a circular uncoated pattern within a thin gold layer coated on a glass surface. (Image by the Authors of the manuscript.)
Nanoalignment by Critical Casimir Torques
Gan Wang, Piotr Nowakowski, Nima Farahmand Bafi, Benjamin Midtvedt, Falko Schmidt, Agnese Callegari, Ruggero Verre, Mikael Käll, S. Dietrich, Svyatoslav Kondrat, Giovanni Volpe
Nature Communications, 15, 5086 (2024)
DOI: 10.1038/s41467-024-49220-1
arXiv: 2401.06260

The manipulation of microscopic objects requires precise and controllable forces and torques. Recent advances have led to the use of critical Casimir forces as a powerful tool, which can be finely tuned through the temperature of the environment and the chemical properties of the involved objects. For example, these forces have been used to self-organize ensembles of particles and to counteract stiction caused by Casimir–Lifshitz forces. However, until now, the potential of critical Casimir torques has been largely unexplored. Here, we demonstrate that critical Casimir torques can efficiently control the alignment of microscopic objects on nanopatterned substrates. We show experimentally and corroborate with theoretical calculations and Monte Carlo simulations that circular patterns on a substrate can stabilize the position and orientation of microscopic disks. By making the patterns elliptical, such microdisks can be subject to a torque which flips them upright while simultaneously allowing for more accurate control of the microdisk position. More complex patterns can selectively trap 2D-chiral particles and generate particle motion similar to non-equilibrium Brownian ratchets. These findings provide new opportunities for nanotechnological applications requiring precise positioning and orientation of microscopic objects.
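As a toy illustration of the Monte Carlo side of such studies — not the paper's actual simulation, whose potentials and geometry are far richer — Metropolis sampling of a single orientation angle in an aligning potential U(θ) = −k cos 2θ shows how a torque-generating potential concentrates orientations around its minima:

```python
import numpy as np

def metropolis_orientation(k=4.0, beta=1.0, steps=20000, seed=2):
    """Metropolis sampling of an angle in the toy potential U = -k*cos(2*theta).

    The potential has minima at theta = 0 and pi, mimicking an aligning torque.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi)
    samples = []
    for _ in range(steps):
        proposal = (theta + rng.normal(scale=0.5)) % (2 * np.pi)
        dU = -k * np.cos(2 * proposal) + k * np.cos(2 * theta)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if dU <= 0 or rng.uniform() < np.exp(-beta * dU):
            theta = proposal
        samples.append(theta)
    return np.array(samples)

angles = metropolis_orientation()
alignment = np.mean(np.cos(2 * angles))  # 1 = fully aligned, 0 = isotropic
```

Increasing k (a deeper potential, e.g., a more elongated pattern) or decreasing the temperature 1/beta drives the alignment order parameter toward 1.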

Presentation by B. Midtvedt at SPIE-ETAI, San Diego, 23 August 2023

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised object detection
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Date: 23 August 2023
Time: 10:30 AM (PDT)

Object detection is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to detect small, spatially confined, and largely homogeneous objects that have sufficient contrast to the background with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position by propagating the signal in Fourier space and its polarizability by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
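The core self-supervision idea can be illustrated without a neural network: a detector is consistent with translational symmetry if shifting the input shifts the predicted position by the same amount. The sketch below checks this condition with an intensity-centroid "detector" (translation-equivariant by construction); LodeSTAR instead trains a network whose predictions are forced to satisfy such consistency. All data and parameters here are illustrative:

```python
import numpy as np

def centroid_detector(image):
    """Toy detector: intensity-weighted centroid (row, col), sub-pixel."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return np.array([(rows * image).sum() / total,
                     (cols * image).sum() / total])

# Synthetic unlabeled frame: a single Gaussian spot at (20.3, 41.7).
yy, xx = np.indices((64, 64))
image = np.exp(-((yy - 20.3) ** 2 + (xx - 41.7) ** 2) / 8)

# Equivariance check: detect(shift(image)) == shift(detect(image)).
shift = (5, -7)
shifted = np.roll(image, shift, axis=(0, 1))
pos = centroid_detector(image)
pos_shifted = centroid_detector(shifted)
```

In LodeSTAR's training, many such transformed copies of a single unlabeled image are generated, and the mismatch between the transformed predictions and the predictions on the transformed inputs serves as the loss — which is why no annotations are needed.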

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 20-24 August 2023

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 20-24 August 2023, with the presentations listed below.

Giovanni Volpe is also a co-author of the following presentations:

  • Jiawei Sun (KI): (Poster) Assessment of nonlinear changes in functional brain connectivity during aging using deep learning
    21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
  • Blanca Zufiria Gerbolés (KI): (Poster) Exploring age-related changes in anatomical brain connectivity using deep learning analysis in cognitively healthy individuals
    21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
  • Mite Mijalkov (KI): Uncovering vulnerable connections in the aging brain using reservoir computing
    22 August 2023 • 9:15 AM – 9:30 AM PDT | Conv. Ctr. Room 6C

Roadmap on Deep Learning for Microscopy on ArXiv

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion published in Nature Machine Intelligence

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
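The graph underlying such a geometric-deep-learning pipeline connects detections that are close in space and time; the attention-based network then scores these candidate links to recover trajectories and dynamic properties. A minimal sketch of that graph-building step, with hypothetical coordinates and thresholds:

```python
import numpy as np

def build_candidate_edges(detections, max_distance=5.0, max_frame_gap=2):
    """Connect detections that are close in space and within a few frames.

    detections: list of (frame, x, y) tuples. Returns index pairs (i, j)
    with frame_i < frame_j, as candidate links for a downstream GNN.
    """
    edges = []
    for i, (fi, xi, yi) in enumerate(detections):
        for j, (fj, xj, yj) in enumerate(detections):
            gap = fj - fi
            if 0 < gap <= max_frame_gap:
                if np.hypot(xj - xi, yj - yi) <= max_distance:
                    edges.append((i, j))
    return edges

# Hypothetical detections: two particles tracked over three frames.
detections = [(0, 0.0, 0.0), (0, 10.0, 10.0),
              (1, 1.0, 0.5), (1, 10.5, 9.5),
              (2, 2.0, 1.0), (2, 11.0, 9.0)]
edges = build_candidate_edges(detections)
```

Allowing a frame gap greater than one makes the graph robust to missed detections; the redundant edges are then pruned by the learned model rather than by hand-tuned rules.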

Single-shot self-supervised object detection in microscopy published in Nature Communications

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised object detection in microscopy
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Nature Communications 13, 7492 (2022)
arXiv: 2202.13546
doi: 10.1038/s41467-022-35004-y

Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
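The Fourier-space propagation mentioned at the end can be illustrated with the standard angular-spectrum method; the sketch below uses illustrative wavelength and pixel-size values, not parameters from the paper:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength=0.532, pixel=0.1):
    """Propagate a 2D complex field a distance dz via the angular spectrum.

    Units are arbitrary but consistent (e.g. micrometres); the defaults
    here are illustrative.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * dz) * (kz_sq > 0)  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Propagating forward then backward recovers the field (up to the
# evanescent components removed by the filter).
field = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(axis=0) / 50).astype(complex)
forward = angular_spectrum_propagate(field, dz=5.0)
recovered = angular_spectrum_propagate(forward, dz=-5.0)
```

Scanning dz and locating the plane where the particle signal is sharpest is one common way such propagation turns a single holographic frame into an estimate of the particle's vertical position.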