Latent space-driven quantification of biofilm formation using time-resolved droplet microfluidics published in Microchemical Journal

Automated segmentation of bacterial structures within a droplet. The image shows a bright-field microscopy view where a large biofilm region (green, outlined in blue) has been segmented from surrounding features. Small aggregates (yellow contours) are also highlighted. This segmentation enables structural differentiation of biofilm components for downstream quantitative analysis. (Image by D. Pérez Guerrero.)
Latent space-driven quantification of biofilm formation using time-resolved droplet microfluidics
Daniela Pérez Guerrero, Jesús Manuel Antúnez Domínguez, Aurélie Vigne, Daniel Midtvedt, Wylie Ahmed, Lisa D. Muiznieks, Giovanni Volpe, Caroline Beck Adiels
Microchemical Journal 225, 117685 (2026)
arXiv: 2507.07632
doi: 10.1016/j.microc.2026.117685

Bacterial biofilms play crucial roles across diverse contexts, from public health risks to beneficial applications in bioremediation, biodegradation, and wastewater treatment. However, tools that enable high-resolution, dynamic analysis of their responses to environmental cues and collective cellular behaviors remain limited. Here, we present a droplet-based microfluidic platform that combines continuous in situ microscopy with subsequent unsupervised deep learning for quantitative analysis of biofilm development. In our setup, Bacillus subtilis cells are encapsulated in monodisperse aqueous microdroplets containing Lysogeny Broth, suspended in an oil phase and immobilized within microfabricated traps, providing continuous optical access throughout biofilm formation at the water–oil interface. The platform supports both fluorescence and bright-field imaging, enabling high-throughput, time-resolved monitoring of thousands of droplets under controlled conditions. To extract quantitative information from these large datasets, we developed an automated analysis pipeline based on a Variational Autoencoder (VAE) trained directly on microscopy images from our experiments. This unsupervised model enables segmentation and latent-space representation of bacterial structures without manual annotation or synthetic training data. Post-segmentation size thresholding enables classification of bacterial aggregates and larger biofilm-like clusters, including quantification of biofilm porosity, thereby supporting detailed morphological and temporal analyses across droplets and conditions. By integrating droplet microfluidics with unsupervised deep learning, our platform provides a scalable, robust, and rapid approach for high-throughput quantitative studies of biofilm behavior. It resolves complex structural biofilm patterns, bypasses the need for manual annotation, and opens new opportunities to probe environmental determinants of biofilm formation. 
In contrast to earlier methods, our framework combines experimentally acquired biological training data with unsupervised models to quantify microbial community dynamics across scales, offering a generalizable platform for future high-resolution microbiology.
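As a rough illustration of the post-segmentation step described above, here is a minimal sketch, not the authors' pipeline: the area threshold, the binary-mask layout, and the bounding-box porosity definition are all assumptions made for this example. Segmented objects are split by area into small aggregates and biofilm-like clusters, and porosity is estimated as the void fraction inside a cluster's bounding box.

```python
# Hypothetical sketch of size thresholding and porosity estimation.
# AREA_THRESHOLD and the mask layout are illustrative assumptions.

AREA_THRESHOLD = 50  # px^2; objects at or above this count as biofilm-like

def classify_objects(object_areas, threshold=AREA_THRESHOLD):
    """Split object areas (in pixels) into aggregates and biofilm-like clusters."""
    aggregates = [a for a in object_areas if a < threshold]
    biofilms = [a for a in object_areas if a >= threshold]
    return aggregates, biofilms

def porosity(mask):
    """Void fraction inside the bounding box of a binary mask (list of 0/1 rows)."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    r0, r1, c0, c1 = min(rows), max(rows), min(cols), max(cols)
    box = (r1 - r0 + 1) * (c1 - c0 + 1)
    filled = sum(mask[i][j] for i in range(r0, r1 + 1) for j in range(c0, c1 + 1))
    return 1.0 - filled / box

aggregates, biofilms = classify_objects([12, 8, 130, 75, 20])
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(aggregates, biofilms)      # [12, 8, 20] [130, 75]
print(round(porosity(mask), 2))  # 0.25
```

A single area cutoff is the simplest possible classifier; in practice the threshold would be calibrated against the imaging magnification and the observed size distribution.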

Label-free mass and size characterization of few-kDa biomolecules by hierarchical vision transformer augmented nanofluidic scattering microscopy published in Nature Communications

The principle of differential imaging in NSM, in which we subtract the light scattered (yellow arrows indicate the scattered-light direction) by an empty nanochannel from the light scattered by the same channel with a molecule inside. A sequence of differential images of a nanochannel containing a diffusing single molecule obtained in this way is combined into a kymograph, which then contains the full molecular trajectory. (Image from the article.)
Label-free mass and size characterization of few-kDa biomolecules by hierarchical vision transformer augmented nanofluidic scattering microscopy
Henrik K. Moberg, Bohdan Yeroshenko, Joachim Fritzsche, David Albinsson, Barbora Spackova, Daniel Midtvedt, Giovanni Volpe, Christoph Langhammer
Nature Communications 17, 2533 (2026)
doi: 10.1038/s41467-026-70514-z

Nanofluidic scattering microscopy characterizes single molecules in subwavelength nanofluidic channels label-free, using the interference of visible light scattered by the molecule and nanochannel. It determines a molecule’s hydrodynamic radius by tracking its diffusion trajectory and its molecular weight by analyzing its scattering intensity along that trajectory. However, using standard analysis algorithms, it is limited to characterization of proteins larger than ≈ 60 kDa. Here, we push this limit by one order of magnitude to below ≈ 6 kDa molecular weight and ≈ 1.5 nm hydrodynamic radius — as we exemplify on the peptide hormone insulin — by using ultrasmall nanofluidic channels and by analyzing the data with a hierarchical vision transformer. When we benchmark this approach against the theoretical limit set by the Cramér–Rao Lower Bound, we find that it can be approached with sufficiently long molecular trajectories. This enables quantitative label-free single-molecule microscopy for biologically relevant families of sub-10-kDa molecules, such as cytokines, chemokines and peptide hormones.
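The trajectory-length dependence noted above can be illustrated with a toy estimator; this is an assumption-laden sketch unrelated to the article's transformer pipeline. For free one-dimensional diffusion sampled at interval dt, displacements are Gaussian with variance 2·D·dt, so D has a simple maximum-likelihood estimate whose spread shrinks as the trajectory grows, approaching the Cramér–Rao bound for long trajectories.

```python
# Illustrative sketch: longer trajectories give tighter estimates of the
# diffusion constant D. All numerical values here are arbitrary examples.
import random

def estimate_D(steps, dt):
    """Maximum-likelihood estimate of D from 1D displacements (var = 2*D*dt)."""
    return sum(s * s for s in steps) / (2.0 * dt * len(steps))

def simulate_steps(D, dt, n, rng):
    """Draw n Gaussian displacements for free diffusion with constant D."""
    sigma = (2.0 * D * dt) ** 0.5
    return [rng.gauss(0.0, sigma) for _ in range(n)]

rng = random.Random(0)
D_true, dt = 1.0, 0.01
for n in (100, 10000):
    est = estimate_D(simulate_steps(D_true, dt, n, rng), dt)
    print(n, round(est, 3))  # the estimate tightens around D_true as n grows
```

The relative standard deviation of this estimator scales as (2/n)^0.5, which is the qualitative behavior the benchmark against the Cramér–Rao Lower Bound probes.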

Roadmap on Deep Learning for Microscopy published in Journal of Physics: Photonics

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
Journal of Physics: Photonics 8, 012501 (2026)
arXiv: 2303.03793
doi: 10.1088/2515-7647/ae0fd1

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning (ML) are related terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap encompasses key aspects of how ML is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of ML for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Optical Label-Free Microscopy Characterization of Dielectric Nanoparticles published in Nanoscale

Propagation of scattered light through a scattering microscope, illustrating typical nanoparticles studied. (Image by B. García Rodriguez.)
Optical Label-Free Microscopy Characterization of Dielectric Nanoparticles
Berenice Garcia Rodriguez, Erik Olsén, Fredrik Skärberg, Giovanni Volpe, Fredrik Höök, Daniel Sundås Midtvedt
Nanoscale, 17, 8336-8362 (2025)
arXiv: 2409.11810
doi: 10.1039/D4NR03860F

In order to relate nanoparticle properties to function, fast and detailed particle characterization is needed. The ability to characterize nanoparticle samples using optical microscopy techniques has drastically improved over the past few decades; consequently, there are now numerous microscopy methods available for detailed characterization of particles with nanometric size. However, there is currently no “one size fits all” solution to the problem of nanoparticle characterization. Instead, since the available techniques have different detection limits and deliver related but different quantitative information, the measurement and analysis approaches need to be selected and adapted for the sample at hand. In this tutorial, we review the optical theory of single particle scattering and how it relates to the differences and similarities in the quantitative particle information obtained from commonly used microscopy techniques, with an emphasis on nanometric (submicron) sized dielectric particles. Particular emphasis is placed on how the optical signal relates to mass, size, structure, and material properties of the detected particles and to its combination with diffusivity-based particle sizing. We also discuss emerging opportunities in the wake of new technology development, with the ambition to guide the choice of measurement strategy based on various challenges related to different types of nanoparticle samples and associated analytical demands.
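The diffusivity-based particle sizing mentioned above rests on the Stokes–Einstein relation, D = kT / (6πηR_h). A minimal sketch follows; the temperature, viscosity, and example diffusion constant are illustrative values for water at 25 °C, not numbers from the tutorial.

```python
# Hedged sketch of diffusivity-based sizing via the Stokes-Einstein relation.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """Hydrodynamic radius (m) from diffusion constant D (m^2/s).

    Defaults assume water at 25 C (viscosity eta in Pa*s).
    """
    return K_B * T / (6.0 * math.pi * eta * D)

# A particle diffusing at ~4.9e-12 m^2/s in water corresponds to ~50 nm radius:
D = 4.9e-12
print(f"{hydrodynamic_radius(D) * 1e9:.1f} nm")
```

Because D enters inversely, small relative errors in the measured diffusion constant translate directly into comparable relative errors in the inferred radius, which is one reason combining it with scattering-based estimates is attractive.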

Deep-learning-powered data analysis in plankton ecology published in Limnology and Oceanography Letters

Segmentation of two plankton species using deep learning (N. scintillans in blue, D. tertiolecta in green). (Image by H. Bachimanchi.)
Deep-learning-powered data analysis in plankton ecology
Harshith Bachimanchi, Matthew I. M. Pinder, Chloé Robert, Pierre De Wit, Jonathan Havenhand, Alexandra Kinnby, Daniel Midtvedt, Erik Selander, Giovanni Volpe
Limnology and Oceanography Letters (2024)
doi: 10.1002/lol2.10392
arXiv: 2309.08500

The implementation of deep learning algorithms has brought new perspectives to plankton ecology. Emerging as an alternative approach to established methods, deep learning offers objective schemes to investigate plankton organisms in diverse environments. We provide an overview of deep-learning-based methods including detection and classification of phytoplankton and zooplankton images, foraging and swimming behavior analysis, and finally ecological modeling. Deep learning has the potential to speed up the analysis and reduce the human experimental bias, thus enabling data acquisition at relevant temporal and spatial scales with improved reproducibility. We also discuss shortcomings and show how deep learning architectures have evolved to mitigate imprecise readouts. Finally, we suggest opportunities where deep learning is particularly likely to catalyze plankton research. The examples are accompanied by detailed tutorials and code samples that allow readers to apply the methods described in this review to their own data.

Dual-Angle Interferometric Scattering Microscopy for Optical Multiparametric Particle Characterization published in Nano Letters

Conceptual schematic of dual-angle interferometric scattering microscopy (DAISY). (Image by the Authors of the manuscript.)
Dual-Angle Interferometric Scattering Microscopy for Optical Multiparametric Particle Characterization
Erik Olsén, Berenice García Rodríguez, Fredrik Skärberg, Petteri Parkkila, Giovanni Volpe, Fredrik Höök, and Daniel Sundås Midtvedt
Nano Letters, 24(6), 1874-1881 (2024)
doi: 10.1021/acs.nanolett.3c03539
arXiv: 2309.07572

Traditional single-nanoparticle sizing using optical microscopy techniques assesses size via the diffusion constant, which requires suspended particles to be in a medium of known viscosity. However, these assumptions are typically not fulfilled in complex natural sample environments. Here, we introduce dual-angle interferometric scattering microscopy (DAISY), enabling optical quantification of both size and polarizability of individual nanoparticles (radius <170 nm) without requiring a priori information regarding the surrounding media or super-resolution imaging. DAISY achieves this by combining the information contained in concurrently measured forward and backward scattering images through twilight off-axis holography and interferometric scattering (iSCAT). Going beyond particle size and polarizability, single-particle morphology can be deduced from the fact that the hydrodynamic radius relates to the outer particle radius, while the scattering-based size estimate depends on the internal mass distribution of the particles. We demonstrate this by differentiating biomolecular fractal aggregates from spherical particles in fetal bovine serum at the single-particle level.
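The morphology argument above can be caricatured in a few lines; this is an illustrative sketch only, not DAISY's analysis, and the 0.8 cutoff is an assumed value with no basis in the paper. If a scattering-based size estimate R_s reflects the internal mass distribution while the hydrodynamic radius R_h reflects the outer envelope, their ratio hints at how compact a particle is.

```python
# Hypothetical compact-vs-fractal labeling from two size estimates.
# The cutoff value is an assumption made for this illustration.

def morphology_hint(R_s, R_h, compact_cutoff=0.8):
    """Crude morphology label from the ratio of scattering-based size R_s
    to hydrodynamic radius R_h (both in meters)."""
    ratio = R_s / R_h
    return "compact" if ratio >= compact_cutoff else "fractal-like"

print(morphology_hint(95e-9, 100e-9))  # R_s close to R_h: compact sphere
print(morphology_hint(40e-9, 100e-9))  # R_s well below R_h: loose aggregate
```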

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion published in Nature Machine Intelligence

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
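To make the "linking coordinates into trajectories" task concrete, here is a classical greedy nearest-neighbour baseline; this is emphatically not the paper's graph-network method, and the distance cutoff and point coordinates are invented for illustration. Detections in consecutive frames are joined when closer than a cutoff, which is exactly the step that breaks down in the crowded scenes the geometric deep-learning approach targets.

```python
# Classical baseline for frame-to-frame linking (not the paper's method).
import math

def link_frames(frame_a, frame_b, max_dist=5.0):
    """Greedily pair points in frame_a with nearest unused points in frame_b.

    Returns a list of (index_in_a, index_in_b) links within max_dist.
    """
    links, used = [], set()
    for i, (xa, ya) in enumerate(frame_a):
        best, best_d = None, max_dist
        for j, (xb, yb) in enumerate(frame_b):
            if j in used:
                continue
            d = math.hypot(xb - xa, yb - ya)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links

print(link_frames([(0, 0), (10, 10)], [(1, 0), (10, 11), (50, 50)]))
# → [(0, 0), (1, 1)]; the point at (50, 50) is beyond the cutoff
```

Greedy matching fails when objects cross paths or appear and disappear; learning link probabilities over a graph of candidate edges is the paper's way around that.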

Single-shot self-supervised object detection in microscopy published in Nature Communications

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised object detection in microscopy
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Nature Communications 13, 7492 (2022)
arXiv: 2202.13546
doi: 10.1038/s41467-022-35004-y

Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
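The roto-translational-symmetry idea above can be illustrated with a toy detector; this is a minimal sketch of the consistency condition, not LodeSTAR itself, and the centroid "detector" and test image are assumptions made for this example. A position predictor should be equivariant: shifting the image by (dy, dx) must shift the prediction by (dy, dx), and self-supervised training enforces exactly this kind of agreement.

```python
# Toy demonstration of translational equivariance (not LodeSTAR itself).

def centroid(img):
    """Intensity-weighted centroid (y, x) of a 2D image given as nested lists."""
    total = sum(sum(row) for row in img)
    y = sum(i * sum(row) for i, row in enumerate(img)) / total
    x = sum(j * v for row in img for j, v in enumerate(row)) / total
    return y, x

def shift(img, dy, dx):
    """Cyclically shift an image by (dy, dx)."""
    h, w = len(img), len(img[0])
    return [[img[(i - dy) % h][(j - dx) % w] for j in range(w)] for i in range(h)]

img = [[0.0] * 5 for _ in range(5)]
img[1][2] = 1.0  # a single bright spot at (1, 2)

y0, x0 = centroid(img)
y1, x1 = centroid(shift(img, 2, 1))
print((y0, x0), (y1, x1))  # the prediction follows the shift: (3.0, 3.0)
```

Because the consistency condition is defined by the symmetry rather than by labels, a network can be trained on transformed copies of a single unlabeled image, which is the core of the single-shot approach.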

Recent eLife article on plankton tracking gets featured on Swedish national radio

Plankton imaged under a holographic microscope. (Illustration by J. Heuschele.)
The article Microplankton life histories revealed by holographic microscopy and deep learning was featured on Vetenskapsradion Nyheter (Science Radio News), operated by Sveriges Radio (Swedish national radio), on November 7, 2022.

The short audio feature, Hologram hjälper forskare att förstå plankton (“Holograms help researchers understand plankton”), which highlights the main results of the paper (in Swedish), is now available for public listening.

Vetenskapsradion Nyheter airs daily news, reports, and in-depth discussions about the latest research.

Press release on Microplankton life histories revealed by holographic microscopy and deep learning

Plankton imaged under a holographic microscope. (Illustration by J. Heuschele.)
The article Microplankton life histories revealed by holographic microscopy and deep learning has been featured in the news of University of Gothenburg (in English & Swedish) and in the press release of eLife (in English).

The study, now published in eLife and co-authored by researchers at the Soft Matter Lab of the Department of Physics at the University of Gothenburg, demonstrates how the combination of holographic microscopy and deep learning provides a strong complementary tool in marine microbial ecology. The approach enables quantitative assessment of microplankton feeding behaviours and of biomass increase throughout the cell cycle, from generation to generation.

The study is also featured in the eLife digest.

Here are the links to the press releases:
Researchers combine microscopy with AI to characterise marine microbial food web (eLife, English)
Holographic microscopy provides insights into the life of microplankton (GU, English)
Hologram ger insyn i planktonens liv (GU, Swedish)
The secret lives of microbes (eLife digest)