Roadmap on Deep Learning for Microscopy published in Journal of Physics: Photonics

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
Journal of Physics: Photonics 8, 012501 (2026)
arXiv: 2303.03793
doi: 10.1088/2515-7647/ae0fd1

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning (ML) are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap encompasses key aspects of how ML is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of ML for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Inchworm-Inspired Soft Robot with Groove-Guided Locomotion on ArXiv

Photograph of the soft robot, consisting of a multilayer rolled dielectric elastomer actuator integrated with a flexible PET sheet. (Image by H. P. Thanabalan.)
Inchworm-Inspired Soft Robot with Groove-Guided Locomotion
Hari Prakash Thanabalan, Lars Bengtsson, Ugo Lafont, Giovanni Volpe
arXiv: 2512.07813

Soft robots require directional control to navigate complex terrains. However, achieving such control often requires multiple actuators, which increases mechanical complexity, complicates control systems, and raises energy consumption. Here, we introduce an inchworm-inspired soft robot whose locomotion direction is controlled passively by patterned substrates. The robot employs a single rolled dielectric elastomer actuator, while groove patterns on a 3D-printed substrate guide its alignment and trajectory. Through systematic experiments, we demonstrate that varying groove angles enables precise control of locomotion direction without the need for complex actuation strategies. This groove-guided approach reduces energy consumption, simplifies robot design, and expands the applicability of bio-inspired soft robots in fields such as search and rescue, pipe inspection, and planetary exploration.

Enhanced spatial clustering of single-molecule localizations with graph neural networks published in Nature Communications

MIRO employs a recurrent graph neural network to refine SMLM point clouds by compressing clusters around their center, enhancing inter-cluster distinction and background separation for efficient clustering. (Image by J. Pineda.)
Enhanced spatial clustering of single-molecule localizations with graph neural networks
Jesús Pineda, Sergi Masó-Orriols, Montse Masoliver, Joan Bertran, Mattias Goksör, Giovanni Volpe and Carlo Manzo
Nature Communications 16, 9693 (2025)
arXiv: 2412.00173
doi: 10.1038/s41467-025-65557-7

Single-molecule localization microscopy generates point clouds corresponding to fluorophore localizations. Spatial cluster identification and analysis of these point clouds are crucial for extracting insights about molecular organization. However, this task becomes challenging in the presence of localization noise, high point density, or complex biological structures. Here, we introduce MIRO (Multifunctional Integration through Relational Optimization), an algorithm that uses recurrent graph neural networks to transform the point clouds in order to improve clustering efficiency when applying conventional clustering techniques. We show that MIRO supports simultaneous processing of clusters of different shapes and at multiple scales, demonstrating improved performance across varied datasets. Our comprehensive evaluation demonstrates MIRO’s transformative potential for single-molecule localization applications, showcasing its capability to revolutionize cluster analysis and provide accurate, reliable details of molecular architecture. In addition, MIRO’s robust clustering capabilities hold promise for applications in various fields such as neuroscience, for the analysis of neural connectivity patterns, and environmental science, for studying spatial distributions of ecological data.
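
To give a feel for the pipeline, here is a minimal, hypothetical sketch, not the authors' released code: MIRO's learned recurrent graph neural network transform is replaced by a hand-coded contraction that pulls each localization toward the mean of its nearest neighbours, after which a conventional clustering algorithm (here DBSCAN) is applied. All parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def contract_point_cloud(points, k=10, steps=5, rate=0.5):
    """Toy stand-in for MIRO's learned transform: iteratively pull each
    localization toward the mean of its k nearest neighbours, compressing
    clusters around their centres and sharpening the cluster/background
    contrast before conventional clustering."""
    pts = points.copy()
    for _ in range(steps):
        nbrs = NearestNeighbors(n_neighbors=k).fit(pts)
        _, idx = nbrs.kneighbors(pts)
        pts += rate * (pts[idx].mean(axis=1) - pts)
    return pts

# Synthetic SMLM-like point cloud: two dense clusters plus uniform background.
rng = np.random.default_rng(0)
points = np.concatenate([
    rng.normal([0.0, 0.0], 0.05, size=(200, 2)),
    rng.normal([1.0, 1.0], 0.05, size=(200, 2)),
    rng.uniform(-0.5, 1.5, size=(100, 2)),
])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(contract_point_cloud(points))
print(f"clusters found: {labels.max() + 1}; background points: {(labels == -1).sum()}")
```

The point of the transform is that, after contraction, clusters become much denser than the background, so a single global clustering threshold can separate structures that would otherwise require per-scale tuning.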

Myxococcus xanthus for active matter studies: a tutorial for its growth and potential applications published in Soft Matter

Myxococcus xanthus colonies develop different strategies to adapt to their environment, leading to the formation of macroscopic patterns from microscopic entities. (Image by the Authors of the manuscript.)
Tutorial for the growth and development of Myxococcus xanthus as a Model System at the Intersection of Biology and Physics
Jesus Manuel Antúnez Domínguez, Laura Pérez García, Natsuko Rivera-Yoshida, Jasmin Di Franco, David Steiner, Alejandro V. Arzola, Mariana Benítez, Charlotte Hamngren Blomqvist, Roberto Cerbino, Caroline Beck Adiels, Giovanni Volpe
Soft Matter 21, 8602-8623 (2025)
arXiv: 2407.18714
doi: 10.1063/5.0235449

Myxococcus xanthus is a unicellular organism known for its capacity to move and communicate, giving rise to complex collective properties, structures and behaviors. These characteristics have contributed to position M. xanthus as a valuable model organism for exploring emergent collective phenomena at the interface of biology and physics, particularly within the growing domain of active matter research. Yet, researchers frequently encounter difficulties in establishing reproducible and reliable culturing protocols. This tutorial provides a detailed and accessible guide to the culture, growth, development, and experimental sample preparation of M. xanthus. In addition, it presents several exemplary experiments that can be conducted using these samples, including motility assays, fruiting body formation, predation, and elasticotaxis—phenomena of direct relevance for active matter studies.

Video‐rate tunable colour electronic paper with human resolution published in Nature

High-resolution display of “The Kiss” on Retina E-Paper vs. iPhone 15: photographs comparing the display of “The Kiss” on an iPhone 15 and on the Retina E-Paper. The surface area of the Retina E-Paper is about 1/4000 of that of the iPhone 15. (Image by the Authors of the manuscript.)
Video‐rate tunable colour electronic paper with human resolution
Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Osterlund, Giovanni Volpe, Kunli Xiong
Nature 646, 1089-1095 (2025)
arXiv: 2502.03580
doi: 10.1038/s41586-025-09642-3

As demand for immersive experiences grows, displays are moving closer to the eye with smaller sizes and higher resolutions. However, shrinking pixel emitters reduce intensity, making them harder to perceive. Electronic papers utilize ambient light for visibility, maintaining optical contrast regardless of pixel size, but cannot achieve high resolution. We show electrically tunable meta-pixels down to ~560 nm in size (>45,000 PPI) consisting of WO3 nanodiscs, allowing one-to-one pixel-photodetector mapping on the retina when the display size matches the pupil diameter, which we call Retina Electronic Paper. Our technology also supports video display (25 Hz), high reflectance (~80%), and optical contrast (~50%), which will help create the ultimate virtual reality display.
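
As a quick sanity check of the quoted resolution (our arithmetic, not from the paper): a pixel pitch of ~560 nm corresponds to 25.4 mm / 560 nm ≈ 45,000 pixels per inch.

```python
# Convert the ~560 nm pixel pitch quoted in the abstract to pixels per inch.
pixel_pitch_m = 560e-9   # metres
inch_m = 25.4e-3         # metres per inch
print(f"{inch_m / pixel_pitch_m:,.0f} PPI")  # ~45,357 PPI, consistent with >45,000 PPI
```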

Microscopic Geared Metamachines published in Nature Communications

Top: a single gear. Bottom: the second gear from the right has an optical metamaterial that reacts to laser light and makes the gear move. All gears are made of silica directly on a chip. Each gear is about 0.016 mm in diameter. (Image by G. Wang.)
Microscopic Geared Metamachines
Gan Wang, Marcel Rey, Antonio Ciarlo, Mohammad Mahdi Shanei, Kunli Xiong, Giuseppe Pesce, Mikael Käll and Giovanni Volpe
Nature Communications 16, 7767 (2025)
arXiv: 2409.17284
doi: 10.1038/s41467-025-62869-6

The miniaturization of mechanical machines is critical for advancing nanotechnology and reducing device footprints. Traditional efforts to downsize gears and micromotors have faced limitations at around 0.1 mm for over thirty years due to the complexities of constructing drives and coupling systems at such scales. Here, we present an alternative approach utilizing optical metasurfaces to locally drive microscopic machines, which can then be fabricated using standard lithography techniques and seamlessly integrated on the chip, achieving sizes down to tens of micrometers with movements precise to the sub-micrometer scale. As a proof of principle, we demonstrate the construction of microscopic gear trains powered by a single driving gear with a metasurface activated by a plane light wave. Additionally, we develop a versatile pinion and rack micromachine capable of transducing rotational motion, performing periodic motion, and controlling microscopic mirrors for light deflection. Our on-chip fabrication process allows for straightforward parallelization and integration. Using light as a widely available and easily controllable energy source, these miniaturized metamachines offer precise control and movement, unlocking new possibilities for micro- and nanoscale systems.
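
For a rough sense of the motion scales involved (our illustration, not a calculation from the paper): a pinion meshing with a rack advances the rack by its pitch circumference per revolution, so a gear with the ~16 µm diameter quoted in the caption would translate the rack by roughly 50 µm per turn.

```python
import math

# Rack displacement per pinion revolution equals the pitch circumference.
# Treating the ~0.016 mm gear diameter from the caption as the pitch
# diameter is our simplifying assumption.
pinion_diameter_um = 16.0
print(f"rack advance per revolution: {math.pi * pinion_diameter_um:.1f} um")
```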

After the article was published, it was covered by many media outlets, including the University of Gothenburg, New Scientist, Optics.org, Phys.org, ScienceDaily, and Discover Magazine.

Featured in:
GU: Light powered motor fits inside a strand of hair.
New Scientist: Microscopic gears powered by light could be used to make tiny machines, and a video on YouTube.
Optics.org: University of Gothenburg makes micron-scale light-powered gears
Phys.org: New light-powered gear fits inside a strand of hair
ScienceDaily: Scientists build micromotors smaller than a human hair
Discover Magazine: The Smallest Motors in History Can Fit Inside a Strand of Hair
@DrBenMiles: Scientists Create Micro Machines Powered by Light

Roadmap for animate matter published in Journal of Physics: Condensed Matter

The three properties of animacy. The three polar plots sketch our jointly perceived level of development for each principle of animacy (i.e. activity, adaptiveness and autonomy) for each system discussed in this roadmap. The polar coordinate represents the various systems, while the radial coordinate represents the level of development (from low to high) that each system shows in the principle of each polar plot. Ideally, within a generation, all systems will fill these polar plots to show high levels in each of the three attributes of animacy. For now, only biological materials (not represented here) can be considered fully animated. (Image from the manuscript, adapted.)
Roadmap for animate matter
Giorgio Volpe, Nuno A M Araújo, Maria Guix, Mark Miodownik, Nicolas Martin, Laura Alvarez, Juliane Simmchen, Roberto Di Leonardo, Nicola Pellicciotta, Quentin Martinet, Jérémie Palacci, Wai Kit Ng, Dhruv Saxena, Riccardo Sapienza, Sara Nadine, João F Mano, Reza Mahdavi, Caroline Beck Adiels, Joe Forth, Christian Santangelo, Stefano Palagi, Ji Min Seok, Victoria A Webster-Wood, Shuhong Wang, Lining Yao, Amirreza Aghakhani, Thomas Barois, Hamid Kellay, Corentin Coulais, Martin van Hecke, Christopher J Pierce, Tianyu Wang, Baxi Chong, Daniel I Goldman, Andreagiovanni Reina, Vito Trianni, Giovanni Volpe, Richard Beckett, Sean P Nair, Rachel Armstrong
Journal of Physics: Condensed Matter 37, 333501 (2025)
arXiv: 2407.10623
doi: 10.1088/1361-648X/adebd3

Humanity has long sought inspiration from nature to innovate materials and devices. As science advances, nature-inspired materials are becoming part of our lives. Animate materials, characterized by their activity, adaptability, and autonomy, emulate properties of living systems. While only biological materials fully embody these principles, artificial versions are advancing rapidly, promising transformative impacts in the circular economy, health and climate resilience within a generation. This roadmap presents authoritative perspectives on animate materials across different disciplines and scales, highlighting their interdisciplinary nature and potential applications in diverse fields including nanotechnology, robotics and the built environment. It underscores the need for concerted efforts to address shared challenges such as complexity management, scalability, evolvability, interdisciplinary collaboration, and ethical and environmental considerations. The framework defined by classifying materials based on their level of animacy can guide this emerging field to encourage cooperation and responsible development. By unravelling the mysteries of living matter and leveraging its principles, we can design materials and systems that will transform our world in a more sustainable manner.

Quantitative evaluation of methods to analyze motion changes in single-particle experiments published in Nature Communications

Rationale for the challenge organization. The interactions of biomolecules in complex environments, such as the cell membrane, regulate physiological processes in living systems. These interactions produce changes in molecular motion that can be used as a proxy to measure interaction parameters. Time-lapse single-molecule imaging allows us to visualize these processes with high spatiotemporal resolution and, in combination with single-particle tracking methods, provide trajectories of individual molecules. (Image by the Authors of the manuscript.)
Quantitative evaluation of methods to analyze motion changes in single-particle experiments
Gorka Muñoz-Gil, Harshith Bachimanchi, Jesús Pineda, Benjamin Midtvedt, Gabriel Fernández-Fernández, Borja Requena, Yusef Ahsini, Solomon Asghar, Jaeyong Bae, Francisco J. Barrantes, Steen W. B. Bender, Clément Cabriel, J. Alberto Conejero, Marc Escoto, Xiaochen Feng, Rasched Haidari, Nikos S. Hatzakis, Zihan Huang, Ignacio Izeddin, Hawoong Jeong, Yuan Jiang, Jacob Kæstel-Hansen, Judith Miné-Hattab, Ran Ni, Junwoo Park, Xiang Qu, Lucas A. Saavedra, Hao Sha, Nataliya Sokolovska, Yongbing Zhang, Giorgio Volpe, Maciej Lewenstein, Ralf Metzler, Diego Krapf, Giovanni Volpe, Carlo Manzo
Nature Communications 16, 6749 (2025)
arXiv: 2311.18100
doi: 10.1038/s41467-025-61949-x

The analysis of live-cell single-molecule imaging experiments can reveal valuable information about the heterogeneity of transport processes and interactions between cell components. These characteristics are seen as motion changes in the particle trajectories. Despite the existence of multiple approaches to carry out this type of analysis, no objective assessment of these methods has been performed so far. Here, we report the results of a competition to characterize and rank the performance of these methods when analyzing the dynamic behavior of single molecules. To run this competition, we implemented a software library that simulates realistic data corresponding to widespread diffusion and interaction models, both in the form of trajectories and videos obtained in typical experimental conditions. The competition constitutes the first assessment of these methods, providing insights into the current limitations of the field, fostering the development of new approaches, and guiding researchers to identify optimal tools for analyzing their experiments.
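
The competition data were generated with the authors' simulation library; as a minimal illustration of the kind of data involved (not the challenge code itself), the snippet below simulates a 2D trajectory whose diffusion coefficient switches at a random changepoint, the basic signature of a motion change that the competing methods had to detect.

```python
import numpy as np

def trajectory_with_motion_change(n_steps=200, d1=0.01, d2=0.1, dt=0.1, seed=0):
    """Simulate a 2D Brownian trajectory whose diffusion coefficient
    switches from d1 to d2 at a random changepoint (illustrative only;
    units and parameter values are arbitrary)."""
    rng = np.random.default_rng(seed)
    changepoint = rng.integers(n_steps // 4, 3 * n_steps // 4)
    d = np.where(np.arange(n_steps) < changepoint, d1, d2)
    # Each displacement has standard deviation sqrt(2 D dt) per coordinate.
    steps = rng.normal(0.0, np.sqrt(2 * d * dt)[:, None], size=(n_steps, 2))
    return np.cumsum(steps, axis=0), changepoint

traj, cp = trajectory_with_motion_change()
print(f"changepoint at step {cp}; trajectory shape {traj.shape}")
```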

Deep-Learning Investigation of Vibrational Raman Spectra for Plant-Stress Analysis on ArXiv

In this work, we present an unsupervised deep learning framework using Variational Autoencoders (VAEs) to decode stress-specific biomolecular fingerprints directly from Raman spectral data across multiple plant species and genotypes. (Image by the Authors of the manuscript. A part of the image was designed using Biorender.com.)
From Spectra to Stress: Unsupervised Deep Learning for Plant Health Monitoring
Anoop C. Patil, Benny Jian Rong Sng, Yu-Wei Chang, Joana B. Pereira, Chua Nam-Hai, Rajani Sarojam, Gajendra Pratap Singh, In-Cheol Jang, and Giovanni Volpe
arXiv: 2507.15772

Detecting stress in plants is crucial for both open-farm and controlled-environment agriculture. Biomolecules within plants serve as key stress indicators, offering vital markers for continuous health monitoring and early disease detection. Raman spectroscopy provides a powerful, non-invasive means to quantify these biomolecules through their molecular vibrational signatures. However, traditional Raman analysis relies on customized data-processing workflows that require fluorescence background removal and prior identification of Raman peaks of interest, introducing potential biases and inconsistencies. Here, we introduce DIVA (Deep-learning-based Investigation of Vibrational Raman spectra for plant-stress Analysis), a fully automated workflow based on a variational autoencoder. Unlike conventional approaches, DIVA processes native Raman spectra, including fluorescence backgrounds, without manual preprocessing, identifying and quantifying significant spectral features in an unbiased manner. We applied DIVA to detect a range of plant stresses, including abiotic stressors (shading, high light intensity, high temperature) and biotic stressors (bacterial infections). By integrating deep learning with vibrational spectroscopy, DIVA paves the way for AI-driven plant health assessment, fostering more resilient and sustainable agricultural practices.
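
To make the variational-autoencoder idea concrete, here is a minimal sketch of a VAE for 1D spectra in PyTorch. All sizes (spectrum length, layer widths, latent dimension) are illustrative assumptions, not the DIVA configuration.

```python
import torch
import torch.nn as nn

class SpectraVAE(nn.Module):
    """Minimal VAE for 1D spectra; all layer sizes are illustrative."""
    def __init__(self, n_channels=1024, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_channels),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior.
    recon_err = ((recon - x) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(dim=-1)).mean()
    return recon_err + kl

model = SpectraVAE()
spectra = torch.rand(8, 1024)  # stand-in batch of raw (unpreprocessed) spectra
recon, mu, logvar = model(spectra)
print(vae_loss(spectra, recon, mu, logvar).item())
```

Because the latent space is trained to reconstruct the full native spectrum, fluorescence background and Raman peaks are encoded jointly, which is what removes the need for manual background subtraction and peak picking.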

Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics on ArXiv

Automated segmentation of bacterial structures within a droplet. The image shows a bright-field microscopy view where a large biofilm region (green, outlined in blue) has been segmented from surrounding features. Small aggregates (yellow contours) are also highlighted. This segmentation enables structural differentiation of biofilm components for downstream quantitative analysis. (Image by D. Pérez Guerrero.)
Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics
Daniela Pérez Guerrero, Jesús Manuel Antúnez Domínguez, Aurélie Vigne, Daniel Midtvedt, Wylie Ahmed, Lisa D. Muiznieks, Giovanni Volpe, Caroline Beck Adiels
arXiv: 2507.07632

Bacterial biofilms play a significant role in various fields that impact our daily lives, from detrimental public health hazards to beneficial applications in bioremediation, biodegradation, and wastewater treatment. However, high-resolution tools for studying their dynamic responses to environmental changes and collective cellular behavior remain scarce. To characterize and quantify biofilm development, we present a droplet-based microfluidic platform combined with an image analysis tool for in-situ studies. In this setup, Bacillus subtilis was inoculated in liquid Lysogeny Broth microdroplets, and biofilm formation was examined within emulsions at the water-oil interface. Bacteria were encapsulated in droplets, which were then trapped in compartments, allowing continuous optical access throughout biofilm formation. Droplets, each forming a distinct microenvironment, were generated at high throughput using flow-controlled pressure pumps, ensuring monodispersity. A microfluidic multi-injection valve enabled rapid switching of encapsulation conditions without disrupting droplet generation, allowing side-by-side comparison. Our platform supports fluorescence microscopy imaging and quantitative analysis of droplet content, along with time-lapse bright-field microscopy for dynamic observations. To process high-throughput, complex data, we integrated an automated, unsupervised image analysis tool based on a Variational Autoencoder (VAE). This AI-driven approach efficiently captured biofilm structures in a latent space, enabling detailed pattern recognition and analysis. Our results demonstrate the accurate detection and quantification of biofilms using thresholding and masking applied to latent space representations, enabling the precise measurement of biofilm and aggregate areas.
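
The final quantification step can be pictured with a short sketch (ours, not the paper's code): threshold a per-pixel latent feature map, here simulated with random values standing in for a VAE encoder output, to obtain a binary biofilm mask, then convert pixel counts to a physical area. The threshold and pixel size are illustrative assumptions.

```python
import numpy as np

# Stand-in for a per-pixel latent feature map produced by a VAE encoder
# applied to a bright-field droplet image (random values for illustration).
rng = np.random.default_rng(1)
latent_map = rng.random((256, 256))

threshold = 0.9        # illustrative cut-off in latent-feature units
pixel_size_um = 0.65   # assumed microns per pixel
mask = latent_map > threshold
area_um2 = mask.sum() * pixel_size_um**2
print(f"segmented area: {area_um2:.1f} um^2 ({mask.mean():.1%} of the image)")
```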